I am just a medium, SQL Server the Goal

SQL Server Record Structures–Part 2

In the first part of this series, I discussed the regular record structure used in SQL Server. In this post I will talk about a special case, Row Forwarding, and how it affects record structures in SQL Server.

In order to understand Row Forwarding, it's important to first understand how non-clustered indexes created on a heap table work.

In the case of heap tables, the leaf pages of a non-clustered index store the RID along with the index key values. The RID is a combination of FileID:PageID:SlotID identifying a physical record in the table, and it is what links the non-clustered index to the heap table during a scan or a seek. Consider the following example.

Assume we have a heap table with the following columns (Col1, Col2, Col3, Col4, Col5). Also assume there is a non-clustered index on the column (col1).

The index leaf page would have a similar structure (this is just an illustration!)

Index Key (Col1)  RID
1 222:0
2 222:1
3 222:2
4 222:3
5 222:4
6 222:5
7 222:6
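To make the lookup concrete, here is a toy Python sketch of how a seek on the non-clustered index resolves the RID into the heap (the page and slot numbers are made up for illustration):

```python
# Toy model: a heap is addressed by RID (page_id, slot_id); the NCI leaf
# maps index key values to RIDs. All identifiers here are invented.
heap = {
    ("222", 0): {"Col1": 1, "Col2": "b0", "Col3": "c0"},
    ("222", 1): {"Col1": 2, "Col2": "b1", "Col3": "c1"},
}

nci_leaf = {1: ("222", 0), 2: ("222", 1)}  # index key -> RID

def seek(col1_value):
    """Seek on Col1 via the NCI, then follow the RID into the heap."""
    rid = nci_leaf[col1_value]   # index seek: key -> RID
    row = heap[rid]              # RID lookup: fetch the data record
    return (row["Col1"], row["Col2"], row["Col3"])

print(seek(2))  # -> (2, 'b1', 'c1')
```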


Now assume we run the following query against the table:

Select Col1, Col2, Col3 from Table1 

where Col1 = SomeValue

This query can potentially use the non-clustered index (depending on whether the cost of using the index is less than that of a table scan). If the query uses the NCI, it can get the values of Col1 quite easily, but for the values of Col2 and Col3 it has to use the RID value to reach the actual data record in the table and read the values from there.

This operation in SQL is called the Bookmark Lookup (SQL 2000) or the RID Lookup (SQL 2005 and onwards).

Now assume we have a table with some variable-length columns. When the values in the variable-length columns are updated, the update might increase the size of the record. SQL Server might not be able to fit the larger record on the same page, and so it has to move the record to a different page.

Now if there were a non-clustered index on this table, the non-clustered index would have to be modified to reflect the movement of the rows. This would make update operations very expensive.

So instead of having to update the non-clustered index, SQL Server creates a pointer/stub at the original location of the record, pointing to the new address of the record. This way, when the NCI scan or seek reaches the record, it simply reads the pointer record and reaches the new location of the row. These pointers are called forwarding pointers/records, and the actual record is called the forwarded record.

The same record can be modified multiple times, and each move could require a new forwarded record, potentially leading to a chain of forwarding/forwarded records. In reality this does not happen. The forwarded record also contains a back-pointer to the forwarding record, so when multiple changes are made to the record, the engine just takes the new location of the record and updates the original forwarding record to point to this new location.
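The no-chain behaviour can be sketched in a few lines of Python (a toy model; the slot addresses are invented):

```python
# Toy model: a heap slot either holds a record, or a forwarding stub
# pointing at the record's new location.
pages = {
    "228:1": {"kind": "record", "data": "v1", "back": None},
}

def move_record(old_addr, new_addr, new_data):
    """Move a record, maintaining the forwarding stub and back-pointer."""
    rec = pages[old_addr]
    if rec["kind"] == "forwarding_stub":
        # Already forwarded once: follow the stub, drop the intermediate
        # location, and re-point the ORIGINAL stub. No chain ever forms.
        fwd_addr = rec["target"]
        original = pages[fwd_addr]["back"]   # back-pointer to the stub
        del pages[fwd_addr]
        pages[new_addr] = {"kind": "record", "data": new_data, "back": original}
        pages[original]["target"] = new_addr
    else:
        # First move: leave a forwarding stub behind.
        pages[old_addr] = {"kind": "forwarding_stub", "target": new_addr}
        pages[new_addr] = {"kind": "record", "data": new_data, "back": old_addr}

move_record("228:1", "280:0", "v2")   # first move: stub left at 228:1
move_record("228:1", "281:0", "v3")   # second move: stub updated, not chained
```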

Row forwarding is bad for performance, since every lookup that hits a forwarding pointer needs an extra page read. Also, row forwarding only happens in heap tables.

Example: Examining the Forwarding Record and Forwarded Record

CREATE TABLE ForwardingRecord
(
Col1 int NOT NULL,
Col2 char(1000) NOT NULL,
Col3 varchar(3000) NULL,
Col5 varchar(4100) NOT NULL
)

Insert into ForwardingRecord values (1, Replicate('a',1000), replicate('b',1000),replicate('b',1000))

Insert into ForwardingRecord values (2, Replicate('a',1000), replicate('b',1000),replicate('b',1000))


DBCC IND('DatabaseName','ForwardingRecord',-1)


DBCC PAGE(10,1,228,3)

The output of DBCC PAGE would show that there are two records on the page, each about 3017 bytes in size.
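The 3017-byte figure checks out against the regular record format described in Part 1 of this series (a quick back-of-the-envelope sum in Python):

```python
import math

# Pieces of one ForwardingRecord row, per the regular record format:
header        = 2                 # record status / header bytes
fixed_len_fld = 2                 # length-of-fixed-length-portion field
fixed_data    = 4 + 1000          # Col1 int + Col2 char(1000)
n_cols_fld    = 2                 # number-of-columns field
null_bitmap   = math.ceil(4 / 8)  # 1 bit per column, rounded up to a byte
n_var_fld     = 2                 # number of variable-length columns
var_offsets   = 2 * 2             # 2-byte ending offset per varchar
var_data      = 1000 + 1000       # Col3 and Col5 values (1000 bytes each)

total = (header + fixed_len_fld + fixed_data + n_cols_fld
         + null_bitmap + n_var_fld + var_offsets + var_data)
print(total)  # -> 3017
```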

Now let's update the second record in such a way that it no longer fits on the page, forcing SQL Server to move it.

Update ForwardingRecord set Col5 = Replicate('v', 4100) where col1=2

Dumping the same page again and looking at Slot 1 (where the original record was), we see:


As can be seen, the forwarding record has the information about where the forwarded record now exists. Let's now dump page 280 and check the record.


As can be seen, the forwarded record has information about the forwarding record. As Paul Randal mentions in his blog, the back-pointer is 10 bytes in size.

SQL Server Record Structures–Part 1

Recently while talking about the SQL Server Storage Engine at one of the customer workshops, I was asked how data is actually stored in tables and indexes. With the exception of Column Store Indexes (introduced in SQL Server 2012), data in SQL Server is stored in rows.

A data row in SQL Server can be stored in one of 3 formats, depending on which SQL Server features are enabled in the database.

  • Regular Data Record: The regular data row structure that has been in use since the early days of SQL Server.
  • Compression Row Structure: Used when Row/Page compression is enabled on the table.
  • Sparse Column Row Structure: Used when sparse columns are defined in the table.

Index records come in two formats:

  • Leaf record structure
  • Non-Leaf record structure

Additionally, SQL Server has a separate type of record structure to store LOB data. LOB data can be stored either as

  • Off Row Data
  • Row-overflow data

In addition to the above-mentioned record structures, SQL Server also has versioning records (for data, index, and text records) used when row versioning is enabled or by some internal SQL Server features.

In this first part of the blog series I will talk about the regular data row structure as it has been used since the early days of SQL Server; the record structures for compression and sparse columns will be discussed in later posts. A data record lives in a heap table (a table with no clustered index) or at the leaf level of a clustered index, and is composed of the values of all the columns in the table. Another important point to keep in mind is that in SQL Server the terms row, record, and slot are synonymous and used interchangeably.


The record can be divided into 8 sections as depicted in the picture above.

  1. 2 Bytes Record Status (Record header): Currently only 9 bits of these are used. The header contains information about what kind of record it is, whether it is a versioned or ghosted record, and so on. For more information on these 2 bytes refer to Microsoft® SQL Server® 2008 Internals by Kalen Delaney, Paul S. Randal, Kimberly L. Tripp, Conor Cunningham, Adam Machanic.
  2. The second 2 bytes (Length of Fixed Length Columns) indicate the total length of the fixed-length columns (int, date, float, money, char, etc.).
  3. The 3rd section of N bytes is the actual storage for the fixed-length columns. Yes, you got it right: SQL Server stores all the fixed-length columns together, irrespective of where they are defined in the table.
  4. 2 Bytes for Number of Columns – 2 bytes are used to store the number of columns in the record. In the default scenario (no sparse columns) a table in SQL 2008 can have up to 1024 columns, which needs more than a single byte to represent.
  5. Null Bitmap (1 bit per column in the table): 1 bit per column indicates whether that column's value in this row is NULL. This is done for all the columns in the table.
  6. 2 bytes – Number of variable length columns: A count of the variable-length columns in the record.
  7. Variable length column offsets (2 bytes per variable-length column): Stores the ending offset of each variable-length column, which allows SQL Server to efficiently calculate the start, end, and length of each variable-length column. 2 bytes are required because the offset can point anywhere on the 8KB page.
  8. N bytes – Variable length column storage: Storage for the variable-length column values.
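These eight sections can be summed with a small Python helper to predict a record's size (a sketch of the regular format only; it ignores forwarded records, versioning, and other special cases):

```python
import math

def record_size(fixed_bytes, n_cols, var_value_bytes):
    """Sum the eight sections of a regular data record (sizes in bytes)."""
    size = 2                          # 1. record status / header
    size += 2                         # 2. length of fixed-length portion
    size += fixed_bytes               # 3. fixed-length column values
    size += 2                         # 4. number of columns
    size += math.ceil(n_cols / 8)     # 5. null bitmap: 1 bit per column
    if var_value_bytes:               # 6-8 present only with variable columns
        size += 2                     # 6. count of variable-length columns
        size += 2 * len(var_value_bytes)  # 7. 2-byte ending offset per column
        size += sum(var_value_bytes)  # 8. variable-length column values
    return size

# Four fixed-length columns (int + float + datetime + char(25)), no varchars:
print(record_size(4 + 8 + 8 + 25, 4, []))  # -> 52
```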

To understand this further, let's take a few examples. In all the examples below, the following DBCC PAGE/DBCC TRACEON commands will be used to display the page/record structure.

Example 1: Table With All fixed length columns

Consider a SQL Table with the following structure

create table FixedLenghtColumns
(
col1 int,
col2 float,
col3 datetime,
col4 char(25)
)

-- Insert a record into the table

Insert into FixedLenghtColumns values (1, 1.0001, getdate(), 'Fixed Length Columns')


-- DBCC IND Command to display the allocations for this table

DBCC IND('Database Name','FixedLenghtColumns',-1)


-- DBCC PAGE Command to display the Page structure

DBCC TRACEON (3604,-1)

DBCC PAGE (10, 1, 276, 3)

Let's look at the page and record structure from the DBCC PAGE output.


The fields marked in Red boxes are important.

pminlen – Indicates the minimum length of a record in the table. This table has 4 fixed-length columns with a total length of 45 bytes; add the 2-byte record header and the 2 bytes for the length of the fixed-length columns, and pminlen comes to 49.

m_slotCnt: This field indicates the number of slots (records) on the page.

Record Size/Record Attributes: Indicate the actual size of this record and the attributes it has. For example, this record contains a NULL_BITMAP field.

The “Slot 0 Offset 0x60 Length 52” section indicates that this is the first record on the page, and that the record starts at offset 0x60 (decimal 96), which is expected since the page header occupies the first 96 bytes of a SQL Server page. Please note all offsets start at 0.

Now let's see why the record occupies 52 bytes. The first 4 bytes of the record are fixed: the record header and the length of the fixed-length columns. Next, the total length of the fixed-length columns is 45 bytes. Then we need 2 bytes for the number of columns and 1 byte for the null bitmap (4 bits for the four columns, but since allocation is in whole bytes, we need 1 byte). This adds up to 52 bytes, which is visible in the record details of the DBCC PAGE output.


As can be seen, the first column value starts at offset 0x4 (decimal 4), which is just after the first 4 bytes for the record header and the length of the fixed-length columns. The other column values follow a similar pattern.

Example 2: Tables with mix of variable and fixed length columns

Let's consider another example, where the table contains a mix of fixed and variable-length data types.

CREATE TABLE MixDataTypeColumns
(
Col1 int NOT NULL,
Col2 char(25) NOT NULL,
Col3 varchar(60) NULL,
Col4 money NOT NULL,
Col5 varchar(20) NOT NULL
)

Insert into MixDataTypeColumns values (10, 'Sourabh Kumar Agarwal', 'This is a test', 3764.846, 'last column')


DBCC IND('DatabaseName','MixDataTypeColumns',-1)


DBCC PAGE(10,1,274,3)

The output of the DBCC PAGE indicates the following…


The record length here is 75 bytes, which can be summed up as follows:

2 bytes record header + 2 bytes for the length of the fixed-length columns + 37 bytes for the fixed-length column values + 2 bytes for the number of columns + 1 byte (5 bits) for the NULL_BITMAP + 2 bytes for the number of variable-length columns + 2*2 bytes for the variable-length column offsets + 25 bytes for the variable-length column values.
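The same arithmetic as a quick Python check:

```python
# Summing the pieces listed above for the MixDataTypeColumns row:
record_length = (
    2          # record header
    + 2        # length of the fixed-length portion
    + 37       # fixed-length values: int (4) + char(25) + money (8)
    + 2        # number of columns
    + 1        # null bitmap: 5 bits rounded up to a byte
    + 2        # number of variable-length columns
    + 2 * 2    # 2-byte ending offsets for the two varchars
    + 14 + 11  # 'This is a test' and 'last column'
)
print(record_length)  # -> 75
```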

Also, as visible from the column offset information in the snapshot above, the fixed-length columns are stored first (notice the column offsets) and then the variable-length columns:

  • Col1 (first fixed length column) – 0x4 – decimal 4
  • Col2 (second fixed length column) – 0x8 – decimal 8
  • Col4 (last fixed length column) – 0x21 – decimal 33
  • Col3 (first variable length column) – 0x32 – decimal 50
  • Col5 (second variable length column) – 0x40 – decimal 64

In the next post, I will talk about 3 special cases involving versioning, ghost records and row forwarding.

Sending SQL Notification to C# Application using Service Broker

In one of my earlier posts on CDC, I was asked whether we can send CDC notifications to a C# application. The ensuing post is an attempt to answer that question. It does not specifically talk about sending CDC notifications, but gives general guidance on how Service Broker can be leveraged to send notifications to an external application.

In order to send the notification to an external application using Service Broker, we need the SQL Server Service Broker External Activator, which is distributed as part of the SQL Server Feature Pack (available for 2005/2008/2008 R2/2012).

From a workflow perspective, the following events take place.


The application event or service communicates with the notification service (event notification). The External Activator service continuously listens on the notification queue; when it encounters a message in the queue, it invokes the external application, which performs the required operation.

Once installed, the External Activator runs as a Windows (NT) service.


Before the External Activator can be configured, the event notification mechanism needs to be created in SQL Server. This event notification is fired on the QUEUE_ACTIVATION event of the application queue. In this example, I have implemented an async trigger, which uses Service Broker to activate an external application.

Setting up the Async Trigger Code

Use master
go

create database SSBEA
go

-- Enable Trustworthy on the database
ALTER DATABASE SSBEA SET TRUSTWORTHY ON
go

-- Switch to the new database (statement reconstructed)
Use SSBEA
go

/***************************** SET UP Application Service and Queue **********************************************/

--Create Message type for Communication
CREATE MESSAGE TYPE [MyMessage]
VALIDATION = NONE

-- Create a Contract for SSB communication

CREATE CONTRACT [MyContract]
        ([MyMessage] SENT BY ANY)


-- Create a Queue for the Notification

Create Queue MySSBEAQueue

With Status = ON,Retention = off, Poison_Message_Handling (Status = off) 



-- Create a Service for the Notification

Create Service MySSBEAService
    On Queue MySSBEAQueue
    ([MyContract])


/****************** Create a Sample Table using which we would send messages to the queue *****************/


Create table test_SSB(a int, b int)


-- Create a Trigger on this table which would invoke the SSB Service and post messages to the Queue.

alter trigger SSB_trigger
on test_SSB
for insert
as
begin

declare @h as uniqueIdentifier
declare @message Varchar(1000)

-- Note: this handles a single-row insert
declare @test1 int, @test2 int
select @test1=a, @test2=b from INSERTED

Set @message = 'The value Inserted in the table were ' + Cast(@test1 as varchar(10)) + ' and ' + Cast(@test2 as varchar(10));

-- Begin a dialog and send the message
-- (the BEGIN DIALOG/SEND statements below are reconstructed)
begin dialog conversation @h
from service [MySSBEAService]
to service 'MySSBEAService'
ON CONTRACT [MyContract]
with encryption = off;

print @h

send on conversation @h
message type [MyMessage] (@message);

IF EXISTS(select * from sys.conversation_endpoints
                        where conversation_handle=@h and state='ER')
begin
        RAISERROR ('Service Broker in error state',18,127)
        rollback transaction
end
else
begin
        end conversation @h
        print 'Clean Up Completed'
end

end

Next we need to set up the objects for event notification.

--- Create the Queues for Notification

CREATE QUEUE MyNotificationQueue 


-- create event notification service


CREATE SERVICE MyNotificationService
      ON QUEUE MyNotificationQueue
      ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])

-- Create an Event Notification to allow the EA to listen for notifications
-- (the notification name below is reconstructed)
CREATE EVENT NOTIFICATION MyEventNotification
ON QUEUE MySSBEAQueue
FOR QUEUE_ACTIVATION
TO SERVICE 'MyNotificationService', 'current database'


Once we have set up the event notification, we need to create the external application as per our requirements. In my case the external application takes the message from the queue and inserts it into a secondary table.

using System;
using System.Collections.Generic;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Diagnostics;
using System.IO;

namespace MyMessageApplication
{
    class Program
    {
        /// <summary>
        /// Application name used when connecting to Sql Server and writing Windows Event Log entries.
        /// </summary>
        private const string ApplicationName = "MyMessageApplication";

        /// <summary>
        /// Defines how many milliseconds the <c>RECEIVE</c> statement should wait for messages.
        /// </summary>
        private const int WaitforTimeout = 5000;

        /// <summary>
        /// Predefined Sql Server message type name for end dialog messages.
        /// </summary>
        private const string EndDialogMessageType = "http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog";

        /// <summary>
        /// Predefined Sql Server message type name for conversation error messages.
        /// </summary>
        private const string ErrorMessageType = "http://schemas.microsoft.com/SQL/ServiceBroker/Error";

        /// <summary>
        /// Appends a timestamped message to the application log file.
        /// </summary>
        public static void LogMessageToFile(string message)
        {
            StreamWriter sw = File.AppendText(@"D:\SQLDataFiles\MyMessageApplicationLog.txt");
            string entry = String.Format("{0:G}: {1}.", System.DateTime.Now, message);
            sw.WriteLine(entry);
            sw.Close();
        }

        static void Main()
        {
            SqlConnectionStringBuilder csb = new SqlConnectionStringBuilder();
            csb.ApplicationName = ApplicationName;
            csb.DataSource = "<ServerName>";
            csb.InitialCatalog = "SSBEA";
            csb.UserID = "UserAccount";
            csb.Password = "Password for the User Account above";
            csb.MultipleActiveResultSets = true;

            try
            {
                SqlConnection conn = new SqlConnection(csb.ToString());
                conn.Open();

                bool MsgRcv;

                do
                {
                    MsgRcv = false;
                    SqlTransaction tran = conn.BeginTransaction();
                    using (tran)
                    {
                        SqlCommand comm = conn.CreateCommand();
                        comm.Transaction = tran;
                        //LogMessageToFile("Querying from the Queue");
                        comm.CommandText = string.Format(
                            "WAITFOR (RECEIVE TOP (1) conversation_handle, message_type_name, convert(xml,message_body) FROM {0}), TIMEOUT {1}",
                            "MySSBEAQueue", WaitforTimeout);
                        SqlDataReader reader = comm.ExecuteReader();

                        while (reader.Read())
                        {
                            MsgRcv = true;
                            System.Data.SqlTypes.SqlGuid handle = reader.GetSqlGuid(0);
                            System.Data.SqlTypes.SqlString TypeName = reader.GetSqlString(1);
                            System.Data.SqlTypes.SqlXml message = reader.GetSqlXml(2);
                            if (TypeName == EndDialogMessageType || TypeName == ErrorMessageType)
                            {
                                if (TypeName == ErrorMessageType)
                                {
                                    LogMessageToFile("Error message received: " + message.Value);
                                }

                                // End the conversation for both EndDialog and Error messages.
                                SqlCommand endconv = conn.CreateCommand();
                                endconv.Transaction = tran;
                                endconv.CommandText = "End Conversation @handle";
                                endconv.Parameters.Add("@handle", System.Data.SqlDbType.UniqueIdentifier);
                                endconv.Parameters["@handle"].Value = handle;
                                endconv.ExecuteNonQuery();
                            }
                            else
                            {
                                SqlCommand com = conn.CreateCommand();
                                //LogMessageToFile("Inserting into the table");
                                com.CommandText = "Insert into SSBEA.dbo.ReceievedMassage(RecievedMessage) values (convert(varchar(1000),@payload))";
                                com.Parameters.Add("@payload", System.Data.SqlDbType.Xml);
                                com.Parameters["@payload"].Value = message;
                                com.Transaction = tran;
                                com.ExecuteNonQuery();
                                //LogMessageToFile("Inserted this message to the table:- " + message.Value);
                            }
                        }
                        reader.Close();
                        tran.Commit();
                    }
                } while (MsgRcv);
            }
            catch (Exception e)
            {
                LogMessageToFile(e.Message);
            }
        }
    }
}
Once the application has been created, the last step would be to configure the EA.

In order for the EA to listen for notifications, it needs to be configured using the configuration file that is created as part of the installation.


The above snapshot of the configuration file shows the important parameters which need to be specified for the EA to function properly.

  • Notification Service Name: Name of the service which would be used for notification.
  • Connection String: Information about the server/database on which the service and the queues are created.
  • Application Name: Name of the application which would be invoked as part of external activation.
  • On Notification: The four-part name of the queue on which to listen for notifications.
  • Launch Info: The executable path for the application which would be launched as part of the notification. The Concurrency section indicates how many instances of the application can be invoked in case multiple messages arrive in the queue.
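For reference, a trimmed sketch of what the configuration file looks like (element names quoted from memory of the sample EAService.config shipped with the External Activator, so verify them against your installed copy; all server, path, and queue names below are placeholders):

```xml
<Activator>
  <NotificationServiceList>
    <NotificationService name="MyNotificationService" id="100" enabled="true">
      <ConnectionString>
        <Unencrypted>server=MyServer;database=SSBEA;Application Name=External Activator;Integrated Security=true;</Unencrypted>
      </ConnectionString>
    </NotificationService>
  </NotificationServiceList>
  <ApplicationServiceList>
    <ApplicationService name="MyMessageApplication" enabled="true">
      <OnNotification>
        <ServerName>MyServer</ServerName>
        <DatabaseName>SSBEA</DatabaseName>
        <SchemaName>dbo</SchemaName>
        <QueueName>MySSBEAQueue</QueueName>
      </OnNotification>
      <LaunchInfo>
        <ImagePath>D:\Apps\MyMessageApplication.exe</ImagePath>
        <CmdLineArgs></CmdLineArgs>
        <WorkDir>D:\Apps</WorkDir>
      </LaunchInfo>
      <Concurrency min="1" max="1" />
    </ApplicationService>
  </ApplicationServiceList>
</Activator>
```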

In my case, there were some issues with launching the application from the External Activator which were resolved after setting some policies and UAC controls.

Hopefully this helps!!

SQL Server Memory–Rise of the Clerks – Part 2


Previous posts in this series.

SQL Server Memory – Underground

SQL Server Memory – The Evolution

SQL Server Memory – Rise of the Clerks – Part 1

In the last post, I talked about the memory architecture in SQL Server 2005. In this post, we will talk more about the clerks.

As mentioned in my previous post, memory clerks are consumers of memory. They allocate memory from the memory nodes using one of the allocators exposed by the memory node. In general there are 4 different types of clerks in SQL Server: generic clerks, cache stores, object stores, and user stores.

Before we talk about these stores and the clerks, it's important to understand the difference between a cache and a pool in SQL Server. A cache is a mechanism for caching different types of data, where each entry is typically controlled by a cost and has a state value associated with it. Caches typically implement lifetime control on their entries, usually through some sort of LRU policy, and depending on the type of data cached, entries can be shared between multiple users. A pool, on the other hand, is a mechanism for caching homogeneous data (entries all of the same type). The entries in a pool have no cost or state associated with them, and no lifetime control is implemented on the pool.

The Buffer Pool, despite its name, is NOT a pool but a cache, as we will see later.

Cache Store/User Store: These are the typical caches in SQL Server. The cache store provides support for hash tables, while the user store provides caching for user-created objects. Caches implement two types of control on their entries:

  1. Lifetime control: As the name suggests, it manages the lifetime of the entries.
  2. Visibility control: As the name suggests, it controls whether an entry in the cache is visible to lookups or not.

In both cases, SQLOS's caching framework provides visibility control, and it also provides lifetime control (except in the case of the user store). The lifetime of an entry is managed using the reference count in the clock entry info; once the reference count becomes zero, the entry is removed from the cache. Visibility is maintained by the pin count in the clock entry info: for an entry to be visible, its pin count needs to be greater than zero (along with the entry being non-dirty and not marked for single usage).

A cache store can support multiple types of hash tables; this design is very useful for lookups. For example, the procedure cache has hash tables for both names and IDs.

SQL Server implements lifetime/visibility control using the clock algorithm, which maintains two clock hands:

  • External Hand: Moved by the resource monitor when there is memory pressure.
  • Internal Hand: Used to control the size of a cache relative to other caches in SQL Server. In other words, this hand puts a cap on the memory usage of a single cache.

So what do the clock hands really do? Every time a clock hand sweeps through the cache, it divides the cost of each entry by 2. If an entry is not in use and its cost is zero, the clock hand first marks the entry invisible and then removes it from the cache.
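The sweep can be rendered as a toy Python loop (the entries, costs, and pin counts are invented for illustration):

```python
# Toy clock sweep: each pass halves an entry's cost; unused entries whose
# cost has reached zero are made invisible and then removed from the cache.
cache = {
    "plan_A": {"cost": 4, "pin_count": 0, "visible": True},
    "plan_B": {"cost": 1, "pin_count": 0, "visible": True},
    "plan_C": {"cost": 8, "pin_count": 2, "visible": True},  # in use (pinned)
}

def clock_sweep(cache):
    for name in list(cache):
        entry = cache[name]
        entry["cost"] //= 2                      # divide the cost by 2
        if entry["pin_count"] == 0 and entry["cost"] == 0:
            entry["visible"] = False             # hide from lookups...
            del cache[name]                      # ...then remove the entry

clock_sweep(cache)      # plan_B's cost hits 0 -> evicted
print(sorted(cache))    # -> ['plan_A', 'plan_C']
```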

To read more on clock hands, refer to this post by Slava.

Object Store: The object store is a simple pool in SQL Server. It does not implement any costing for the entries; it controls its size using a preconfigured maximum cap. In addition, on every memory-pressure notification from the resource monitor, it removes a preconfigured number of entries.
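A toy Python sketch of that behaviour (the cap and trim counts are invented):

```python
# Toy object store: a pool with a fixed cap and no per-entry costing;
# on memory pressure it simply discards a preconfigured number of entries.
class ObjectStore:
    MAX_ENTRIES = 4          # preconfigured cap (made-up number)
    PRESSURE_TRIM = 2        # entries dropped per pressure notification

    def __init__(self):
        self.pool = []

    def put(self, obj):
        if len(self.pool) < self.MAX_ENTRIES:   # size is capped, not costed
            self.pool.append(obj)

    def get(self):
        return self.pool.pop() if self.pool else None

    def on_memory_pressure(self):
        del self.pool[:self.PRESSURE_TRIM]      # no LRU, no costs: just trim

store = ObjectStore()
for i in range(6):
    store.put(i)             # entries 4 and 5 are rejected by the cap
store.on_memory_pressure()   # drops two entries
print(store.pool)            # -> [2, 3]
```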

Generic Clerks: Provides mechanisms to allocate memory from the memory node using any of the allocators as exposed by the memory node.

In SQL Server 2005, Buffer Pool allocation is done in a similar way as in SQL 2000. We start off by checking how much RAM is installed on the machine, based on which we calculate the Buf array (the array used to monitor the buffers in the buffer pool). Once the Buf array is calculated, we calculate the size of the BPool and allocate it. As with SQL 2000, the max server memory setting only controls the amount of memory used by the buffer pool, not the entire SQL Server process. The buffer pool commits memory on demand, and depending on the requirement it calculates its Target Memory, the amount of memory it thinks it should commit before the server goes into memory pressure.

Unlike SQL 2000, the Performance Monitor counter SQL Memory Manager: Total Server Memory does not indicate the total SQL Server memory, but only the BPool usage.

The Buffer Pool is a cache for data pages. Its primary purpose is to cache SQL Server database pages, but in addition other memory consumers (clerks) can borrow memory from the buffer pool. This memory is always in 8KB chunks and is not always contiguous. In other words, the buffer pool can be used as an underlying memory manager for other SQL Server memory clerks, as long as they allocate memory in 8KB sizes.

In SQL 2005, during startup, SQLOS configures the Buffer Pool to use the single-page allocator, which allows the buffer pool to provide for all single-page allocations in SQL Server. In other words, all single-page allocations come from the BPool.

To sum it up: during startup, SQLOS allocates the Buffer Pool using the VM/AWE allocators (and not the page allocators, so in theory the BPool is nothing but a large chunk of reserved memory) and configures it to use the single-page allocator. Once this is done, any memory clerk that needs memory on an 8KB boundary allocates that memory from the BPool (stolen pages). If a clerk requires memory in larger buffers (not on an 8KB boundary), it uses the multi-page allocator or the virtual allocators, and these allocations come from outside the buffer pool, an area fondly called Mem-To-Leave.
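A toy Python model of this split (all names and sizes are illustrative, not SQL Server internals):

```python
# Toy model of the SQL 2005 layout: the buffer pool owns the single-page
# (8KB) allocator, so single-page requests are "stolen" from the BPool,
# while larger requests come from outside it (Mem-To-Leave).
PAGE = 8 * 1024

class BufferPool:
    def __init__(self, pages):
        self.free_pages = pages
        self.stolen = 0           # pages lent out to other clerks

    def allocate_page(self):      # the single-page allocator
        self.free_pages -= 1
        self.stolen += 1
        return PAGE

def clerk_alloc(bpool, nbytes, mem_to_leave):
    """Route a clerk's request to the BPool or to Mem-To-Leave."""
    if nbytes <= PAGE:
        return bpool.allocate_page(), "bpool (stolen page)"
    mem_to_leave.append(nbytes)   # multi-page / virtual allocators
    return nbytes, "mem-to-leave"

bpool, mtl = BufferPool(pages=1000), []
print(clerk_alloc(bpool, 4096, mtl))       # -> (8192, 'bpool (stolen page)')
print(clerk_alloc(bpool, 64 * 1024, mtl))  # -> (65536, 'mem-to-leave')
```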

In the next post in this series, I will talk about the NUMA support in SQL 2005 and how it affects SQL memory.

SQL Server Memory–Rise of the Clerks – Part 1

Previous Posts in the Series

SQL Server Memory – Underground

SQL Server Memory – The Evolution

SQL Server 2005 introduced many changes to the memory architecture. The majority of these architectural changes continued until SQL Server 2008 R2. In this post, I will try to give a high-level overview of the architectural changes introduced in SQL 2005.

The New Architecture

The new memory architecture consists of 4 major components

  1. Memory Nodes
  2. Memory Clerks
  3. Memory objects
  4. Memory Broker

Slava Oks has a blog post explaining these components in detail. Most of what follows is an attempt at simplifying what Slava mentions in his blog.

Memory Node: Memory nodes are internal SQLOS objects. The number of memory nodes depends on the NUMA configuration of the server. A memory node can be thought of as a large pool of memory from which different components (clerks) can allocate memory. Memory nodes expose a set of allocators which are used by all memory clerks to allocate the desired amount of memory from the node.

  1. Page Allocators: There are 4 primary types of page allocators exposed by the memory node. These are used to allocate sets of 8KB pages. Remember that SQL Server's page granularity is 8KB, and any memory allocated by these page allocators is always a multiple of 8KB.
    • Single Page Allocator: As the name suggests, allocates one 8KB page at a time.
    • Multi Page Allocator: Allocates multiple contiguous 8KB pages.
    • Large Page Allocator: Used to allocate large pages. This feature is only present in IA64 or x64 editions of SQL Server running on servers with more than 8GB of memory, and trace flag 834 has to be enabled for SQL Server to use large pages. On an x64 server the large page granularity is 2MB, while on IA64 it is 16MB.
    • Reserved Page Allocator: A special-purpose allocator which allocates a set of pages for emergencies; typically the "fail safe" memory for SQL Server.
  2. Virtual Memory Allocator: Uses the Windows VirtualAlloc APIs when SQL Server needs to allocate contiguous memory that does not necessarily fall on the 8KB page boundary.
  3. Shared Memory Allocator: Uses the Windows file mapping APIs and provides LPC (shared memory) capabilities.
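The different granularities can be sketched in Python (illustrative only; request and return values are byte counts):

```python
import math

# Toy allocators: the page allocators always hand out multiples of 8KB,
# while the virtual allocator is not tied to the 8KB page granularity.
PAGE = 8 * 1024

def single_page_alloc():
    return PAGE                              # exactly one 8KB page

def multi_page_alloc(nbytes):
    return math.ceil(nbytes / PAGE) * PAGE   # contiguous 8KB pages

def virtual_alloc(nbytes):
    return nbytes                            # no 8KB rounding in this model

print(single_page_alloc())      # -> 8192
print(multi_page_alloc(20000))  # -> 24576 (three 8KB pages)
print(virtual_alloc(20000))     # -> 20000
```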

DBCC MEMORYSTATUS gives some information about the memory nodes, but the nodes it reports are NOT exactly memory nodes; they are CPU nodes.


Locked Pages allocated would only show up when SQL Server has been configured to use Lock Pages in memory.

Memory Clerks: Clerks are the consumers of memory in SQL Server. In general there are 4 kinds of clerks: generic clerks, cache stores, object stores, and user stores. The clerks make use of the memory node allocators to allocate memory. SQL Server has a vast number of predefined clerks, each corresponding to the type of allocation which needs to be done. SQL Server exposes the clerk information through the "sys.dm_os_memory_clerks" DMV; this information is also exposed through DBCC MEMORYSTATUS. In the MEMORYSTATUS output one can see clerks being listed multiple times; this is because the clerks are created per NUMA node. So if we have 4 NUMA nodes on the machine, we could see a clerk reported 4 times.

Memory Objects: Memory objects are the memory allocations made through the clerks. There are 3 kinds of memory objects which are supported by SQL. Memory objects can be viewed using the “sys.dm_os_memory_objects” DMV.

  1. Variable Memory Object: Supports variable size memory allocations.
  2. Mark/Shrink Objects: These allocations are typically done in two stages. During the first stage the allocation grows in size (more and more memory is added as part of the allocation), and during the second stage the memory is de-allocated. As Slava points out, these types of allocations are very useful in scenarios like compilation or execution.
  3. Fixed Size Memory objects: Supports memory allocations which are fixed in size.

Memory Broker: The memory broker can be thought of as the ring master in a circus. Its purpose is to provide a centralized mechanism to distribute memory, or control the allocations made by each component in SQL Server. The memory broker monitors the demand and consumption of memory by each component, and based on the information collected it calculates an optimal value of memory for each of these components. This information is then broadcast to each of the components, which grow or shrink their usage as required. The memory broker is exposed through the "sys.dm_os_ring_buffers" DMV, where the ring_buffer_type is "RING_BUFFER_MEMORY_BROKER".
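A toy sketch of the broker's job in Python (the components and numbers are made up, and the real broker's policy is far more sophisticated than this proportional share):

```python
# Toy memory broker: watch each component's demand and broadcast a
# target that shares a fixed budget in proportion to demand.
def broker_targets(total_memory, demands):
    total_demand = sum(demands.values())
    return {comp: total_memory * d // total_demand
            for comp, d in demands.items()}

# Made-up demands (in MB) from three memory consumers:
demands = {"buffer_pool": 600, "plan_cache": 300, "query_exec": 100}
targets = broker_targets(800, demands)   # only 800 MB to distribute
print(targets)  # -> {'buffer_pool': 480, 'plan_cache': 240, 'query_exec': 80}
```

Each component would then grow or shrink toward its broadcast target, which is the behaviour the paragraph above describes.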

To tie these pieces together:

In order for the clerks to allocate memory, they have to create a memory object using one of the allocators exposed by the memory node. These allocations (memory objects) could be of any of the types mentioned above. The entire allocation and de-allocation is monitored by the memory broker, which directs the clerks to grow or shrink their memory usage as required.


In the next post, I will talk about the memory clerks in more details.