SQLUNINTERRUPTED

I am just a medium, SQL Server the Goal


Introduction to Stretch Databases – Part Two

In my earlier post I talked about enabling Stretch Databases in SQL Server 2016 using both the UI and T-SQL, and how Azure SQL Databases are used as the remote data storage. In this post, we will look at the implementation details of enabling the stretch option on a table, and the background activity that goes on in SQL Server and the Azure SQL Database as a result of enabling the option.

Enabling the stretch option for a table is very simple, and can be done by right-clicking on the table and selecting the Stretch option.

[Screenshot: the Stretch option in the table's right-click menu]

The T-SQL syntax is:

Create Table Test_RemoteArchive
(
a int primary key,
b varchar(10),
c datetime,
d float,
e char(2)
)

Alter table Test_RemoteArchive ENABLE REMOTE_DATA_ARCHIVE WITH ( MIGRATION_STATE = ON )
Alter table Test_RemoteArchive ENABLE REMOTE_DATA_ARCHIVE WITH ( MIGRATION_STATE = OFF )

The MIGRATION_STATE option indicates whether data in the table can be migrated to the remote storage. If the option is set to OFF, no new data is sent to the remote table (data that has already been migrated continues to exist on the remote table).

A few things to keep in mind when stretching a table:

  • If the base table has a primary key, the constraint is not enforced on the remote table.
  • A new column (batchID_<objectid>) is added to the table, where <objectid> is the object id of the table being stretched. I will talk about the batchID column in detail later.
  • A non-unique, non-clustered index is created on this new column.
  • A new SP (sp_StretchMigration_<objectid>) is created for each table enabled for stretch. These auto-created helper objects can be spotted with the catalog query after this list.
  • A trigger (trigger_RemoteDataArchive__<objectid>) is created on the table. The text for this trigger is encrypted.
  • If a row has been identified for migration (which is all rows as of now), it cannot be updated or deleted in the local table, although it can be modified in the remote table on the Azure SQL Database.
  • All DDL operations on the table, except for Alter Table .. ENABLE REMOTE_DATA_ARCHIVE, are prohibited.
  • There is no option to disable stretch; the only way is to create a new table and migrate the data to it.
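A minimal sketch of that catalog query, assuming the helper objects follow the name patterns shown above (the exact names on your instance will carry your table's object id):

-- List the stretch helper objects created in the current database
SELECT name, type_desc, create_date
FROM sys.objects
WHERE name LIKE 'sp_StretchMigration%'
   OR name LIKE 'trigger_RemoteDataArchive%'
ORDER BY create_date DESC;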

Three new columns have been added to the sys.tables catalog view, which indicate whether a table is stretched and, if so, its migration state. Further details can be found in the sys.tables documentation. A sample query against these columns follows the list.

  • is_remote_data_archive_enabled
  • remote_data_archive_migration_state
  • remote_data_archive_migration_state_desc
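A quick way to check the stretch status of all tables in a database is to query these columns directly. A minimal sketch, assuming the CTP column names listed above:

-- Check which tables are stretch-enabled and their migration state
SELECT name,
       is_remote_data_archive_enabled,
       remote_data_archive_migration_state_desc
FROM sys.tables
ORDER BY is_remote_data_archive_enabled DESC;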

When a row is inserted into the table, it is first inserted into the local table and, after some time, archived to the remote table. When a SELECT is run against the table, SQL Server queries both the remote and the local storage to get the rows which meet the predicate condition.

[Screenshot: execution plan showing the local and remote results being concatenated]

As can be seen from the plan, SQL Server concatenates the results from the local storage and the remote storage. The actual query which gets executed on the remote storage looks something like this:

EXECUTE sp_prepexec, Int <noname>='0', NText <noname>='@P1 bigint',
NText <noname>='SELECT
                    "Tbl1004"."a" "Col1014",
                    "Tbl1004"."b" "Col1015",
                    "Tbl1004"."c" "Col1016",
                    "Tbl1004"."d" "Col1017",
                    "Tbl1004"."e" "Col1018"
                FROM "RDAStretchDB_Test9A52425E-7FC9-45A7-9731-FDE7AFAC8512"."dbo"."dbo_Test_RemoteArchive_1509580416_BF917E35-F421-490F-AC92-8F096286696E" "Tbl1004"
                WHERE "Tbl1004"."a"%(2)=(0) AND "Tbl1004"."batchID--1509580416"<=@P1',
BigInt <noname>='4'

Notice the additional predicate on batchID, which ensures that in-transit records are not returned. The value of 4 is the max batchID value in my Azure table. The plan for the query on the remote server is included below.

[Screenshot: execution plan for the query on the remote server]

Note: Given that the query is getting data from remote storage on Azure, query performance is greatly affected by the network and the amount of data being retrieved from the remote server. But since the objective of Stretch Database is to remote-archive only less frequently used data, this performance impact may be very small when compared to the overall cost benefits.

SQL Server uses a max batch size of 10,000 rows while archiving data to the remote storage. For every batch which is sent (via bulk insert) to the remote storage, a batch ID is associated with the records. For example, if a table being stretched has 1 million records, SQL Server would use 100 batches (each consisting of 10,000 rows). On the remote table, the first 10,000 rows would have a batchID column value of 1, the next 10,000 rows a value of 2, and so on. I enabled stretch on the FactInternetSales table (AdventureWorks2014 sample DB) and the partial results for the stretch are shown below.

batchID--1237579447    CountPerBatch
1                      10000
2                      10000
3                      10000
4                      10000
5                      10000
6                      10000
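Counts like the ones above can be obtained by grouping on the batchID column of the remote Azure table. A minimal sketch; the remote table name below is a placeholder, since the real name is generated with the object id and a GUID suffix, as seen earlier:

-- Run against the remote Azure SQL Database that holds the stretched data
SELECT [batchID--1237579447] AS batchID, COUNT(*) AS CountPerBatch
FROM dbo.[dbo_FactInternetSales_1237579447_REMOTE]   -- placeholder for the generated remote table name
GROUP BY [batchID--1237579447]
ORDER BY batchID;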

The migration status for any stretch-enabled table can be monitored using the DMV sys.dm_db_rda_migration_status. The start_time_utc and end_time_utc columns are recorded in UTC. Depending on your network traffic, each batch can take some time to migrate.

Select
    table_id,
    SwitchOffset(cast(start_time_utc as datetimeoffset), '+05:30') as LocalStartTime,
    SwitchOffset(cast(end_time_utc as datetimeoffset), '+05:30') as LocalEndTime,
    DateDiff(ss, start_time_utc, end_time_utc) As Duration,
    migrated_rows,
    Error_number,
    Error_state,
    error_severity
from sys.dm_db_rda_migration_status
where
    table_id = object_id('FactInternetSales')
Order by
    SwitchOffset(cast(start_time_utc as datetimeoffset), '+05:30') desc

On my test environment, it takes anywhere between 20-25 seconds for one batch to be migrated to the remote table.

table_id      LocalStartTime                        LocalEndTime                          Duration    migrated_rows
1237579447    2015-06-24 10:15:46.1530000 +05:30    2015-06-24 10:16:09.2770000 +05:30    23          10000
1237579447    2015-06-24 10:15:21.6870000 +05:30    2015-06-24 10:15:46.1530000 +05:30    25          10000
1237579447    2015-06-24 10:14:58.2370000 +05:30    2015-06-24 10:15:21.6870000 +05:30    23          10000
1237579447    2015-06-24 10:14:34.4700000 +05:30    2015-06-24 10:14:58.2370000 +05:30    24          10000
1237579447    2015-06-24 10:14:10.7830000 +05:30    2015-06-24 10:14:34.4700000 +05:30    24          10000
1237579447    2015-06-24 10:13:47.6070000 +05:30    2015-06-24 10:14:10.7830000 +05:30    23          10000
1237579447    2015-06-24 10:13:23.8870000 +05:30    2015-06-24 10:13:47.6070000 +05:30    24          10000
1237579447    2015-06-24 10:13:00.2000000 +05:30    2015-06-24 10:13:23.8870000 +05:30    23          10000
1237579447    2015-06-24 10:12:36.8100000 +05:30    2015-06-24 10:13:00.2000000 +05:30    24          10000

The familiar sp_spaceused SP has a new parameter, @mode, which indicates whether the information is to be returned for the local table, the remote table, or both. The default is 'ALL', which returns information about both local and remote storage.

Sp_spaceused 'FactInternetSales', 'True', 'ALL'
GO
Sp_spaceused 'FactInternetSales', 'True', 'Local_Only'
GO
Sp_spaceused 'FactInternetSales', 'True', 'Remote_Only'
GO

[Screenshot: sp_spaceused output for the ALL, Local_Only and Remote_Only modes]

Given that this is just the first release of the feature, there are a few limitations and restrictions. A comprehensive list can be found in the MSDN documentation.

In my next post, I will talk about a few considerations and things to keep in mind when using stretch databases.

Introduction to Stretch Databases with SQL Server 2016

It's been a long time since I have written a blog, and what better way to start again than by writing about one of the most interesting features being introduced in SQL Server. In a series of blogs over the next few weeks, I will try to get into the details of how Stretch Databases are implemented and some of the inner workings of the feature to watch out for.

Before we delve into the feature, it's important to understand that SQL Server 2016 is still in the CTP phase and some things can change once the product RTMs. I will blog about any changes in another post when that happens.

OK, so without wasting any more time: Stretch Databases, as the name suggests, stretch your databases/tables to remote storage, while masking the implementation details from the end user. You continue to access the table in the same way as before, while SQL Server internally traverses the local and the remote storage to get the requested data set. In the current release, the stretch option moves the entire table to the remote storage, which is a V12 Azure SQL Database (Standard S3 tier).

Use cases for Stretch Databases

Stretch Databases are useful in scenarios where there is a ton of transactional data which needs to be retained in your environment for historical querying or for regulatory reasons, but only a small subset of that data (mostly the latest transactions) is queried frequently. Even with modern archiving, manageability and compression techniques (like columnstore indexes, partitioning, or combinations of tables/views), the storage cost or the man hours required to update/change your application can be prohibitively high when compared to the Azure storage utilized by the Azure SQL Database.

Implementing Stretch Databases

Implementing Stretch Databases is really simple. In order to enable a database for stretch, the 'remote data archive' configuration option needs to be set. This can be done using the sp_configure command:

sp_configure 'remote data archive', 1
Reconfigure

Next, right-click the database on which stretch needs to be enabled and go to Tasks -> 'Enable Database for Stretch'. This launches the wizard for stretch configuration. Sign in with your Azure credentials.

[Screenshots: Enable Database for Stretch wizard – Azure sign-in and subscription selection]

The window lets you choose the subscription in case there is more than one. The next screen allows you to select the stretch settings (essentially the location of the Azure SQL Database, the admin login for the WASD server, and the firewall exception rule for your current SQL Server IP).

[Screenshot: stretch wizard settings page]

Once you have made sure the settings are correct and click Finish, the following steps are executed on the SQL Server and the Azure SQL Database.

  1. Provision a New SQL Azure Database Server (fixed naming convention)
  2. Configure the firewall exceptions for the WASD server
  3. Create a Credential on the current SQL Environment for the WASD
  4. Create a linked Server to the Azure SQL Database Server
  5. Create the SQL Azure Database.

[Screenshot: wizard progress for the steps above]

Stretch DB creation creates a log in the “\Users\<current_user>\AppData\Local\SQL Server\Stretch Database to SQL Azure” folder, which can be used for troubleshooting purposes.

Once the option is enabled for the database, the following entries are logged in the SQL Error Log.

2015-06-20 23:15:44.89 spid51      Setting database option remote_data_archive to ON for database 'StretchDB_Test'.
2015-06-20 23:15:49.88 spid51      Setting database option remote_data_archive to ON for database 'StretchDB_Test'.

The wizard does not allow you to choose an existing WASD server, but if you enable stretch through T-SQL, you can use an existing WASD server.

/* Enabling Stretch Database Using T-SQL */

/* Step 1 -> Enable firewall exceptions on the Azure SQL DB */
-- Can be performed through the Azure Portal, or via the sp_set_firewall_rule extended SP on the master DB of the WASD server

/* Step 2 -> Create the credential for the WASD server (the linked server will use this credential) */
CREATE CREDENTIAL [tnkll47icl.database.windows.net]
WITH IDENTITY = '<ServerAdmin>', SECRET = '<Password>'

/* Step 3 -> Enable the Stretch DB option */
ALTER DATABASE [StretchDB_Test] SET REMOTE_DATA_ARCHIVE = ON
(SERVER = N'<SQL Azure Database Server>')
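For step 1, firewall rules can also be created in T-SQL with the sp_set_firewall_rule extended stored procedure, run against the master database of the Azure SQL Database server. A minimal sketch; the rule name and IP address below are placeholders:

-- Run in the master database of the Azure SQL Database server
EXEC sp_set_firewall_rule
     @name = N'StretchClientRule',        -- placeholder rule name
     @start_ip_address = '131.107.0.1',   -- replace with your SQL Server's public IP
     @end_ip_address = '131.107.0.1';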

 

In the next post, I will talk about the implementation details of stretching tables within a database and the system SPs/functions/DMVs that are part of the Stretch DB implementation.

Optimizing your backup – Tips and Tricks

Last week at TechCon 2013, sponsored by the SQLBangalore User Group and other user groups in Bangalore, I did a talk on how to optimize your backups and restores. I focused on the three main aspects of backup/restore and talked about some of the things which can be done to optimize them:

  1. Read from Disk
  2. Store in Memory
  3. Write to Disk/Tape

For the demos, I used the 3 databases described below.

Database 1

  • Size – 12 GB
  • DB Name – AdventureWorks2012SingleFile
  • Number of Data Files: 1
  • Disk Allocation Size: 4KB

Database 2

  • Size – 12 GB
  • DB Name – AdventureWorks2012MultiFile
  • Number of Data Files: 4 (spread on 4 different physical disk)
  • Disk Allocation Size: 64KB

Database 3

  • Size – 4 GB
  • DB Name – AnotherSingleFileDatabase
  • Number of Data Files: 1
  • Disk Allocation Size: 64KB

The slide deck for the presentation is included below.

Optimizing Reads

To optimize reads, I focused mainly on the following

  1. Reading from a single MDF file vs. reading from multiple data files
  2. Using a larger disk allocation unit size. I will write another post on the effect of disk allocation unit size on SQL backups.

All backups were performed to the NUL device; the idea was to test the read performance. I saw the following backup performance for the databases mentioned above.

backup database [AdventureWorks2012SingleFile]
to disk = 'NUL' WITH COPY_ONLY
--BACKUP DATABASE successfully processed 1521753 pages in 26.510 seconds (448.460 MB/sec).

backup database [AnotherSingleFileDatabase]
to disk = 'NUL' WITH COPY_ONLY
--BACKUP DATABASE successfully processed 569482 pages in 9.202 seconds (483.489 MB/sec).

backup database [AdventureWorks2012MultiFile]
to disk = 'NUL' WITH COPY_ONLY
--BACKUP DATABASE successfully processed 1599457 pages in 25.244 seconds (494.998 MB/sec).

I got better read performance with the larger disk allocation unit size and with multiple data files.

Optimizing Writes

To optimize writes, I focused mainly on the following

  1. Writing to a single backup file vs. writing to multiple backup files
  2. Using Compression
  3. Changing the MaxTransferSize option for the backups.

All backups were performed to a drive with a 4K allocation unit size, except for the multi-file backup, which was performed to disks with a 64K allocation unit size. The following backup performance was observed.

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
Go
--BACKUP DATABASE successfully processed 569493 pages in 52.510 seconds (84.729 MB/sec).

-- Delete the file to preserve space
xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'

-- Backup taken to multiple files on different drives with 64K disk allocation units
backup database [AnotherSingleFileDatabase]
to
disk = 'F:\AnotherSingleFileDatabase_BackupFile1.bak',
disk = 'G:\AnotherSingleFileDatabase_BackupFile1.bak',
disk = 'H:\AnotherSingleFileDatabase_BackupFile1.bak',
disk = 'E:\AnotherSingleFileDatabase_BackupFile1.bak'
Go
--BACKUP DATABASE successfully processed 569490 pages in 33.600 seconds (132.414 MB/sec).

--- Delete all the files
xp_cmdshell 'del E:\AnotherSingleFileDatabase_BackupFile1.bak'
go
xp_cmdshell 'del H:\AnotherSingleFileDatabase_BackupFile1.bak'
go
xp_cmdshell 'del G:\AnotherSingleFileDatabase_BackupFile1.bak'
go
xp_cmdshell 'del F:\AnotherSingleFileDatabase_BackupFile1.bak'
go

-- Backup WITH COMPRESSION
backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION
Go
---BACKUP DATABASE successfully processed 569491 pages in 25.721 seconds (172.977 MB/sec).
xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go

-- MAXTRANSFERSIZE OPTION
DBCC TRACEON(3605,3213,-1)
-- Writes backup configuration related information to the error log.

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION
go
xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go
sp_readerrorlog
go

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152
Go
-- BACKUP DATABASE successfully processed 569490 pages in 21.920 seconds (202.971 MB/sec).
xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go

We got better write performance when using a 64K allocation unit disk and when using Multiple Files. Further performance improvement was observed with Compression and by increasing the MaxTransferSize for the backups.

Optimizing Memory

To optimize memory usage, I focused mainly on changing the BufferCount option for the backups. This helps create more buffers in memory for the backup.

Total Memory used = BufferCount * MaxTransferSize
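For example, with the values used in the demo below (BUFFERCOUNT = 3000 and MAXTRANSFERSIZE = 2097152 bytes, i.e. 2 MB), the backup would attempt to allocate roughly 3000 x 2 MB ≈ 5.86 GB of buffer memory, which is what pushes the default resource pool into the insufficient-memory error shown below.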

While I did not have a demo for the performance improvement from increasing the number of buffers, I had a demo on the side effect of setting the BufferCount/MaxTransferSize too high.

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152, BUFFERCOUNT = 3000
Go
-- Error Message
--There is insufficient system memory in resource pool 'default' to run this query.

If we set the BufferCount and the MaxTransferSize too high, we run into memory issues on the server.

Hopefully this information helps.

TECHCON September 21 2013 – A Day to Remember

September 21st, 2013 was a remarkable day for the SQLBangalore UG, BITPro, PSBUG (PowerShell Bangalore User Group) and BDotNet UG, when all four groups came together to host one of the most extensive technical conferences ever held in Bangalore.

With over 250 participants and 20 sessions conducted in parallel across three conference rooms at the Microsoft facility in Bangalore, this was easily the biggest technical conference brought to you by the community. I had the privilege of being part of the SQL Server track, both as the host and as a presenter, and it was a pleasure to see close to 130 participants coming in as early as 8:30 AM on a Saturday, which speaks volumes about their passion and desire to learn.

The day started on a very healthy note with Ryan Fernando (Founder: Qua Nutrition L|B) speaking on "Techies Guide to Better Nutrition". Ryan's talk was a real eye-opener, and with my wife accompanying me, it opened up a Pandora's box. I don't think I need to mention what kind of treat I am in for later when we reach home.

The second session was on Big Data, the hottest commodity in the database market. Amarpreet Basan, Technical Lead, Microsoft SQL Server Support team at IGTSC, talked in detail about Big Data and did some really cool demos with HDInsight and Hadoop clusters. The slide deck and demo scripts will be available on the SQL Bangalore UG page on Facebook.

I had the privilege to present next (this being my 4th session for the user group). For this session I went with something we always talk about as a best practice but do not necessarily follow when working with SQL Server: optimizing your backups, where we talked about how to optimize the backup operation for your SQL Server.

Prabhjot Kaur (IGTSC, SQL Support) went next with her session on demystifying the myths of tempdb. All the participants present work with SQL Server day in and day out and would have spent considerable time optimizing tempdb for their SQL Server environments, and the importance of this presentation could be estimated from the fact that there was pin-drop silence in the room for the 50-odd minutes that Prabhjot was talking.

Next in line was the most important part of the day: LUNCH. A highly nutritious lunch was provided by iTIFFIN (no, it's not an Apple product). With 499 calories and absolutely the right ingredients, this was exactly what the doctor ordered for the DBA/developer crowd in the room.

To make lunch even more interesting we had Balmukund Lakhani (B|T|F) with his usual trivia and jokes. All in all, the 45-minute lunch break was both healthy and fun filled, thanks to iTIFFIN and the coordinators for this event.

Post lunch we had three sessions lined up on some of the most sought-after topics in the market right now. We had Ajay Kumar (Technical Lead, IGTSC) and Sunil Kumar B.S (Escalation Engineer, IGTSC) talking about SQL Azure, the Microsoft cloud offering for SQL. Their slide deck and demo content will be made available on the SQL Bangalore User Group page on Facebook.

This was followed by Sumit Sarabhai's (Support Escalation Engineer, IGTSC) session on query tuning and optimization. With a room full of SQL developers and administrators, there was no doubt that this would end up as the most sought-after session. I mean, is there a SQL developer/admin who hasn't had the bad luck of having to tune a query and make it run faster? Sumit's session covered some of the salient points about what to do when tuning a query.

To end the sessions, we had Selva R. (Sr. Support Escalation Engineer, IGTSC) presenting on one of the coolest (if not the coolest) things in the MSBI stack, Power BI. Speaking about Power BI always reminds me of a line in the second Spiderman movie (starring Tobey Maguire): "The power of the sun, in the palm of my hand". Such is the power of Power BI, the master set comprising Power View, Power Pivot, Power Map and Power Query.

We finished the day around 5:30 PM, with Balmukund fittingly pulling the curtains on a day filled with learning and knowledge sharing. After having worked the entire week and then spent another 9 hours on a weekend, I thought I had had enough. I could not have been more wrong; it was heartening to see most of the folks hanging around to interact and share their SQL Server experiences with one another.

To sum it up, it was an overwhelming experience being part of such a fabulous event, and the only parting words I can think of are "May the SQL Server be with you".

SQL Server Record Structures–Part 4

Previous Posts in this series.

  1. SQL Server Record Structures-Part 1
  2. SQL Server Record Structures-Part 2
  3. SQL Server Record Structures-Part 3

In the previous posts I discussed the structure of a data record in SQL Server (without compression or sparse columns), along with a few special cases which affect how the record is stored. In this post, I will talk about record structures when data compression is enabled on a table/index.

Data compression was first introduced in SQL Server 2005, in the form of vardecimal storage, wherein any column with a decimal/numeric data type was converted to vardecimal storage in order to use only the required number of bytes. This was subject to enabling vardecimal storage on the database and the table.

With SQL Server 2008, two new compression techniques were introduced. Vardecimal storage was deprecated in this version of SQL Server, as row compression achieves almost the same results.

Row Compression

When row compression is enabled on a table/index, all columns in the table are converted to use variable-length storage, using only enough bytes to store the data. For example, if a column is declared as char(50) and stores the string "SQLUninterrupted", then instead of using 50 bytes of storage (as would be the case with no compression), SQL Server uses only 16 bytes when row compression is enabled.

Some points to keep in mind when enabling row compression on a table/index:

  1. Compression does not change the max size limits on the record or index keys.
  2. Compression cannot be implemented if the maximum size of the record, plus the compression information overhead, exceeds 8060 bytes.
  3. Compression can be enabled on individual partitions of a partitioned table/index (see the sketch after this list).
  4. Changing the compression setting of a heap table requires a rebuild of all the non-clustered indexes on the table.
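A minimal sketch of point 3; SalesHistory is a hypothetical partitioned table used only for illustration:

-- Rebuild only partition 2 of a partitioned table with ROW compression
ALTER TABLE SalesHistory
REBUILD PARTITION = 2
WITH (DATA_COMPRESSION = ROW);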

Page Compression

When page compression is enabled on a table/index, it takes a three-level approach: first row compression is applied to the page, followed by prefix and then dictionary compression. Details of how prefix and dictionary compression work are available in the Books Online topic "Page Compression Implementation". When page compression is enabled, a special record called the Compression Information (CI) record is added to the page. This CI record stores the prefix/dictionary compression information.

Some points to keep in mind when enabling page compression on a table/index:

  1. Page compression only kicks in when the page is full and a new row is being inserted. If enough space can be saved to accommodate the new record and the compression information on the page, the page is compressed; otherwise it is not.
  2. Enabling page compression on a heap table does not enforce compression on the existing pages until the heap is rebuilt (the catalog query after this list can be used to verify the compression level actually in effect).
  3. New pages being added to the heap are also not compressed until the heap is rebuilt.
  4. Non-leaf pages of an index are not page compressed. This avoids the overhead of uncompressing records during index operations.
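The compression level actually in effect on each partition can be checked through the sys.partitions catalog view. A minimal sketch, using the PageCompressionExample table from later in this post:

-- Show the compression setting in effect for each partition of a table
SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       partition_number,
       data_compression_desc,
       rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('PageCompressionExample');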

Compression Record Structure

When compression is enabled on a table, the data records are stored in a new format commonly called the CD (column descriptor) format. This format is entirely different from the regular record structure used when there is no compression and no sparse columns defined on the table. The record takes the following format.

[Diagram: layout of the CD format record]

Record Header Region

This is a 1-byte section with the following information:

  • Bit 0 – Type of record. This is set to 1 for CD format records.
  • Bit 1 – Indicates whether the record has versioning information.
  • Bits 2-4 – Indicate the kind of information stored in the row. Details of these bits can be found in Microsoft® SQL Server® 2008 Internals by Kalen Delaney, Paul S. Randal, Kimberly L. Tripp, Conor Cunningham and Adam Machanic.
  • Bit 6 – Indicates whether the record contains a long data region.
  • Bit 7 – Not used.

Column Descriptor Region

This region is composed of two parts. The first part is either 1 or 2 bytes and indicates the number of columns in the table. The second part is a 4-bit (per column) descriptor which indicates the length of the column. The 4 bits can be used to represent 16 different values, of which only 13 are used in SQL Server 2008. For details of the descriptors, refer to the SQL Server 2008 Internals book mentioned above.

[Diagram: column descriptor region layout]

Short Data Region

Contains columns which are less than or equal to 8 bytes in length. Since the lengths of the columns are not directly stored in this region, in tables with a lot of columns it can be expensive to access the last column, as this would require processing the lengths of all the previous columns to compute the start and end of the desired column. To mitigate this, the columns are divided into clusters of 30 columns each. A short data cluster array (a byte array) is then created in this region to store the length of each cluster. Since there are only 30 columns in each cluster (with each column being a max of 8 bytes), a single cluster can have a max length of 240 bytes.
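For example, to read the 95th short column the engine only needs to add up the first three entries of the short data cluster array (which cover columns 1 through 90) and then walk the few columns ahead of it inside the fourth cluster, instead of processing the lengths of all 94 preceding columns.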

[Diagram: short data region with the short data cluster array]

Long Data Region

All columns with data length greater than 8 bytes are stored in the long data region. Since the length of these columns is not stored in the CD region (each CD descriptor is only 4 bits, whereas the column length can be anything), the long data region requires a column offset array to correctly locate the column data.

Again, in order to reduce the cost of locating a column value in this region, there is a long data cluster array, which stores the number of columns in each cluster (again, each cluster represents up to 30 columns).

[Diagram: long data region with the offset array and long data cluster array]

Note: This illustration has been taken from Microsoft® SQL Server® 2008 Internals by Kalen Delaney, Paul S. Randal, Kimberly L. Tripp, Conor Cunningham and Adam Machanic.

Special Information Section

Contains optional information such as the forwarding/back pointer or versioning information.

Row Compression Example

Create Table RowCompressionExample
(
Col1 int,
Col2 bigint,
Col3 char(40),
Col4 char(30),
col5 numeric(18,7),
Col6 varchar(300),
col7 datetime,
col8 varchar(400),
col9 char(100),
col10 char(100)
)

-- Enable Row Compression on the table
ALTER TABLE RowCompressionExample REBUILD WITH (DATA_COMPRESSION = ROW)

-- Insert 2 records into the table
insert into RowCompressionExample values
(10, 345678345, 'Sourabh', 'Agarwal', 123345456.3456, 'This is first Long data', getdate(),
'This is a Second long Data', 'This is a third long Data', 'short')

insert into RowCompressionExample values
(76854, 2000, 'Short1', 'LongDataValue1', 3847.34, 'This is first Long data', getdate(),
'This is a Second long Data', 'This is a third long Data', 'LongDataRegion')

-- Use DBCC IND to display the pages used for this table
DBCC IND(10,'RowCompressionExample',-1)

-- Use DBCC PAGE to check the record structure on the page
DBCC TRACEON (3604, 1)
DBCC PAGE(10,1,285,3)

I created a sample table with 10 columns of varying sizes and enabled row compression on it. I then inserted two records into the table. Now let's look at the record structure from the output of the DBCC PAGE command.

[Screenshot: DBCC PAGE output showing the compressed record and the CD array]

As indicated by the DBCC PAGE output, the total record length is only 128 bytes, compared to the 362 bytes that would have been required to store the same record in the uncompressed format.
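As a rough sanity check on the 362-byte figure: in the uncompressed FixedVar format this row would need about 2 status bytes + 2 bytes for the fixed-length size + 299 bytes of fixed-length data (4 + 8 + 40 + 30 + 9 + 8 + 100 + 100 for the int, bigint, char(40), char(30), numeric(18,7), datetime and two char(100) columns) + 2 bytes for the column count + a 2-byte NULL bitmap + 2 bytes for the variable-column count + 4 bytes of variable-column offsets + 49 bytes of varchar data for the two varchar values, which adds up to 362 bytes.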

Next we also have the CD Array displayed in the output. Since there are 10 Columns in the table, there are 10 entries in the CD array.

As can be seen, the columns Col6, Col8 and Col9 have values longer than 8 bytes and hence are marked as long in the CD array. Also notice that since there are only 10 columns, all the columns are part of the same cluster (cluster 0).

Page Compression Example

I used a similar table structure as above to demonstrate page compression. As I mentioned earlier, page compression does not kick in until the page is full and a new record has to be inserted into the table.

I created the following table and enabled page compression on it.

Create Table PageCompressionExample
(
Col1 int,
Col2 bigint,
Col3 char(40),
Col4 char(30),
col5 numeric(18,7),
Col6 varchar(300),
col7 datetime,
col8 varchar(400),
col9 char(100),
col10 char(100)
)

-- Enable Page Compression on the table
ALTER TABLE PageCompressionExample REBUILD WITH (DATA_COMPRESSION = PAGE)

Next I inserted 64 records into the table using the following script

declare @count int = 0
While(@count < 32)
begin
insert into PageCompressionExample values
(10, 345678345, 'Sourabh', 'Agarwal', 123345456.3456, 'This is first Long data', getdate(),
'This is a Second long Data', 'This is a third long Data', 'short')

insert into PageCompressionExample values
(76854, 2000, 'Short1', 'LongDataValue1', 3847.34, 'This is first Long data', getdate(),
'This is a Second long Data', 'This is a third long Data', 'LongDataRegion')

set @count = @Count+1
end

At this point, there are two data pages in the table with row compression only. So why did the Page Compression not take effect?

The reason is that on a heap, page compression does not take effect until the heap table has been rebuilt. So let's rebuild the table.

ALTER TABLE PageCompressionExample REBUILD WITH (DATA_COMPRESSION = PAGE)

Now there is only one page in the table, with page compression enabled.

When a page is page compressed, SQL Server adds a new record at the top of the page (right after the header info) called the Compression Information record. Additionally, SQL Server adds information to the page header to indicate that the page is page compressed: if the value of m_typeFlagBits is 0x80, the page is page compressed.

In the next part of this blog series I will talk about the Compression Information record structure.
