
Optimizing your backup – Tips and Tricks

Last week at TechCon2013, sponsored by the SQLBangalore User Group and other user groups in Bangalore, I did a talk on how to optimize your backups and restores. I focused on the three main aspects of backup/restore and talked about some of the things which can be done to optimize them:

  1. Read from Disk
  2. Store in Memory
  3. Write to Disk/Tape

For the demos, I used the 3 databases described below.

Database 1

  • Size: 12 GB
  • DB Name: AdventureWorks2012SingleFile
  • Number of Data Files: 1
  • Disk Allocation Unit Size: 4 KB

Database 2

  • Size: 12 GB
  • DB Name: AdventureWorks2012MultiFile
  • Number of Data Files: 4 (spread across 4 different physical disks)
  • Disk Allocation Unit Size: 64 KB

Database 3

  • Size: 4 GB
  • DB Name: AnotherSingleFileDatabase
  • Number of Data Files: 1
  • Disk Allocation Unit Size: 64 KB
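
As an aside, if you want to verify a similar setup on your own server, the sketch below (my addition, not part of the talk's demo scripts) lists a database's data files and uses fsutil to report a volume's allocation unit size ("Bytes Per Cluster"); it assumes xp_cmdshell is enabled and the service account has sufficient OS privileges.

-- Sketch (not from the original demos): check the data file layout of a database
-- Lists each file with its physical location and size in MB
SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('AdventureWorks2012MultiFile')
Go
-- fsutil reports "Bytes Per Cluster" (the disk allocation unit size) for a volume;
-- requires xp_cmdshell to be enabled
EXEC xp_cmdshell 'fsutil fsinfo ntfsinfo F:'
Go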

The slide deck for the presentation is included below.

Optimizing Reads

To optimize reads, I focused mainly on the following:

  1. Reading from a single MDF file vs. reading from multiple data files
  2. Using a larger disk allocation unit size (I will be writing another post on the effect of disk allocation unit size on SQL Server backups)

All backups were performed to the NUL device, since the idea was to test only the read performance. I saw the following backup throughput for the databases mentioned above.

backup database [AdventureWorks2012SingleFile]
to disk = 'NUL' WITH COPY_ONLY
--BACKUP DATABASE successfully processed 1521753 pages in 26.510 seconds (448.460 MB/sec).

backup database [AnotherSingleFileDatabase]
to disk = 'NUL' WITH COPY_ONLY
--BACKUP DATABASE successfully processed 569482 pages in 9.202 seconds (483.489 MB/sec).

backup database [AdventureWorks2012MultiFile]
to disk = 'NUL' WITH COPY_ONLY
--BACKUP DATABASE successfully processed 1599457 pages in 25.244 seconds (494.998 MB/sec).

I got better read performance with the larger disk allocation unit size and with multiple data files.

Optimizing Writes

To optimize writes, I focused mainly on the following:

  1. Writing to a single backup file vs. writing to multiple backup files
  2. Using Compression
  3. Changing the MaxTransferSize option for the backups.

All backups were performed to a drive with a 4K allocation unit size, except for the multiple-file backup, which was performed to disks with a 64K allocation unit size. The following backup performance was observed.

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
Go
--BACKUP DATABASE successfully processed 569493 pages in 52.510 seconds (84.729 MB/sec).

-- Delete the file to preserve space
EXEC xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go

-- Backup taken to multiple files on different drives with 64K disk allocation units
backup database [AnotherSingleFileDatabase]
to
disk = 'F:\AnotherSingleFileDatabase_BackupFile1.bak',
disk = 'G:\AnotherSingleFileDatabase_BackupFile1.bak',
disk = 'H:\AnotherSingleFileDatabase_BackupFile1.bak',
disk = 'E:\AnotherSingleFileDatabase_BackupFile1.bak'
Go
--BACKUP DATABASE successfully processed 569490 pages in 33.600 seconds (132.414 MB/sec).

-- Delete all the files
EXEC xp_cmdshell 'del E:\AnotherSingleFileDatabase_BackupFile1.bak'
go
EXEC xp_cmdshell 'del H:\AnotherSingleFileDatabase_BackupFile1.bak'
go
EXEC xp_cmdshell 'del G:\AnotherSingleFileDatabase_BackupFile1.bak'
go
EXEC xp_cmdshell 'del F:\AnotherSingleFileDatabase_BackupFile1.bak'
go

-- Backup with COMPRESSION
backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION
Go
--BACKUP DATABASE successfully processed 569491 pages in 25.721 seconds (172.977 MB/sec).
EXEC xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go

-- MAXTRANSFERSIZE option
DBCC TRACEON(3605, 3213, -1)
-- Writes backup configuration related information to the error log.

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION
go
EXEC xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go
EXEC sp_readerrorlog
go

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152
Go
-- BACKUP DATABASE successfully processed 569490 pages in 21.920 seconds (202.971 MB/sec).
EXEC xp_cmdshell 'del C:\AnotherSingleFileDatabase.bak'
Go

We got better write performance when using a 64K allocation unit disk and when backing up to multiple files. Further improvement was observed with compression and with a larger MaxTransferSize for the backups.
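
Putting the write-side options together, a combined backup using multiple files, compression, and a larger transfer size might look like the sketch below (my addition, not one of the talk's demos; the file names are hypothetical, the drive letters follow the demos above).

-- Combined sketch (not from the original demos): stripe across four drives,
-- with compression and a 2 MB transfer size
backup database [AnotherSingleFileDatabase]
to
disk = 'F:\ASFD_Stripe1.bak',
disk = 'G:\ASFD_Stripe2.bak',
disk = 'H:\ASFD_Stripe3.bak',
disk = 'E:\ASFD_Stripe4.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152
Go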

Optimizing Memory

To optimize memory usage, I focused mainly on changing the BufferCount option for the backups, which increases the number of buffers created in memory for the backup operation.

Total Memory used = BufferCount * MaxTransferSize

While I did not have a demo of the performance improvement from increasing the number of buffers, I did have a demo of the side effect of setting BufferCount/MaxTransferSize too high.
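
To see why, plug the numbers from the demo below into the formula above: BUFFERCOUNT = 3000 with MAXTRANSFERSIZE = 2097152 (2 MB) asks for roughly 3000 × 2 MB ≈ 6 GB of buffer memory, which can easily exhaust the memory available to the default resource pool.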

backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152, BUFFERCOUNT = 3000
Go
-- Error Message
--There is insufficient system memory in resource pool 'default' to run this query.

If we increase the BufferCount and the MaxTransferSize to very high values, we run into memory issues on the server.
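
For contrast, a more moderate setting (again my sketch, not one of the talk's demos) keeps the buffer memory bounded, since 50 buffers × 2 MB is only about 100 MB:

-- Sketch (not from the original demos): bounded buffer memory, 50 x 2 MB = ~100 MB
backup database [AnotherSingleFileDatabase]
to disk = 'C:\AnotherSingleFileDatabase.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152, BUFFERCOUNT = 50
Go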

Hopefully this information helps.


TECHCON September 21 2013 – A Day to Remember

September 21st 2013 was a memorable day for the SQLBangalore UG, BITPro, PSBUG (PowerShell Bangalore User Group) and BDotNet UG, when all four groups came together to host one of the most extensive technical conferences ever held in Bangalore.

With 250+ participants and 20 sessions conducted in parallel across three conference rooms at the Microsoft facility in Bangalore, this was easily the biggest technical conference brought to you by the community. I had the privilege of being part of the SQL Server track, both as host and presenter, and it was a pleasure to see close to 130 participants coming in as early as 8:30 AM on a Saturday, which speaks volumes about their passion and desire to learn.

The day started on a very healthy note with Ryan Fernando (Founder: Qua Nutrition L|B) speaking on “Techies Guide to Better Nutrition”. Ryan’s talk was a real eye-opener, and with my wife accompanying me, it opened up a Pandora’s box. I don’t think I need to mention what kind of treat I am in for later when we reach home.

The second session was on Big Data, the hottest commodity in the database market. Amarpreet Basan, Technical Lead on the Microsoft SQL Server Support team at IGTSC, talked in detail about Big Data and did some really cool demos with HDInsight and Hadoop clusters. The slide decks and demo scripts for this will be available on the SQL Bangalore UG page on Facebook.

I had the privilege of presenting next (this being my 4th session for the user group). For this session I went with something we always talk about in best practices but do not necessarily follow when working with SQL Server: optimizing your backups, where we talked about how to optimize the backup operation for your SQL Server.

Prabhjot Kaur (IGTSC, SQL Support) went next with her session demystifying the myths of Tempdb. All the participants present work with SQL Server day in and day out and would have spent considerable time optimizing Tempdb for their SQL Server environments; the importance of this presentation could be gauged from the fact that there was pin-drop silence in the room for the 50-odd minutes Prabhjot was talking.

Next in line was the most important part of the day: LUNCH. A highly nutritious lunch was provided by iTIFFIN (no, it’s not an Apple product). With 499 calories and absolutely the right ingredients, this was exactly what the doctor ordered for the DBA/developer crowd in the room.

To make lunch even more interesting we had Balmukund Lakhani (B|T|F) with his usual trivia and jokes. All in all, the 45-minute lunch break was both healthy and fun-filled, thanks to iTIFFIN and the coordinators of this event.

Post lunch we had 3 sessions lined up on some of the most sought-after topics in the market right now. We had Ajay Kumar (Technical Lead, IGTSC) and Sunil Kumar B.S. (Escalation Engineer, IGTSC) talking about SQL Azure, the Microsoft cloud offering for SQL. Their session slide deck and demo content will be made available on the SQL Bangalore User Group page on Facebook.

This was followed by Sumit Sarabhai’s (Support Escalation Engineer, IGTSC) session on query tuning and optimization. With a room full of SQL developers and administrators, there was no doubt this would end up as one of the most sought-after sessions. I mean, is there a SQL developer/admin who hasn’t had the bad luck of having to tune a query and make it run faster? Sumit’s session covered some of the salient points about what to do when tuning a query.

To end the sessions, we had Selva R. (Sr. Support Escalation Engineer, IGTSC) presenting on one of the coolest (if not the coolest) things in the MSBI stack: Power BI. Speaking about Power BI always reminds me of a line from the second Spider-Man movie (starring Tobey Maguire): “The power of the sun, in the palm of my hand”. Such is the power of Power BI, the suite comprising Power View, Power Pivot, Power Map and Power Query.

We finished the day around 5:30 PM, with Balmukund fittingly pulling the curtains on a day filled with learning and knowledge sharing. After having worked the entire week and then having spent another 9 hours on a weekend, I thought I had had enough. I could not have been more wrong; it was heartening to see most of the folks hanging around to interact and share their SQL Server experiences with one another.

To sum it up, it was an overwhelming experience being part of such a fabulous event, and the only parting words I could think of are: “May the SQL Server be with you.”