SQLUNINTERRUPTED

I am just a medium, SQL Server the Goal

Monthly Archives: March 2012

Change Data Capture – We don’t see what we don’t know

Recently a customer asked me what really happens in the background when we enable Change Data Capture (CDC) on a database/table, and whether there is a way to monitor all the changes that happen on the DB after enabling CDC. Honestly, I did not know for sure what happens in the background. So I did what I do best: I created a small repro with a small database and a table, and used Profiler to see what really happens in the background.

Before we get into the specifics of the background operations, let’s spend a minute on the CDC architecture.

For CDC, the source for tracking changes is the SQL Server transaction log. As DML operations happen on the database, they are recorded in the transaction log, which serves as the input to the change capture process. The capture process uses the Log Reader (the same one Transactional Replication uses) to read the entries from the transaction log file. The capture process then segregates this information based on the tracked tables and puts the corresponding entries in the change table associated with each tracked table. These changes can then be enumerated using the functions provided by CDC.

The first step in enabling CDC is to enable it on the database. This is done using the system stored procedure “sys.sp_cdc_enable_db”.
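
For example (a minimal sketch; the database name MyDB is just a placeholder):

USE MyDB;
GO
EXEC sys.sp_cdc_enable_db;
GO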

This stored procedure needs to be executed in the context of the database on which CDC is to be enabled, and it does not take any arguments. When you execute it, the following happens:

  1. The is_cdc_enabled bit in the sys.databases catalog view is set.
  2. A new database schema (named cdc) and a new database user (named cdc) are created in the DB.
  3. A DDL trigger tr_MScdc_ddl_event is created in the database.
  4. The following tables are created in the database:
    • cdc.captured_columns — Contains the details of all the columns being tracked by CDC at a database level.
    • cdc.change_tables — Stores information about the cdc enabled tables and their capture instance. One row for each capture instance.
    • cdc.ddl_history — Tracks all DDL operations happening on the DB.
    • cdc.index_columns — Tracks the columns on which the indexes have been defined. This is done for the index specified in the @index_name parameter of sys.sp_cdc_enable_table; if the index name is null, the primary key index of the table is used.
    • cdc.lsn_time_mapping — Maps the LSNs in the cdc.<capture_instance>_CT table to the time the LSNs were generated.
  5. The following CDC-related stored procedures are created in the database:
    • cdc.sp_batchinsert_lsn_time_mapping
    • cdc.sp_ins_dummy_lsn_time_mapping
    • cdc.sp_ins_instance_enabling_lsn_time_mapping
    • cdc.sp_ins_lsn_time_mapping

Once the database has been enabled for CDC, a table can be configured for change capture. To enable a table for tracking, use the stored procedure “sys.sp_cdc_enable_table”.

This stored procedure takes nine parameters (some of them optional), as described below.

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo'
  , @source_name = N'test1'
  , @role_name = N'CDC_Admin'
  , @capture_instance = N'CaptureInst1'
  , @supports_net_changes = 1
  , @index_name = null
  , @captured_column_list = N'id, name, email'
  , @filegroup_name = null
  , @allow_partition_switch = 1
GO

@source_schema and @source_name are the names of the schema and the table to be tracked by CDC.

@role_name is the database role used to allow access to the change data. If the role does not exist in the database, the stored procedure tries to create it. If the user running the SP does not have privileges to create a database role, the execution fails.

@capture_instance names the capture instance used to track changes for a particular table. The capture instance name is used to create a CDC table in the database to track changes specific to the source table. For example, in the above case, a change table called cdc.CaptureInst1_CT is created to track changes specific to the test1 table.

The @supports_net_changes parameter controls the creation of a CDC function which can be used to report the net/overall changes to a table. For example, let’s assume the test1 table has one record in it, and a transaction performs 10 DML operations on that record. The net changes function would report only the final value of the record.
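
As a quick illustration (a sketch assuming the CaptureInst1 capture instance created above), the net changes for an LSN range can be enumerated like this:

DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('CaptureInst1');
SET @to_lsn = sys.fn_cdc_get_max_lsn();

-- 'all' returns a single net row per changed record in the range
SELECT * FROM cdc.fn_cdc_get_net_changes_CaptureInst1(@from_lsn, @to_lsn, N'all');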

@index_name: This parameter provides the name of a unique index on the table, in case there is no primary key. CDC needs a way to identify each record in the table; any unique constraint or primary key constraint on the table can serve this purpose. If an index is specified and there is also a primary key on the table, the index columns take precedence in determining row uniqueness.

@captured_column_list: a comma-separated list of all the columns that need to be tracked by CDC.

@filegroup_name indicates the filegroup to be used for creating the change table associated with the tracked table.

@allow_partition_switch indicates whether ALTER TABLE ... SWITCH PARTITION can be executed on the table. The default value is 1.

When this stored procedure is executed, the following operations happen in the background:

  1. A change table associated with the tracked table (test1 in the example above) is created in the DB. The change table is named cdc.<capture_instance>_CT.
  2. One or two functions (depending on whether @supports_net_changes was set to 0 or 1) are created in the database, with the following names:
    1. cdc.fn_cdc_get_all_changes_<capture_instance_name>
    2. cdc.fn_cdc_get_net_changes_<capture_instance_name>
  3. If this was the first execution of the SP in the DB, two SQL Agent jobs are also created, along with the necessary entries in the msdb database:
    • cdc.ChangeDataCapture_capture
    • cdc.ChangeDataCapture_cleanup
  4. The is_tracked_by_cdc bit is set in the sys.tables catalog view (you can verify it with the query below).
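
For example:

SELECT name, is_tracked_by_cdc FROM sys.tables WHERE name = N'test1';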

At this point, the database/table has been set up for change capture using CDC, and the various tables mentioned above contain the corresponding metadata. For example:

Select * from cdc.captured_columns


select
OBJECT_NAME(object_id) as ChangeTrackingTable,
OBJECT_NAME (source_object_id) as TrackedTable,
capture_instance,
supports_net_changes,
role_name,
index_name,
filegroup_name,
partition_switch
from cdc.change_tables


Next, I inserted a single record in the Test1 table, and then updated the same record. Then I checked for the entries in my Change Table.

select __$start_lsn, __$seqval, __$operation, __$update_mask, ID, name, email
from cdc.CaptureInst1_CT


The first column in the output indicates the LSN (log sequence number) of the commit associated with the transaction. CDC populates the changes in the change table only when the DML transaction commits; while a transaction is open, its changes are not reflected in the change table.
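
Incidentally, the commit time for a given __$start_lsn can be looked up with the built-in mapping function; a small sketch against the change table above:

SELECT __$start_lsn, sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS commit_time
FROM cdc.CaptureInst1_CT;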

If the transaction contains multiple DML statements, they are tracked using __$seqval, which provides a sequential ID for the operations within the transaction.

The next column, __$operation, records the type of operation. For inserts and deletes the mapping is straightforward (1 for delete, 2 for insert), and they show up accordingly in the change table. An update, however, is recorded as a pair of before and after rows: a __$operation value of 3 is the update with the column values before the change, while 4 is the update with the column values after the change.

__$update_mask is a bit mask over the tracked columns of the source table. For example, if 3 columns are being tracked by CDC, the bit mask has 3 bits, one for each column. The bits are arranged in the order of the column ordinals in the table.

For example, assume we have a table test1 with 5 columns (a, b, c, d, e), and we have enabled CDC for the columns (a, b, e).

The bit mask is then created with 3 bits. The rightmost bit represents the first tracked column (column a in our example), while the leftmost bit represents the last one (column e).

Whenever a particular column is modified, the corresponding bit for that column is set to 1. In the output above, the value 0x4 (binary 100) indicates that the third tracked column (email) was modified.
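
Rather than decoding the mask by hand, CDC also provides helper functions to test it; a sketch for the email column of CaptureInst1:

SELECT __$update_mask,
       sys.fn_cdc_is_bit_set(
           sys.fn_cdc_get_column_ordinal('CaptureInst1', 'email'),
           __$update_mask) AS email_changed
FROM cdc.CaptureInst1_CT
WHERE __$operation = 4; -- the after image of update operations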


Replicating a Table with FileStream Data using Merge Replication

With SQL Server 2008, FILESTREAM support was added to SQL Server. FILESTREAM allows large objects to be created as file system objects instead of database objects. With FILESTREAM, the data resides on the file system and can be accessed, under the context of a SQL transaction, using T-SQL or Win32 APIs.

FILESTREAM is compatible with other existing SQL Server features such as Replication and Mirroring. Recently I was working with a customer who was trying to implement Merge Replication on a database using FILESTREAM and was running into the error below during synchronization.

The schema script ‘DocumentStore_2.sch’ could not be propagated to the subscriber. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Default FILESTREAM filegroup is not available in database ‘ReplicatedFileStorage’. (Source: MSSQLServer, Error number: 1969)

We tried to create a test scenario for the above to see why that was happening.

Modeled on the production database, we created a test database with FILESTREAM enabled. In the test database, we created a table with a FILESTREAM column and tried to replicate it using Merge Replication. Since Merge Replication requires a rowguid column on the table, we also added a new uniqueidentifier column with a default constraint of NEWSEQUENTIALID(). We did not use NEWID(), as NEWSEQUENTIALID() performs better.

Included below is the script for the Database and the tables.

USE [master]
GO

/****** Object:  Database [TestFileStream]    Script Date: 03/27/2012 07:00:44 ******/
CREATE DATABASE [TestFileStream] ON PRIMARY
( NAME = N'TestFileStream', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestFileStream.mdf' , SIZE = 4352KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
FILEGROUP [FILESTREAMGROUP1] CONTAINS FILESTREAM DEFAULT
( NAME = N'FSGroup1File', FILENAME = N'C:\Program Files\Microsoft SQL Server\SQLFILESTREAM\FSData' )
LOG ON
( NAME = N'TestFileStream_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestFileStream_log.LDF' , SIZE = 3840KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

USE [TestFileStream]
GO

CREATE TABLE [dbo].[DocumentStore](
    [DocumentID] [int] IDENTITY(1,1) PRIMARY KEY,
    [Document] [varbinary](max) FILESTREAM NULL,
    [DocGUID] [uniqueidentifier] NOT NULL ROWGUIDCOL UNIQUE DEFAULT NEWSEQUENTIALID(),
    [ModifiedDate] [datetime] NULL
) ON [PRIMARY] FILESTREAM_ON [FILESTREAMGROUP1]

Next we created a Merge Publication for the database. We did not use the UI to create the publication, since the UI does not expose any FILESTREAM-specific options; instead, we scripted the publication.

use [TestFileStream]
exec sp_replicationdboption @dbname = N'TestFileStream', @optname = N'merge publish', @value = N'true'
GO
-- Adding the merge publication
use [TestFileStream]
exec sp_addmergepublication @publication = N'FileStreamPublication', @description = N'Merge publication of database ''TestFileStream'' from Publisher ''SQLSCOM''.', @sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'true', @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true', @compress_snapshot = N'false', @ftp_port = 21, @allow_subscription_copy = N'false', @add_to_active_directory = N'false', @dynamic_filters = N'false', @conflict_retention = 14, @keep_partition_changes = N'false', @allow_synctoalternate = N'false', @max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0, @use_partition_groups = null, @publication_compatibility_level = N'100RTM', @replicate_ddl = 1, @allow_subscriber_initiated_snapshot = N'false', @allow_web_synchronization = N'false', @allow_partition_realignment = N'true', @retention_period_unit = N'days', @conflict_logging = N'both', @automatic_reinitialization_policy = 0
GO

exec sp_addpublication_snapshot @publication = N'FileStreamPublication', @frequency_type = 4, @frequency_interval = 14, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 1, @frequency_subday_interval = 5, @active_start_time_of_day = 500, @active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, @job_login = null, @job_password = null, @publisher_security_mode = 1

use [TestFileStream]
exec sp_addmergearticle
    @publication = N'FileStreamPublication',
    @article = N'DocumentStore',
    @source_owner = N'dbo',
    @source_object = N'DocumentStore',
    @type = N'table',
    @description = N'',
    @creation_script = null,
    @pre_creation_cmd = N'drop',
    @schema_option = 0x000000010C034FD1,
    @identityrangemanagementoption = N'auto',
    @pub_identity_range = 10000,
    @identity_range = 1000,
    @threshold = 80,
    @destination_owner = N'dbo',
    @force_reinit_subscription = 1,
    @column_tracking = N'false',
    @subset_filterclause = null,
    @vertical_partition = N'false',
    @verify_resolver_signature = 1,
    @allow_interactive_resolver = N'false',
    @fast_multicol_updateproc = N'true',
    @check_permissions = 0,
    @subscriber_upload_options = 0,
    @delete_tracking = N'true',
    @compensate_for_errors = N'false',
    @stream_blob_columns = N'true',
    @partition_options = 0
GO

In sp_addmergearticle, there are two options that need to be configured for replicating a FILESTREAM object to the subscriber:

@schema_option = 0x000000010C034FD1
@stream_blob_columns = N'true'

For a FILESTREAM object to be replicated to the subscriber without issues, the @schema_option should be set accordingly. There are two specific hex codes that need to be present in the @schema_option.

0x100000000  –> Use this option to replicate the FILESTREAM attribute if it is specified on varbinary(max) columns. Do not specify this option if you are replicating tables to SQL Server 2005 Subscribers. Replicating tables that have FILESTREAM columns to SQL Server 2000 Subscribers is not supported, regardless of how this schema option is set.

0x800000000  –> Set this option to store FILESTREAM data on its own filegroup at the Subscriber. If this option is not set, FILESTREAM data is stored on the default filegroup. Replication does not create filegroups; therefore, if you set this option, you must create the filegroup before you apply the snapshot at the Subscriber.

The second hex value, which needs to be added to the @schema_option, indicates where the FILESTREAM objects will be created at the Subscriber. If it is not specified, SQL Server tries to add the FILESTREAM objects to the default FILESTREAM filegroup (this is different from the default data filegroup, i.e. PRIMARY). If there is no default FILESTREAM filegroup in the subscriber database, replication fails with the messages above. This also means that if we create the database from the “Create Subscription” wizard, we need to make sure we specify a default FILESTREAM filegroup during DB creation.

The second parameter, @stream_blob_columns, is used in cases where the FILESTREAM objects are expected to be more than 2 GB in size. If this parameter is not set to true, replication fails whenever a FILESTREAM object is greater than 2 GB.

In the customer’s scenario, since they were creating a new DB through the “Create Subscription” wizard without specifying a FILESTREAM filegroup, replication was failing with the above message.

The @schema_option used above can be broken down as follows (the values below OR together to 0x000000010C034FD1):

0x100000000
0x8000000
0x4000000
0x20000
0x10000
0x4000
0x800
0x400
0x200
0x100
0x80
0x40
0x10
0x01

Details of the various hex codes can be found in the sp_addmergearticle documentation.
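
As a quick sanity check (my own sketch, not part of the customer scripts), a bitwise AND in T-SQL confirms whether a given option bit is included in a schema_option value:

-- does 0x000000010C034FD1 include the FILESTREAM option 0x100000000?
SELECT CASE
         WHEN CONVERT(bigint, 0x000000010C034FD1) & CONVERT(bigint, 0x0000000100000000) <> 0
         THEN 'included' ELSE 'missing'
       END AS filestream_option;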

Error 15581 – Please create a master key in the database or open the master key in the session before performing this operation

Recently I was working with a client when a unique issue was brought to my notice. According to the client, when trying to restore a TDE-enabled database to another server, they could not perform the restore using the SSMS user interface, though they were able to restore the database using T-SQL.

Now this sounded a bit funny, but then the customer showed it to me. Since I did not have any answers at that time, I decided to try it out later and check whether anything they were doing was wrong.

To begin with, I checked their scripts for enabling Transparent Data Encryption (TDE) on the databases, and also their backup scripts for the master key and certificates.

Sample code matching what the customer provided is included below.

USE master;
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyStrongPassword';
GO

BACKUP MASTER KEY TO FILE = 'D:\MasterKey.dat' ENCRYPTION BY PASSWORD = 'MyStrongPassword';
GO

CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'My DEK Certificate';
GO

BACKUP CERTIFICATE MyServerCert TO FILE = 'D:\MyCert.dat'
    WITH PRIVATE KEY ( FILE = 'D:\PrivKey.dat' ,
    ENCRYPTION BY PASSWORD = 'MyStrongPassword' );
GO

USE AdventureWorks2008R2;
GO

CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128 ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
GO

ALTER DATABASE AdventureWorks2008R2
SET ENCRYPTION ON;
GO

So there is nothing wrong with the encryption scripts. I next checked the script the customer was using for the restore.

RESTORE MASTER KEY FROM FILE = '\\win2k3dc-alh16t\E$\MasterKey.dat'
DECRYPTION BY PASSWORD = 'MyStrongPassword'
ENCRYPTION BY PASSWORD = 'MyStrongPassword'
GO

CREATE CERTIFICATE MYServCert
FROM FILE = '\\win2k3dc-alh16t\E$\MyCert.dat'
WITH PRIVATE KEY (FILE = '\\win2k3dc-alh16t\E$\PrivKey.dat',
DECRYPTION BY PASSWORD = 'MyStrongPassword')

So the master key and the certificate are in place. Yet when we try to restore the database using SSMS, it throws an error:

Please create a master key in the database or open the master key in the session before performing this operation.

RESTORE FILELIST is terminating abnormally. (Microsoft SQL Server, Error: 15581)

The same error comes from a T-SQL window if we try to restore the database using the command below…

restore database AdventureWorks2008R2
from disk = '\\win2k3dc-alh16t\E$\AdventureWorks2008.bak'
with move 'AdventureWorks2008R2_Data' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Advworks2008.mdf',
move 'AdventureWorks2008R2_log' To 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Advworks2008_log.ldf',
move 'FileStreamDocuments2008R2' To 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\FSDATA'

This also generates the same error as above.

This error simply means that we need to open the master key in the same session (server process ID) as the restore operation. If I run the command below and then perform the restore, there is no error and the database restore is successful.

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'MyStrongPassword'

When restoring through SSMS there is no provision for opening the master key, so the restore operation cannot decrypt the encrypted files and fails.

The same error happens if you try to back up an encrypted database using SSMS. It is also generated if we back up/restore the database from a T-SQL window that does not include the OPEN MASTER KEY command.

For the master key to be usable by all sessions, it needs to be encrypted by the Service Master Key. When the database master key is encrypted by the Service Master Key, it can be opened and closed automatically, and we do not need to issue any OPEN MASTER KEY command.

In the script above for restoring the master key and the certificate, we would therefore need to add one additional command:

ALTER MASTER KEY
    ADD ENCRYPTION BY SERVICE MASTER KEY

How to validate if SQL FILESTREAM document was modified outside SQL

Recently, I came across a question about validating whether a SQL FILESTREAM document has been modified outside of SQL.

My immediate reaction was one of shock. If a FILESTREAM document has been modified outside of SQL, then we have bigger problems at hand: serious issues with SQL Server security, and data consistency at risk.

FILESTREAM storage is not supposed to be modified outside of SQL. We can have Win32 streaming of these documents (which are created on the file system), but it has to be done in the context of a SQL transaction, and only within the context of a SQL transaction. FILESTREAM was not designed for external modifications.

If the documents are modified outside of SQL, then the database is at risk. For example, if we delete the files from the file system by mistake, SQL Server treats it as database corruption and throws errors during the next CHECKDB run.
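
For instance, a standard consistency check (shown here against the TestFileStream database from the earlier post) reports errors like the ones below:

DBCC CHECKDB (N'TestFileStream');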

Msg 7904, Level 16, State 2, Line 1
Table error: Cannot find the FILESTREAM file "00000020-00000146-000a" for column ID 2 (column directory ID d35bf83a-99c0-4a7d-ac24-e9f7cf15a54b) in object ID 101575400, index ID 1, partition ID 72057594038910976, page ID (1:170), slot ID 0.
There are 4 rows in 1 pages for object "DocumentStore".
CHECKDB found 0 allocation errors and 3 consistency errors in table 'DocumentStore' (object ID 101575400).

But the question is: how do we figure out whether a FILESTREAM document was modified outside of SQL?

The short answer is that there is NO way to do it. There is no built-in function or mechanism to figure that out. The simple reason is that the path used to store the FILESTREAM data is not exposed for direct access. Also, the file names visible in the file system are not the same ones SQL Server maintains; they are different, and there is no way to correlate the SQL names with the file system names.

As a SQL Server DBA, you need to make sure that the FILESTREAM files do not get modified outside of SQL.

Creating a Report Subscription using .Net

In order to create a report subscription, we can use any of the following Reporting Services endpoints:

  • ReportService2005 – if Reporting Services (SSRS 2008 or earlier) is running in Native mode
  • ReportService2006 – if Reporting Services (SSRS 2008 or earlier) is running in SharePoint integrated mode
  • ReportService2010 – for Reporting Services 2008 R2

Each of these endpoints exposes a method for creating a report subscription. In this example, I am creating a report subscription for a Windows file share; the same technique can be used to create an email-based subscription.

Steps to create a Report subscription

Declare an object of ReportingService2005 (or any of the endpoints mentioned above) and bind the object to the report server.

ReportingService2005 rs = new ReportingService2005();
rs.Url = "http://localhost/reportserver/ReportService2005.asmx";
rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

Provide the path of the report for which the subscription needs to be made, and define a schedule for the subscription. In this example, I created a subscription schedule to run at 5 PM every Thursday.

string report = "/PowerManagementSolution/MachineTypeInformation";
string desc = "Created using .Net";
string eventType = "TimedSubscription";
string scheduleXml =
    @"<ScheduleDefinition>" +
    "   <StartDateTime>2012-02-16T17:00:00" +
    "   </StartDateTime>" +
    "   <WeeklyRecurrence>" +
    "      <WeeksInterval>1</WeeksInterval>" +
    "      <DaysOfWeek>" +
    "         <Thursday>True</Thursday>" +
    "      </DaysOfWeek>" +
    "   </WeeklyRecurrence>" +
    "</ScheduleDefinition>";

Next, I defined the delivery parameters for the subscription (for example, the format and the location where the rendered report needs to be created) using a ParameterValue collection.

ParameterValue[] extensionParams = new ParameterValue[7];

// File name for the rendered report.
extensionParams[0] = new ParameterValue();
extensionParams[0].Name = "FILENAME";
extensionParams[0].Value = "Sales Order Detail";

// Append a file extension to the created file.
extensionParams[1] = new ParameterValue();
extensionParams[1].Name = "FILEEXTN";
extensionParams[1].Value = "True";

// File share and render format for the report.
extensionParams[2] = new ParameterValue();
extensionParams[2].Name = "PATH";
extensionParams[2].Value = @"\\myfileshare\MyReports";

extensionParams[3] = new ParameterValue();
extensionParams[3].Name = "RENDER_FORMAT";
extensionParams[3].Value = "EXCEL";

// The login account that has permissions to write to the file share.
extensionParams[4] = new ParameterValue();
extensionParams[4].Name = "USERNAME";
extensionParams[4].Value = @"<NetworkLogin>";

extensionParams[5] = new ParameterValue();
extensionParams[5].Name = "PASSWORD";
extensionParams[5].Value = "<NetworkPassword>";

// Write mode for file creation.
extensionParams[6] = new ParameterValue();
extensionParams[6].Name = "WRITEMODE";
extensionParams[6].Value = "None";

Once the parameters and the report values have been specified, we need to call the CreateSubscription method to create the subscription.

// Set the report server subscription details. This is a file-share based subscription.
string matchData = Convert.ToString(scheduleXml);
ExtensionSettings extSettings = new ExtensionSettings();
extSettings.ParameterValues = extensionParams;
extSettings.Extension = "Report Server FileShare";

try
{
    // Call the CreateSubscription method on the Reporting Services endpoint.
    rs.CreateSubscription(report, extSettings, desc,
        eventType, matchData, null);
    MessageBox.Show("Subscription Created");
    Application.Exit();
}
catch (SoapException E)
{
    MessageBox.Show(E.Message);
}

This code creates the subscription on the specified report server instance.