What Did that Geek Just Say?

Working in a technical field has its ups and downs. One of the more common annoyances I run into is people who either cannot explain what they mean in simple terms or choose not to in order to appear knowledgeable. I still have a long way to go before I can suggest a sure-fire approach to curtailing this behavior. Quite honestly, I have been known to fall back on “fancy tech words” on more than one occasion to hide ignorance or to try to seem smart (OK, that was hiding ignorance twice, but it sounds better the way I said it). What I can offer is a great resource for understanding what people are saying and maybe even having some fun with them/me.

The National Institute of Standards and Technology (NIST) maintains a Dictionary of Algorithms and Data Structures. The dictionary is a great place to go to find out what people are talking about. It is also a great place to look for different ways of doing things. I sometimes even go to the site just to look for entertaining technical concepts, like the “Cactus Stack”, “Stooge Sort” or even “Big-O Notation”.

Have fun with the big words but please only use them for good or at least funny evil. Go ahead and post any you feel are interesting or just plain funny in the comments of this post. The more time I spend on that site the more terms I find I was overlooking.

How is Fill Factor Impacting My Indexes?

TSQLTuesday LogoThe theme for this month’s T-SQL Tuesday is indexes so it seemed like the perfect excuse to blog about a script that I have written to see what choices for fill factors on indexes actually does to the structure of those indexes. I have to give special thanks to Brent Ozar (Blog|Twitter) for taking the time to review and offer his thoughts on the query. I have to admit that I was nervous to publish the script because I have not seen anything else like it and figured there must have been a reason for that.

For those that are unfamiliar, fill factor is an optional parameter that can be specified when adding or rebuilding an index. Specifying a fill factor tells SQL Server to leave a certain percentage of each data page open for future inserts in order to lessen the likelihood of page splits. Page splits are what happens when SQL Server tries to add another row to a data page that does not have room for it. Most page splits involve taking half the rows on the page and putting them onto a newly allocated page somewhere else in your data file, leaving sufficient room for the new row to be added to either page. If you are lucky enough that the row you are adding would be the last row on the page, then the existing page is left as is and the new row is added to the newly allocated page. Regardless of how the page splits, the new page is almost never anywhere near the other pages of the index it belongs to. The scattering of index pages means that the disk heads have to move around a lot more, leading to poor performance.
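The mechanics can be sketched with a toy model (the key values, page capacity, and function name below are purely illustrative, not anything SQL Server exposes):

```python
# Toy illustration of a page split: when a row will not fit, the upper
# half of the rows moves to a freshly allocated page so the new row can
# land on whichever half it sorts into.
def split_page(page, new_row, capacity):
    if len(page) < capacity:            # room left: no split needed
        page.append(new_row)
        page.sort()
        return page, None
    half = len(page) // 2               # typical split: keep the lower half in place
    old_page, new_page = page[:half], page[half:]
    target = old_page if new_row < new_page[0] else new_page
    target.append(new_row)
    target.sort()
    return old_page, new_page

old, new = split_page([10, 20, 30, 40], 25, capacity=4)
print(old, new)  # → [10, 20, 25] [30, 40]
```

The important part is the last line: the new page holding 30 and 40 is allocated wherever there happens to be space, not next to its logical neighbor.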

Now that we have talked about the problems that fill factor can help us with, we should talk about the dark side. Yes, the dark side. Setting the fill factor to anything other than the default decreases the rows per page for that index, thereby increasing the number of pages that must be read. According to Books Online, the read performance penalty is roughly twice the percentage of free space the fill factor leaves on each page. This means that setting the fill factor to 50% can lead to twice as many reads to get the same data. Even a more reasonable number like 90% would carry a 20% performance penalty on all reads.

By now it should be clear that choosing the right fill factor for your indexes is one of the more important steps in creating an index, right behind picking the right key columns. The problem is knowing how to pick a good number and here is where it gets tough because like everything else: It Depends and It Changes. My method of setting fill factors is to calculate the rows per page of an index then use the expected change in rows between reindex operations to figure out what percentage of rows need to be left free per page. The exception to this process is if the index is on an ever increasing value, like an identity column, then the fill factor is automatically 100.
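The arithmetic behind that method can be sketched as follows (the helper name and the 50–100 clamp are my own illustration, not a hard rule):

```python
import math

def suggested_fill_factor(avg_record_size_bytes, expected_new_rows, current_rows):
    """Sketch of the approach above: work out rows per page, then leave
    enough free space per page to absorb the expected growth between
    reindex operations."""
    rows_per_page = 8096 // (avg_record_size_bytes + 2)  # usable page bytes / row cost
    growth_fraction = expected_new_rows / current_rows   # e.g. 0.05 = 5% growth
    fill = math.floor(100 * (1 - growth_fraction))
    return max(min(fill, 100), 50)  # clamp to a sane range

# A table averaging 200-byte rows that grows ~5% between rebuilds:
print(suggested_fill_factor(200, 5_000, 100_000))  # → 95
```

For an ever-increasing key like an identity column, expected growth on existing pages is zero, so the same logic lands on 100.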

My process works very well for the “It Depends” part of setting a fill factor but completely ignores the “It Changes” part. Over time as tables get larger, the fill factor setting on a table needs to be adjusted down. I have also run into servers where the default fill factor has been set to a value other than 0 (same as 100%), creating a need to quickly identify indexes that could perform better. What I needed was a simple query that I could run that would very quickly give me an idea of where I can adjust fill factors to improve performance.

Here is that query:

SELECT      OBJECT_NAME(ips.object_id) AS table_name,
            ips.index_type_desc,
            ISNULL(i.name, ips.index_type_desc) AS index_name,
            ISNULL(REPLACE(RTRIM((  SELECT      c.name + CASE WHEN c.is_identity = 1 THEN ' (IDENTITY)' ELSE '' END + CASE WHEN ic.is_descending_key = 0 THEN '  ' ELSE ' DESC  ' END
                                    FROM        sys.index_columns ic
                                                    INNER JOIN sys.columns c
                                                          ON ic.object_id = c.object_id
                                                                AND ic.column_id = c.column_id
                                    WHERE       ic.object_id = ips.object_id
                                                          AND ic.index_id = ips.index_id
                                                                AND ic.is_included_column = 0
                                    ORDER BY    ic.key_ordinal
                                    FOR XML PATH(''))), '  ', ', '), ips.index_type_desc)  AS index_keys,
            ips.record_count,
            (ips.page_count / 128.0) AS space_used_in_MB,
            ips.avg_page_space_used_in_percent,
            CASE WHEN i.fill_factor = 0 THEN 100 ELSE i.fill_factor END AS fill_factor,
            8096 / (ips.max_record_size_in_bytes + 2.00) AS min_rows_per_page,
            8096 / (ips.avg_record_size_in_bytes + 2.00) AS avg_rows_per_page,
            8096 / (ips.min_record_size_in_bytes + 2.00) AS max_rows_per_page,
            8096 * ((100 - (CASE WHEN i.fill_factor = 0 THEN 100.00 ELSE i.fill_factor END)) / 100.00) / (ips.avg_record_size_in_bytes + 2.0000) AS defined_free_rows_per_page,
            8096 * ((100 - ips.avg_page_space_used_in_percent) / 100.00) / (ips.avg_record_size_in_bytes + 2) AS actual_free_rows_per_page,
            reads = ISNULL(ius.user_seeks, 0) + ISNULL(ius.user_scans, 0) + ISNULL(ius.user_lookups, 0),
            writes =  ISNULL(ius.user_updates, 0),
            1.00 * (ISNULL(ius.user_seeks, 0) + ISNULL(ius.user_scans, 0) + ISNULL(ius.user_lookups, 0)) / ISNULL(CASE WHEN ius.user_updates > 0 THEN ius.user_updates END, 1) AS reads_per_write
FROM        sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
            INNER JOIN sys.indexes i
                ON ips.object_id = i.object_id
                    AND ips.index_id = i.index_id
            LEFT OUTER JOIN sys.dm_db_index_usage_stats ius
                ON ius.database_id = DB_ID()
                    AND ips.object_id = ius.object_id
                        AND ips.index_id = ius.index_id
WHERE       ips.alloc_unit_type_desc != 'LOB_DATA'
ORDER BY    ips.index_type_desc,
            OBJECT_NAME(ips.object_id),
            (ips.page_count / 128.0)
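The size math in the query can be sanity-checked outside SQL Server (these helper functions are my own restatement of the formulas in the SELECT list above):

```python
# SQL Server data pages have 8096 bytes usable for rows, and each row
# costs its record size plus a 2-byte slot-array entry -- the same
# constants the query uses.
def rows_per_page(record_size_bytes):
    return 8096 / (record_size_bytes + 2.0)

def defined_free_rows_per_page(avg_record_size_bytes, fill_factor):
    fill = 100 if fill_factor == 0 else fill_factor  # 0 means "use the default, 100"
    free_bytes = 8096 * (100 - fill) / 100.0
    return free_bytes / (avg_record_size_bytes + 2.0)

print(round(rows_per_page(98), 1))                    # → 81.0 rows fit per page
print(round(defined_free_rows_per_page(98, 90), 1))   # → 8.1 rows left free at 90% fill
```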

The query should be very familiar to anyone that has looked at index fragmentation in SQL 2005 or newer. The same rules apply; the only difference is the columns being used. For larger databases, consider limiting the scan to a single table or even a single index. It is also a good idea to ignore smaller tables here. I leave it up to the individual running the script to define a small table. For some that will be 100 pages, for others 500 pages, but anything over 1,000 pages should probably be looked at.

The size calculations used in the query are based on the formulas found here: http://msdn.microsoft.com/en-us/library/ms178085(SQL.90).aspx, although the math is quite simple because the DMV accounts for things like null bitmaps and row version information.

I assume that everyone will come up with slightly different ways to use the query. I like to make two passes over the data, the first in the morning and the second after the end of the business day. My first pass through the results is used to look for indexes that have too little free space set aside. They are easy to find because their free rows per page numbers are less than 1. A value of less than 1 means that the fill factor either needs to be changed to allow some free rows per page or to be more honest about the actual number of free rows per page. My second pass is used to look at the change over the course of the day. The best way to do the comparison is to paste both result sets into Excel and use formulas to look for differences. The second pass will show the indexes that have their fill factor set either too high or too low. The idea is to focus just as much on the indexes that show significant changes as on those that do not show any changes at all.
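The two-pass comparison amounts to a simple diff. A minimal sketch, assuming each pass was saved as a mapping of index name to actual free rows per page (the index names and thresholds here are made up for illustration):

```python
# Morning and evening snapshots of actual_free_rows_per_page per index.
morning = {"IX_Orders_Date": 4.2, "IX_Customers_Name": 0.3, "PK_Orders": 1.0}
evening = {"IX_Orders_Date": 0.4, "IX_Customers_Name": 0.3, "PK_Orders": 1.0}

flagged = {}
for index_name in morning:
    change = morning[index_name] - evening[index_name]
    if change > 1:
        # Burned through free space during the day: it may need a lower
        # fill factor (more room) to avoid splits before the next rebuild.
        flagged[index_name] = "lower fill factor"
    elif change == 0:
        # Free space went completely unused: the fill factor could be raised.
        flagged[index_name] = "raise fill factor"

print(flagged)
```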

So there it is, a query to tell how good the current fill factor settings are.

To make sure that all users stay as happy as possible it is best to run the query the first time during an off-peak time so that impact can be safely gauged.

Please let me know if you run into any issues, have any ideas that would make this script better, or just want to share how you are using it. As always, scripts from the internet are like Halloween candy: inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

A Stored Procedure to Move SSIS Packages Between Servers

Today’s post is one that I have been debating on whether to publish for a while. The purpose of the stored procedure I am sharing is to move SSIS packages stored via SQL Server storage from one SQL 2005 server to another in a way that can easily be invoked by any release management system that can call stored procedures. The part I have reservations about is that it uses linked servers. I almost never allow linked servers to be created on the servers I manage, mostly because they can be a security problem. Breaking the rules in this case is what was right for the particular problems I was trying to solve. Please consider whether you can implement this logic another way before using this stored procedure in your environment.

This stored procedure is not terribly complicated, so I will run through what it does fairly quickly. The first step is to get the folder_id of the package we want to copy. If it gets more than one folder name back, it throws an error because it does not know which package to move. If the folder_id returned is null, then an error is thrown. If the stored procedure makes it through those checks, the current version at the destination is deleted and the new version is copied there.

Here is the code:

CREATE PROCEDURE dbo.move_ssis_package
    @from_server_name varchar(256),
    @to_server_name varchar(256),
    @package_name sysname
AS

DECLARE @sql_command nvarchar(4000),
        @folder_id uniqueidentifier,
        @foldername sysname

SELECT @sql_command = 'SELECT @folder_id = pf2.[folderid]
FROM [' + @from_server_name + '].[msdb].[dbo].[sysdtspackagefolders90] pf
    INNER JOIN [' + @from_server_name + '].[msdb].[dbo].[sysdtspackages90] p
        ON pf.folderid = p.folderid
    LEFT OUTER JOIN [' + @to_server_name + '].[msdb].[dbo].[sysdtspackagefolders90] pf2
        ON pf.[foldername] = pf2.[foldername]
WHERE p.name = @package_name'

EXEC sp_executesql @sql_command, N'@package_name sysname, @folder_id uniqueidentifier OUTPUT', @package_name = @package_name, @folder_id = @folder_id OUTPUT

IF @@ROWCOUNT > 1
BEGIN
    RAISERROR ('This package exists in more than one location.', 16, 1)
END

IF @folder_id IS NULL
BEGIN
    RAISERROR ('SSIS Folder does not exist.', 16, 1)
END

SELECT @sql_command = 'DELETE [' + @to_server_name + '].[msdb].[dbo].[sysdtspackages90]
WHERE name = @package_name'

EXEC sp_executesql @sql_command, N'@package_name sysname', @package_name = @package_name

SELECT @sql_command = 'INSERT [' + @to_server_name + '].[msdb].[dbo].[sysdtspackages90]
SELECT [name]
    ,[id]
    ,[description]
    ,[createdate]
    ,@folder_id AS [folderid]
    ,[ownersid]
    ,[packagedata]
    ,[packageformat]
    ,[packagetype]
    ,[vermajor]
    ,[verminor]
    ,[verbuild]
    ,[vercomments]
    ,[verid]
    ,[isencrypted]
    ,[readrolesid]
    ,[writerolesid]
FROM [' + @from_server_name + '].[msdb].[dbo].[sysdtspackages90]
WHERE name = @package_name'

EXEC sp_executesql @sql_command, N'@package_name sysname, @folder_id uniqueidentifier', @package_name = @package_name, @folder_id = @folder_id

Please let me know if you run into any issues, have any ideas that would make this stored procedure better, or just want to share how you are using it. As always, scripts from the internet are like Halloween candy: inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Great News! SSMS Tools Pack 1.9 is Coming Out!

Mladen Prajdic (Blog|Twitter) recently announced that the newest version of SSMS Tools Pack is coming out and I am excited.

Why am I excited?

Well, I am glad you asked.

I am excited because it will allow me to define my window colors in SQL Management Studio using regular expressions rather than having to define them each individually.

Why is that such a big deal?

I have 100s of servers and I am constantly adding new servers while decommissioning old ones. The sheer amount of changes that would have to be made manually has always kept me from being able to take advantage of window coloring. Rather than have 100s of rules I now have less than 10 regular expressions that cover all of my servers. Here, check it out:

SSMS Tools Pack Connection Coloring Options Window
Naming convention changed to protect the employed.
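The idea behind regex-based coloring can be sketched in a few lines (the server naming convention, colors, and helper below are hypothetical, not SSMS Tools Pack code):

```python
import re

# Hypothetical convention: environment encoded in the server name,
# e.g. SQLPROD01, SQLQA03, SQLDEV12. A handful of patterns like these
# can replace hundreds of per-server coloring rules.
color_rules = [
    (re.compile(r"^SQLPROD\d+$"), "red"),
    (re.compile(r"^SQLQA\d+$"),   "yellow"),
    (re.compile(r"^SQLDEV\d+$"),  "green"),
]

def window_color(server_name):
    for pattern, color in color_rules:
        if pattern.match(server_name):
            return color
    return "gray"  # unmatched servers fall back to a default

print(window_color("SQLPROD07"))  # → red
```

A new server that follows the convention picks up the right color with zero configuration changes, which is the whole point.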

So is coloring windows all the SSMS Tools Pack does?

No, not at all. The SSMS Tools Pack is a large suite of plug-ins for SQL Management Studio available as a free download. There are several features that I cannot live without. My favorite feature is that it can be configured to keep a journal of all queries you have run. This can be especially useful if you work in an environment where a sys admin can push updates that cause your machine to reboot whenever they feel like it. If you are not familiar with all of the features currently in the product then please go check out the list here.

If you have not tried out the SSMS Tools Pack then I highly suggest you give version 1.9 a whirl. I know I will.

How Can I Quickly Script Out Replication?

I recently finished taking down a bunch of servers that I was using to scale out my environment by forcing read only connections off of my main read-write servers. To make a long story short, hardware advances and the additional diagnostic information in SQL 2005 allowed me to consolidate to a few very powerful, reasonably well-tuned read-write servers. The consolidation of servers allowed me to save a ton of power and cooling along with some rack space and a good size chunk of SAN disk.

Taking down the servers means that I now have to update all of my environment diagrams, server configuration scripts and even a spreadsheet or two. Anyone who has ever done this before is cringing right now. One of the worst tasks is updating the replication scripts. I script my replication settings to a network share just in case I do something silly and need to revert to my last known good setup. The scripts can really save my bacon, but they are incredibly tedious to create. I have to go into Management Studio, right-click on each publication, select generate script, select script to file, then finally find the existing file for that database to add to or decide there is not one and start a new file. With the amount of scripts I had to create, it would have easily taken 4, make that 8, hours with interruptions to get everything scripted.

Given that the whole process would have taken hours and probably would have gotten screwed up along the way, I decided to turn to PowerShell. Unfortunately, I did not have a script ready to go….WHHAAAT?…yeah I know..I don’t have a script for everything..so I threw the question out to Twitter. Aaron Nelson (Blog|Twitter) came back right away, pointing me toward SQL PowerShell Extensions (SQLPSX) and very quickly I had a working script. If you are not familiar with SQLPSX please take some time to check it out. It really makes coding PowerShell for SQL Server fast. More importantly, if you are not part of the SQL community on Twitter then get there first.

The actual script is not terribly complex. It takes a distribution server name and an output directory as parameters then works through all publications on each of the servers that connects to the distribution server, scripting them out.

I have only run this script against a dedicated distribution server but it should also work where the publisher is the distributor too.

I spent about 4 hours throwing the script together and generated all of the scripts I needed in a little over 1 minute.

With that, here is the script:

Update: Chad Miller (Blog|Twitter) showed how this script could take better advantage of the features of SQLPSX. His version of the script is available here: http://sev17.com/2010/08/quickly-script-out-replication-redux/

param ([string]$sqlServer, [string]$outputDirectory, [bool]$scriptPerPublication)

if ($sqlServer -eq "")
{
    $sqlserver = Read-Host -Prompt "Please provide a value for -sqlServer"
}

if ($outputDirectory -eq "")
{
    $outputDirectory = Read-Host -Prompt "Please provide a value for -outputDirectory"
}

function ScriptPublications
{
    param ([string]$sqlServer, [string] $outputDirectory, [bool] $scriptPerPublication)
   
    Import-Module Repl
   
    [string] $path =  "$outputDirectory\$((get-date).toString('yyyy-MMM-dd_HHmmss'))"
   
    New-Item $path -ItemType Directory | Out-Null
   
    foreach($publication in Get-ReplPublication $sqlServer)
    {
        [string] $fileName = "{0}\{1}.sql" -f $path,$publication.DatabaseName.Replace(" ", "")
        if($scriptPerPublication)
        {
            $fileName = "{0}\{1}_{2}.sql" -f $path,$publication.DatabaseName.Replace(" ", ""),$publication.Name.Replace(" ", "")
        }
        [string] $progressText = "Scripting {0} to {1}" -f $publication.Name.Replace(" ", ""),$fileName
        Write-Output $progressText
        $publication.Script([Microsoft.SqlServer.Replication.scriptoptions]::Creation `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludeArticles `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludePublisherSideSubscriptions `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludeCreateSnapshotAgent `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludeGo `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::EnableReplicationDB `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludePublicationAccesses `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludeCreateLogreaderAgent `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludeCreateQueuereaderAgent `
            -bor  [Microsoft.SqlServer.Replication.scriptoptions]::IncludeSubscriberSideSubscriptions) | Out-File $fileName -Append
    }
}

[Microsoft.SqlServer.Management.Common.ServerConnection] $serverConnection = new-object Microsoft.SqlServer.Management.Common.ServerConnection($sqlServer)
[Microsoft.SqlServer.Replication.ReplicationServer] $distributor = New-Object Microsoft.SqlServer.Replication.ReplicationServer($serverConnection);

foreach($distributionPublisher in $distributor.DistributionPublishers)
{
    if($distributionPublisher.PublisherType -eq "MSSQLSERVER")
    {
        [string] $path = $outputDirectory + "\from_" + $distributionPublisher.Name.Replace("\", "_")
        ScriptPublications -sqlServer $distributionPublisher.Name -outputDirectory $path -scriptPerPublication $false
    }
}

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy: inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

What is an Easy Way to Return Results from a CLR Stored Procedure?

Introduction

What is an easy way to return results from a CLR stored procedure? The question sounds simple enough, yet when I went searching for answers I could not find one. This post describes a helper class that I came up with to handle returning values from a CLR stored procedure.

My Solution

When I set out to write my first CLR stored procedure I expected to be able to do something easy, like write a method that returns an array, and have SQL Server work out how to display it as a recordset. In the end I found that CLR works sort of like that, except that you have to figure out all the sizes, declare the structure, then handle the passing back of each and every cell in each and every row. I guess that is OK, but if you have read any of my other posts you will have noticed a common theme: I am Lazy. Being as lazy as I am, I started digging into IntelliSense to see what methods the various classes exposed to make my life easier. Pretty quickly I found SqlMetaData.InferFromValue to define the columns of the result without having to figure out what SqlMetaData type each column in the result set converted to.

Armed with a way to quickly define a column in a recordset I started adding iterative code to walk through various types of objects. I started with walking a DataReader then added DataTables and DataSets, then finally progressed to using reflection to display all of the properties of an object or even all of the properties for all of the objects in an array. I have also added an optional debug flag to output information about the result set to make it easy to define a temporary table to hold the results. Now I have a helper class that I can reference from my CLR stored procedures to quickly return results without very much time spent coding.
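The reflection idea translates to other languages too. A minimal sketch in Python (the `FileStats` class and helper are stand-ins I made up, not the C# helper below):

```python
# Walk an object's public attributes and emit them as a column list plus
# a row of values -- the same shape a result set needs.
class FileStats:  # hypothetical object standing in for something like FileInfo
    def __init__(self, name, length, is_read_only):
        self.name = name
        self.length = length
        self.is_read_only = is_read_only

def render_results(obj):
    columns = [a for a in vars(obj) if not a.startswith("_")]
    row = [getattr(obj, a) for a in columns]
    return columns, row

cols, row = render_results(FileStats("report.txt", 1024, False))
print(cols)  # → ['name', 'length', 'is_read_only']
print(row)   # → ['report.txt', 1024, False]
```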

Here is an example to show how easy the helper class makes it to code a CLR stored procedure:

using System;
using System.IO;
using AdventuresInSql;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void FileInfo(String filePath)
    {
        FileInfo fileInfo = new FileInfo(filePath);
        SqlClrHelper.RenderResults(fileInfo);
    }
};

On my systems, I deploy the helper class in its own assembly, add the assembly to the server I want to develop against, then open a new project, connect to that server, reference that assembly and write my code. I realize that most people are not using CLR in any distributed manner, making it easiest to just include the class in their project and run with it there.

Warnings: I highly suggest taking the time to deploy this class in its own assembly. The assembly this class resides in has to be marked UNSAFE. The database this assembly is deployed to must also be marked TRUSTWORTHY, so I highly suggest keeping CLR objects in their own highly secured database. Most importantly, if you do not know what these settings do, stop now and find out before moving any further. UPDATE: Per Adam Machanic’s (Blog|Twitter) comments below, the TRUSTWORTHY setting is not needed if you use certificates.

With that, here is the code for the helper class (updated 7/26/2010 to better handle null values, increase performance and make it easier to use; the biggest changes are switching to generic methods rather than using object-typed parameters and getting column definitions more efficiently):

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;
using System.Globalization;
using System.Reflection;
using Microsoft.SqlServer.Server;

public sealed class SqlClrHelper
{

    #region Member Variables

    private const String ARGUMENT_EXCEPTION_STRING = "Extract of property: \"{0}\" failed with the following message: {1}";
    private const String COLUMN_NAME = "ColumnName";
    private const String COLUMN_SIZE = "ColumnSize";
    private const String DATA_TYPE = "DataType";
    private const String DEBUG_WARNING_MESSAGE = "***Turn off debug before trying to select into a table to avoid conversion exceptions***";
    private const String OBJECT_TYPE_DIFFERENT_EXCEPTION = "All objects in objectsToRender[] must be of the same type.";
    private const String TO_STRING = "ToString()";

    #endregion

    #region Internal Methods

    /// <summary>  
    ///<para>Class will only ever contain static methods.
    ///Added private constructor to prevent compiler from generating default constructor.</para>  
    /// </summary>  
    private SqlClrHelper()
    {
    }

    /// <summary>  
    ///<para>This method takes a column name, type and maximum length, returning the column definition as SqlMetaData.</para>  
    /// </summary>
    /// <param name="System.String">A column name to be used in the returned Microsoft.SqlServer.Server.SqlMetaData.</param>
    /// <param name="System.Type">A column data type to be used in the returned Microsoft.SqlServer.Server.SqlMetaData.</param>
    /// <param name="System.Int32">The maximum length of the column to be used in the returned Microsoft.SqlServer.Server.SqlMetaData.</param>
    private static SqlMetaData ParseSqlMetaData(String columnName, Type type, Int64 maxLength)
    {
        SqlParameter sqlParameter = new SqlParameter();
        sqlParameter.DbType = (DbType)TypeDescriptor.GetConverter(sqlParameter.DbType).ConvertFrom(type.Name);
        if (sqlParameter.SqlDbType == SqlDbType.Char || sqlParameter.SqlDbType == SqlDbType.NChar || sqlParameter.SqlDbType == SqlDbType.NVarChar || sqlParameter.SqlDbType == SqlDbType.VarChar)
        {
            if (maxLength > 8000)
            {
                maxLength = -1;
            }
            return new SqlMetaData(columnName, sqlParameter.SqlDbType, maxLength);
        }
        else if (sqlParameter.SqlDbType == SqlDbType.Text || sqlParameter.SqlDbType == SqlDbType.NText)
        {
            return new SqlMetaData(columnName, sqlParameter.SqlDbType, -1);
        }
        else
        {
            return new SqlMetaData(columnName, sqlParameter.SqlDbType);
        }
    }

    /// <summary>  
    ///<para>This method takes a single object and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="<T>">A populated object.</param>
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")]
    private static void RenderResults<T>(T objectToRender)
    {
        RenderResults(objectToRender, false);
    }

    /// <summary>  
    ///<para>This method takes the SqlMetaData created to render an object and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="System.Collections.Generic.List<Microsoft.SqlServer.Server>">A reference to a populated SqlMetaData List.</param>
    private static void WriteRecordStructure(List<SqlMetaData> sqlMetaDataList)
    {
        RenderResults(sqlMetaDataList.ToArray(), false);
    }

    #endregion

    #region Public Methods

    #region RenderResults Overloads

    #region Pass-Through Overloads

    /// <summary>  
    ///<para>This method takes a DataSet and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="System.Data.DataSet">A reference to a populated DataSet.</param>  
    public static void RenderResults(DataSet dataSet)
    {
        RenderResults(dataSet, false); // forward to the two-argument overload; calling RenderResults(dataSet) here would recurse forever
    }

    /// <summary>  
    ///<para>This method takes a DataSet and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="System.Data.DataSet">A reference to a populated DataSet.</param>
    /// <param name="System.Boolean">A boolean value indicating whether or not to return information
    /// about the record structure to the client.</param>  
    public static void RenderResults(DataSet dataSet, Boolean isDebugOn)
    {
        foreach (DataTable dataTable in dataSet.Tables)
        {
            RenderResults(dataTable, isDebugOn);
        }
    }

    /// <summary>  
    ///<para>This method takes a DataTable and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="System.Data.DataTable">A reference to a populated DataTable.</param>  
    public static void RenderResults(DataTable dataTable)
    {
        RenderResults(dataTable, false);
    }

    /// <summary>  
    ///<para>This method takes an OleDbDataReader and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="System.Data.OleDb.OleDbDataReader">A reference to a populated OleDbDataReader.</param>
    public static void RenderResults(OleDbDataReader dataReader)
    {
        RenderResults(dataReader, false);
    }

    /// <summary>  
    ///<para>This method takes a single object and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="<T>">A reference to a populated object.</param>
    /// <param name="System.Boolean">A boolean value indicating whether or not to return information
    /// about failed argument exceptions and record structure to the client.</param>
    public static void RenderResults<T>(T objectToRender, Boolean isDebugOn)
    {
        T[] objectsToRender = new T[1];
        objectsToRender[0] = objectToRender;
        RenderResults(objectsToRender, isDebugOn);
    }

    /// <summary>  
    ///<para>This method takes an array of objects and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="System.Object[]">A reference to a populated object.</param>
    public static void RenderResults<T>(T[] objectsToRender)
    {
        RenderResults(objectsToRender, false);
    }

    #endregion

    /// <summary>  
    ///<para>This method takes a DataTable and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="dataTable">A reference to a populated DataTable.</param>  
    /// <param name="isDebugOn">A boolean value indicating whether or not to return information
    /// about the record structure to the client.</param>  
    public static void RenderResults(DataTable dataTable, Boolean isDebugOn)
    {
        List<SqlMetaData> sqlMetaDataList = new List<SqlMetaData>();
        // Iterate the column collection rather than the first row's ItemArray
        // so an empty DataTable does not throw an IndexOutOfRangeException.
        for (int i = 0; i < dataTable.Columns.Count; i++)
        {
            sqlMetaDataList.Add(ParseSqlMetaData(dataTable.Columns[i].ColumnName, dataTable.Columns[i].DataType, dataTable.Columns[i].MaxLength));
        }
        SqlDataRecord sqlDataRecord = new SqlDataRecord(sqlMetaDataList.ToArray());
        SqlContext.Pipe.SendResultsStart(sqlDataRecord);
        if (SqlContext.Pipe.IsSendingResults)
        {
            foreach (DataRow dataRow in dataTable.Rows)
            {
                sqlDataRecord.SetValues(dataRow.ItemArray);
                SqlContext.Pipe.SendResultsRow(sqlDataRecord);
            }
            SqlContext.Pipe.SendResultsEnd();
        }
        if (isDebugOn)
        {
            WriteRecordStructure(sqlMetaDataList);
        }
    }

    /// <summary>  
    ///<para>This method takes an OleDbDataReader and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="oleDBDataReader">A reference to a populated OleDbDataReader.</param>
    /// <param name="isDebugOn">A boolean value indicating whether or not to return information
    /// about the record structure to the client.</param>  
    public static void RenderResults(OleDbDataReader oleDBDataReader, Boolean isDebugOn)
    {
        Int64 columnSize = 0;
        List<SqlMetaData> sqlMetaDataList = new List<SqlMetaData>();
        foreach (DataRow dataRow in oleDBDataReader.GetSchemaTable().Rows)
        {
            if (Int64.TryParse(((Int32)dataRow[COLUMN_SIZE]).ToString(CultureInfo.CurrentCulture), out columnSize))
            {
                sqlMetaDataList.Add(ParseSqlMetaData((String)dataRow[COLUMN_NAME], (Type)dataRow[DATA_TYPE], columnSize));
            }
            else
            {
                sqlMetaDataList.Add(ParseSqlMetaData((String)dataRow[COLUMN_NAME], (Type)dataRow[DATA_TYPE], -1));
            }
        }
        SqlDataRecord sqlDataRecord = new SqlDataRecord(sqlMetaDataList.ToArray());
        Object[] objects = new Object[sqlMetaDataList.Count];
        SqlContext.Pipe.SendResultsStart(sqlDataRecord);
        if (SqlContext.Pipe.IsSendingResults)
        {
            while (oleDBDataReader.Read())
            {
                oleDBDataReader.GetValues(objects);
                sqlDataRecord.SetValues(objects);
                SqlContext.Pipe.SendResultsRow(sqlDataRecord);
            }
            SqlContext.Pipe.SendResultsEnd();
        }
        if (isDebugOn)
        {
            WriteRecordStructure(sqlMetaDataList);
        }
        if (oleDBDataReader.NextResult())
        {
            RenderResults(oleDBDataReader, isDebugOn);
        }
    }

    /// <summary>  
    ///<para>This method takes an array of objects and renders it back to the client.</para>  
    /// </summary>  
    /// <param name="objectsToRender">A reference to an array of populated objects.</param>
    /// <param name="isDebugOn">A boolean value indicating whether or not to return information
    /// about failed argument exceptions and record structure to the client.</param>
    public static void RenderResults<T>(T[] objectsToRender, Boolean isDebugOn)
    {
        List<SqlMetaData> sqlMetaDataList = new List<SqlMetaData>();
        List<List<Object>> sqlMetaDataValues = new List<List<Object>>();
        SqlDataRecord sqlDataRecord = null;
        Type objectType = null;
        for (int i = 0; i < objectsToRender.Length; i++)
        {
            if (objectsToRender[i] == null)
            {
                continue;
            }
            T objectToRender = objectsToRender[i];
            if (objectType == null)
            {
                objectType = objectToRender.GetType();
            }
            if (objectToRender.GetType() != objectType)
            {
                throw (new InvalidCastException(OBJECT_TYPE_DIFFERENT_EXCEPTION));
            }
            foreach (PropertyInfo property in objectToRender.GetType().GetProperties())
            {
                SqlMetaData sqlMetaData = null;
                if (property.CanRead && property.GetIndexParameters().Length == 0)
                {
                    try
                    {
                        sqlMetaData = SqlMetaData.InferFromValue(property.GetValue(objectToRender, null), property.Name.ToString());
                        for (int j = 0; j < sqlMetaDataList.Count; j++)
                        {
                            if (sqlMetaDataList[j].Name == sqlMetaData.Name)
                            {
                                if (sqlMetaDataList[j].MaxLength < sqlMetaData.MaxLength)
                                {
                                    sqlMetaDataList[j] = sqlMetaData;
                                }
                                sqlMetaData = null;
                                break;
                            }
                        }
                        if (sqlMetaData != null)
                        {
                            sqlMetaDataList.Add(sqlMetaData);
                        }
                        if (sqlMetaDataValues.Count == i)
                        {
                            sqlMetaDataValues.Add(new List<Object>());
                        }
                        sqlMetaDataValues[i].Add(property.GetValue(objectToRender, null));
                    }
                    catch (ArgumentException ex)
                    {
                        if (isDebugOn)
                        {
                            SqlContext.Pipe.Send(String.Format(CultureInfo.CurrentCulture, ARGUMENT_EXCEPTION_STRING, property.Name.ToString(), ex.Message.ToString()));
                        }
                    }
                }
            }
            if (i == 0)
            {
                sqlMetaDataList.Add(SqlMetaData.InferFromValue(objectToRender.ToString(), TO_STRING));
            }
            sqlMetaDataValues[i].Add(objectToRender.ToString());
        }
        sqlDataRecord = new SqlDataRecord(sqlMetaDataList.ToArray());
        SqlContext.Pipe.SendResultsStart(sqlDataRecord);
        if (SqlContext.Pipe.IsSendingResults)
        {
            sqlMetaDataValues.ForEach(sqlMetaDataValue =>
            {
                sqlDataRecord.SetValues(sqlMetaDataValue.ToArray());
                SqlContext.Pipe.SendResultsRow(sqlDataRecord);
            });
            SqlContext.Pipe.SendResultsEnd();
        }
        if (isDebugOn)
        {
            WriteRecordStructure(sqlMetaDataList);
        }

    }

    #endregion

    #endregion

}

Conclusion

I fear that by making CLR easy to use I may be opening a can of worms. Before using CLR, please make sure it is the best way to accomplish the task you have been given.

As usual, I hope you find this class useful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that code from the internet is like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Why Would a Delete Make My Database Grow?

Introduction

A while back I had a developer come to me complaining that every time they ran a large delete statement on a certain database the delete would fail with a message claiming the database was full. My first instinct was that they were doing something wrong so I asked for the script so I could try it myself. To my surprise, running the delete actually did fill the database.

Troubleshooting

To figure out why a delete would cause a database to grow, I started with Profiler to see if anything was running as a side-effect of the deletes. The only thing that Profiler showed was the delete. Unable to explain what was happening, I threw the question to #SqlHelp on Twitter. Almost immediately, Paul Randal (Blog|Twitter) asked if I had Read Committed Snapshot Isolation (RCSI) turned on for that database. I confirmed that the database did in fact have RCSI turned on and Paul explained that what I was seeing was SQL Server adding version store pointers to the rows on the data pages as they were marked deleted.
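If you are not sure whether a database has RCSI turned on, the sys.databases catalog view will tell you without having to ask Twitter:

```sql
-- Check whether Read Committed Snapshot Isolation is enabled.
-- is_read_committed_snapshot_on = 1 means RCSI is on for the database.
SELECT  name,
        is_read_committed_snapshot_on,
        snapshot_isolation_state_desc
FROM    sys.databases
WHERE   name = DB_NAME();
```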

Moving Forward

Once I knew what the issue was my mind began to shift gears into how to prevent it from biting me in production. The obvious answer is to size the database large enough to leave extra space for large updates or deletes to add version information to the data pages. The problem with this approach is that eventually the reason for the free space will be forgotten and normal growth of the database will eat it up.

While looking for options I remembered a similar experience rebuilding indexes to move them to a new file group on new disk. The idea of the project was to move the database to new disk with minimal downtime. To accomplish the move I created a new file group and rebuilt all of the indexes onto it, starting with the clustered indexes. Once the primary data file was down to just the system tables I shrunk it, took the whole database offline, moved the file and brought the database back online. The total downtime was about 15 seconds but all of the work took about a week. The work I was doing had to be minimally disruptive so I used the ONLINE flag along with DROP_EXISTING to recreate the indexes. I was surprised to find at the end of that work that my database had grown significantly in size. After a ton of research I discovered that the ONLINE flag was adding version information to each page, leading to the unexpected growth.
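The rebuild pattern described above looks roughly like this sketch (the table, index and file group names here are illustrative, not from the original project; ONLINE = ON requires Enterprise Edition):

```sql
-- Rebuild an existing clustered index onto a new file group with
-- minimal blocking, moving the table's data in the process.
CREATE UNIQUE CLUSTERED INDEX IX_MyTable_id
    ON dbo.MyTable (id)
    WITH (ONLINE = ON, DROP_EXISTING = ON)
    ON [NewFileGroup];
```

Moving the clustered index moves the table's data pages with it, which is why starting with the clustered indexes empties out the old file group.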

Could the version information for online index operations be the same as what is used by Read Committed Snapshot Isolation?

Could rebuilding all of the indexes in my database with ONLINE = ON help me to pre-size my database, avoiding later surprises in production?

How would I go about proving my theory?

The Proof

To prove my theory that rebuilding my indexes with ONLINE = ON would allow me to pre-size my database, I found a quiet corner of the development environment and created a test database.

Here is the script to create the database if you should choose to follow along at home:

USE master
GO
IF EXISTS(SELECT * FROM sys.databases WHERE name = 'RecordSizeTest')
    DROP DATABASE RecordSizeTest
GO

CREATE DATABASE RecordSizeTest
GO

USE RecordSizeTest
GO
IF EXISTS(SELECT * FROM sys.tables WHERE name = 'TestTable')
    DROP TABLE dbo.TestTable
GO

CREATE TABLE dbo.TestTable
(
    id              int identity(1,1) NOT NULL,
    varchar_value   varchar(400) NOT NULL,
    bit_value       bit NOT NULL,
    create_date     smalldatetime NOT NULL DEFAULT(GETDATE())
)
GO

CREATE UNIQUE CLUSTERED INDEX IX_TestTable_id ON dbo.TestTable (id)
GO

INSERT  dbo.TestTable (varchar_value, bit_value)
    SELECT  REPLICATE('TEST', COUNT(*)),
            COUNT(*) % 2
    FROM    dbo.TestTable
GO 100

ALTER DATABASE RecordSizeTest SET READ_COMMITTED_SNAPSHOT ON
GO

SELECT * FROM dbo.TestTable

Now that the database is created, the first step is to see what the database pages look like by default using DBCC IND and DBCC PAGE.

Note: The commands I am using are well-covered elsewhere so I am not going to spend any time describing them beyond just showing how I used them. It goes without saying that you should not run anything on your systems without first taking the time to understand what it does.

The first step is to figure out where the table ended up. Time for DBCC IND:

DBCC IND(RecordSizeTest, TestTable, 0)
GO

Below are the results of the DBCC IND command. To keep things easy I am looking for the first page of the table. To find it I look for a page that has a PrevPagePID of 0 and a PageType of 1. In this case the page I am looking for is 145.

PageFID PagePID     IAMFID IAMPID      ObjectID    IndexID     PartitionNumber PartitionID          iam_chain_type       PageType IndexLevel NextPageFID NextPagePID PrevPageFID PrevPagePID
------- ----------- ------ ----------- ----------- ----------- --------------- -------------------- -------------------- -------- ---------- ----------- ----------- ----------- -----------
1       146         NULL   NULL        2105058535  1           1               72057594038845440    In-row data          10       NULL       0           0           0           0
1       145         1      146         2105058535  1           1               72057594038845440    In-row data          1        0          1           150         0           0
1       150         1      146         2105058535  1           1               72057594038845440    In-row data          1        0          1           153         1           145
1       153         1      146         2105058535  1           1               72057594038845440    In-row data          1        0          0           0           1           150

(4 row(s) affected)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Next I want to get a look at that page so I run the following command:

DBCC TRACEON(3604)
GO

DBCC PAGE(RecordSizeTest, 1, 145, 3)
GO

Below are the DBCC PAGE results. The two things to really notice are that the free space on the page (m_freeCnt) is currently 212 bytes and that the Record Attributes values are NULL_BITMAP and VARIABLE_COLUMNS. While we are looking at this page I am also making sure that the record in slot 0 has an id of 1 for the next step in the test.

PAGE: (1:145)


BUFFER:


BUF @0x0000000090FC6300

bpage = 0x000000009018C000           bhash = 0x0000000000000000           bpageno = (1:145)
bdbid = 6                            breferences = 0                      bUse1 = 31047
bstat = 0x6c00009                    blog = 0x21432159                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x000000009018C000

m_pageId = (1:145)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x200
m_objId (AllocUnitId.idObj) = 28     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039762944                                
Metadata: PartitionId = 72057594038845440                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:150)
pminlen = 13                         m_slotCnt = 58                       m_freeCnt = 212
m_freeData = 7864                    m_reservedCnt = 0                    m_lsn = (34:159:16)
m_xactReserved = 0                   m_xdesId = (0:0)                     m_ghostRecCnt = 0
m_tornBits = -436072588              

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = NOT ALLOCATED          
PFS (1:1) = 0x60 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x60 Length 16

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 16

Memory Dump @0x000000002309A060

0000000000000000:   10000d00 01000000 00530384 9d040000 †.........S.„....

Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x70 Length 24

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 24                    
Memory Dump @0x000000002309A070

0000000000000000:   30000d00 02000000 01530384 9d040000 †0........S.„....
0000000000000010:   01001800 54455354 †††††††††††††††††††....TEST        

Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)        
Slot 2 Offset 0x88 Length 28

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 28                    
Memory Dump @0x000000002309A088

0000000000000000:   30000d00 03000000 00530384 9d040000 †0........S.„....
0000000000000010:   01001c00 54455354 54455354 ††††††††††....TESTTEST    
<snip>
.
.
.
</snip>
Slot 57 Offset 0x1dc0 Length 248

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 248                    
Memory Dump @0x000000002309BDC0

0000000000000000:   30000d00 3a000000 01530384 9d040000 †0...:....S.„....
0000000000000010:   0100f800 54455354 54455354 54455354 †..ø.TESTTESTTEST
0000000000000020:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000030:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000040:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000050:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000060:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000070:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000080:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000090:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000A0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000B0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000C0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000D0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000E0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000F0:   54455354 54455354 †††††††††††††††††††TESTTEST        

Slot 57 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 58                              

Slot 57 Column 2 Offset 0x14 Length 228 Length (physical) 228

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTE
STTESTTESTTESTTEST                  

Slot 57 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 57 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 57 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (3a0026382d41)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Next I want to delete the first row from the table to see what effect that has. I picked the first row because I already know what page it is on so I can quickly see the impact, if any, of deleting it. I chose to do this in a transaction to also see what effect a rollback might have. Here is the next bit of code in the test:

DBCC TRACEON(3604)
GO

BEGIN TRANSACTION

DELETE  dbo.TestTable
WHERE   id = 1
GO

DBCC PAGE(RecordSizeTest, 1, 145, 3)
GO

ROLLBACK

DBCC PAGE(RecordSizeTest, 1, 145, 3)
GO

Below are the latest DBCC PAGE results from before the rollback. Right away it is clear that the row in Slot 0 has been deleted because its Record Type is now GHOST_DATA_RECORD. It is also notable that even though the row has only been marked deleted, the m_freeCnt on the page has gone down to 198. Sticking with our theory, the reduction in free space should be caused by the addition of version information and, sure enough, the Record Attributes now include VERSIONING_INFO and a new Version Information section is visible with a Transaction Timestamp and a Version Pointer to a location in TempDB. We have now proven that we can make a page grow on demand by running a delete.
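If you want to see those row versions from the TempDB side while the delete transaction is still open, the version store DMV exposes them (a quick sketch; querying it requires VIEW SERVER STATE permission):

```sql
-- Row versions created by the open delete are visible in the
-- version store in TempDB until the transaction commits or rolls back.
SELECT  transaction_sequence_num,
        version_sequence_num,
        record_length_first_part_in_bytes
FROM    sys.dm_tran_version_store
WHERE   database_id = DB_ID('RecordSizeTest');
```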

PAGE: (1:145)


BUFFER:


BUF @0x0000000090FC6300

bpage = 0x000000009018C000           bhash = 0x0000000000000000           bpageno = (1:145)
bdbid = 6                            breferences = 1                      bUse1 = 31887
bstat = 0x6c0000b                    blog = 0x21432159                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x000000009018C000

m_pageId = (1:145)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x2000
m_objId (AllocUnitId.idObj) = 28     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039762944                                
Metadata: PartitionId = 72057594038845440                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:150)
pminlen = 13                         m_slotCnt = 58                       m_freeCnt = 198
m_freeData = 7956                    m_reservedCnt = 0                    m_lsn = (34:324:10)
m_xactReserved = 0                   m_xdesId = (0:973)                   m_ghostRecCnt = 1
m_tornBits = -436072588              

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = NOT ALLOCATED          
PFS (1:1) = 0x68 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x1ef6 Length 30

Record Type = GHOST_DATA_RECORD      Record Attributes =  NULL_BITMAP VERSIONING_INFO
Record Size = 30                    
Memory Dump @0x000000002309BEF6

0000000000000000:   5c000d00 01000000 00530384 9d040000 †........S.„....
0000000000000010:   b0010000 01000000 3dbd0500 0000††††††°.......=½....  

Version Information =
    Transaction Timestamp: 376125
    Version Pointer: (file 1 page 432 currentSlotId 0)


Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x1ede Length 24

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 24                    
Memory Dump @0x000000002309BEDE

0000000000000000:   30000d00 02000000 01530384 9d040000 †0........S.„....
0000000000000010:   01001800 54455354 †††††††††††††††††††....TEST        

Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)        
Slot 2 Offset 0x88 Length 28

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 28                    
Memory Dump @0x000000002309A088

0000000000000000:   30000d00 03000000 00530384 9d040000 †0........S.„....
0000000000000010:   01001c00 54455354 54455354 ††††††††††....TESTTEST    
<snip>
.
.
.
</snip>
Slot 57 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 58                              

Slot 57 Column 2 Offset 0x14 Length 228 Length (physical) 228

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTE
STTESTTESTTESTTEST                  

Slot 57 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 57 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 57 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (3a0026382d41)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Next are the DBCC PAGE results from after the rollback. The m_freeCnt has gone back to 212, the row in slot 0 no longer shows as deleted and the versioning information has been removed. The fact that the versioning information goes away as part of the rollback is interesting. It means that no matter how many times I try to do a delete that fills the database I will always start from the same point. It makes perfect sense in terms of ACID but until I saw it for myself I was not sure how it would work.

PAGE: (1:145)


BUFFER:


BUF @0x0000000090FC6300

bpage = 0x000000009018C000           bhash = 0x0000000000000000           bpageno = (1:145)
bdbid = 6                            breferences = 0                      bUse1 = 32941
bstat = 0x6c0000b                    blog = 0x21432159                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x000000009018C000

m_pageId = (1:145)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x6000
m_objId (AllocUnitId.idObj) = 28     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039762944                                
Metadata: PartitionId = 72057594038845440                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:150)
pminlen = 13                         m_slotCnt = 58                       m_freeCnt = 212
m_freeData = 7972                    m_reservedCnt = 0                    m_lsn = (34:324:13)
m_xactReserved = 0                   m_xdesId = (0:973)                   m_ghostRecCnt = 0
m_tornBits = -436072588              

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = NOT ALLOCATED          
PFS (1:1) = 0x60 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x1f14 Length 16

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 16

Memory Dump @0x000000002309BF14

0000000000000000:   10000d00 01000000 00530384 9d040000 †.........S.„....

Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x1ede Length 24

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 24                    
Memory Dump @0x000000002309BEDE

0000000000000000:   30000d00 02000000 01530384 9d040000 †0........S.„....
0000000000000010:   01001800 54455354 †††††††††††††††††††....TEST        

Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)
<snip>
.
.
.
</snip>
Slot 57 Offset 0x1dc0 Length 248

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 248                    
Memory Dump @0x000000002309BDC0

0000000000000000:   30000d00 3a000000 01530384 9d040000 †0...:....S.„....
0000000000000010:   0100f800 54455354 54455354 54455354 †..ø.TESTTESTTEST
0000000000000020:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000030:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000040:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000050:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000060:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000070:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000080:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000090:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000A0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000B0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000C0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000D0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000E0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000F0:   54455354 54455354 †††††††††††††††††††TESTTEST        

Slot 57 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 58                              

Slot 57 Column 2 Offset 0x14 Length 228 Length (physical) 228

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTE
STTESTTESTTESTTEST                  

Slot 57 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 57 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 57 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (3a0026382d41)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Now that I have shown that deleting a record in a database that uses Read Committed Snapshot Isolation can cause space usage to increase, I want to repeat the test after rebuilding the clustered index on the table with ONLINE=ON. Here is the next bit of code to run:

CREATE UNIQUE CLUSTERED INDEX IX_TestTable_id ON dbo.TestTable (id) WITH (ONLINE=ON, DROP_EXISTING=ON)
GO

Now that the index has been rebuilt, it is time to figure out where it ended up. Time for DBCC IND:

DBCC IND(RecordSizeTest, TestTable, 0)
GO

Based on the results of DBCC IND we are looking for Page 156. It is notable that DBCC IND returned 5 rows this time instead of 4. More pages generally means more data, so let's dig into it.
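On SQL Server 2005 or later, the page count can also be confirmed without DBCC IND. This is a sketch using sys.dm_db_index_physical_stats, run from the demo database used above:

```sql
-- Count the pages at each level of the clustered index (index_id = 1)
-- without DBCC IND; DETAILED mode walks every page.
SELECT  index_level, page_count, record_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID('RecordSizeTest'), OBJECT_ID('dbo.TestTable'),
            1, NULL, 'DETAILED')
```

The leaf level (index_level = 0) should report the same four data pages that DBCC IND listed alongside the IAM page.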

PageFID PagePID     IAMFID IAMPID      ObjectID    IndexID     PartitionNumber PartitionID          iam_chain_type       PageType IndexLevel NextPageFID NextPagePID PrevPageFID PrevPagePID
------- ----------- ------ ----------- ----------- ----------- --------------- -------------------- -------------------- -------- ---------- ----------- ----------- ----------- -----------
1       157         NULL   NULL        2105058535  1           1               72057594038910976    In-row data          10       NULL       0           0           0           0
1       156         1      157         2105058535  1           1               72057594038910976    In-row data          1        0          1           160         0           0
1       160         1      157         2105058535  1           1               72057594038910976    In-row data          1        0          1           161         1           156
1       161         1      157         2105058535  1           1               72057594038910976    In-row data          1        0          1           162         1           160
1       162         1      157         2105058535  1           1               72057594038910976    In-row data          1        0          0           0           1           161

(5 row(s) affected)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Here is the syntax of the next DBCC PAGE command to run:

DBCC TRACEON(3604)
GO

DBCC PAGE(RecordSizeTest, 1, 156, 3)
GO

The results below look quite different. First off, m_freeCnt is 680 instead of 212. Adding version information should not increase free space, so there must be fewer records here, and m_slotCnt proves it. This page holds 53 slots, or rows, while the earlier page held 58. That accounts for the extra row in the DBCC IND output: the versioning information added by the rebuild takes up space, so the same data now needs an extra page. Looking at the record in slot 0, it now looks like it did before the rollback of the delete. The Record Attributes include VERSIONING_INFO and there is a Version Information section just below that. The Version Information includes a Transaction Timestamp from when the index was rebuilt but no Version Pointer, because the row has not been changed since.
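The record sizes in the dumps line up exactly if you assume the standard 14-byte versioning tag (a 6-byte transaction timestamp plus an 8-byte version pointer) on each row. A quick sanity check against the two dumps:

```sql
-- Row overhead on this table is 20 bytes: 4-byte record header, 9 bytes
-- of fixed data (int + bit + 4-byte date), 2-byte column count, 1-byte
-- null bitmap, 2-byte variable column count and one 2-byte variable offset.
SELECT  20 + 228      AS before_rebuild,  -- slot 57 earlier: Record Size = 248
        20 + 208 + 14 AS after_rebuild    -- slot 52 here: Record Size = 242, versioning tag added
```

Fourteen extra bytes on every row is why the same 58 rows no longer fit on one page.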

PAGE: (1:156)


BUFFER:


BUF @0x0000000090FC6080

bpage = 0x0000000090182000           bhash = 0x0000000000000000           bpageno = (1:156)
bdbid = 6                            breferences = 0                      bUse1 = 34110
bstat = 0x6c0000b                    blog = 0x432159bb                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x0000000090182000

m_pageId = (1:156)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x2000
m_objId (AllocUnitId.idObj) = 29     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039828480                                
Metadata: PartitionId = 72057594038910976                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:160)
pminlen = 13                         m_slotCnt = 53                       m_freeCnt = 680
m_freeData = 7406                    m_reservedCnt = 0                    m_lsn = (34:385:19)
m_xactReserved = 0                   m_xdesId = (0:0)                     m_ghostRecCnt = 0
m_tornBits = 0                      

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = ALLOCATED              
PFS (1:1) = 0x60 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x60 Length 30

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VERSIONING_INFO
Record Size = 30                    
Memory Dump @0x0000000018A4A060

0000000000000000:   50000d00 01000000 d0530384 9d040000 †P.......ÐS.„....
0000000000000010:   00000000 00000000 47be0500 0000††††††........G¾....  

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x7e Length 38

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 38                    
Memory Dump @0x0000000018A4A07E

0000000000000000:   70000d00 02000000 d1530384 9d040000 †p.......ÑS.„....
0000000000000010:   01001800 54455354 00000000 00000000 †....TEST........
0000000000000020:   47be0500 0000††††††††††††††††††††††††G¾....          

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)  
<snip>
.
.
.
</snip>
Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 52 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 53                              

Slot 52 Column 2 Offset 0x14 Length 208 Length (physical) 208

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTEST

Slot 52 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 52 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 52 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (350070284e19)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Now that the versioning information has been added, it is time to re-run the delete test to see what effect a delete and subsequent rollback has. Here is the code for this test:

DBCC TRACEON(3604)
GO

BEGIN TRANSACTION

DELETE  dbo.TestTable
WHERE   id = 1
GO

DBCC PAGE(RecordSizeTest, 1, 156, 3)
GO

ROLLBACK

DBCC PAGE(RecordSizeTest, 1, 156, 3)
GO

The first set of DBCC PAGE results shows exactly what I expected. The record in slot 0 is marked as a GHOST_DATA_RECORD and the version pointer is now populated with a pointer to a location in tempdb. Note: you can run DBCC PAGE to look at that record in tempdb while the transaction is still open. That is beyond the scope of this post, so I will not cover it here, but it is definitely interesting to look at. What has not changed is m_freeCnt, which is still 680: there was no change in record size or space usage.
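While the delete transaction is still open, the versioned row that the pointer references can also be seen through the version store DMV instead of DBCC PAGE. A sketch, for SQL Server 2005 and later:

```sql
-- The version store lives in tempdb; while the transaction is open,
-- the pre-delete image of the row should show up here.
SELECT  transaction_sequence_num, version_sequence_num,
        status, record_length_first_part_in_bytes
FROM    sys.dm_tran_version_store
WHERE   database_id = DB_ID('RecordSizeTest')
```

The file, page, and slot in the Version Pointer from the dump should correspond to one of the rows this query returns.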

PAGE: (1:156)


BUFFER:


BUF @0x0000000090FC6080

bpage = 0x0000000090182000           bhash = 0x0000000000000000           bpageno = (1:156)
bdbid = 6                            breferences = 1                      bUse1 = 35111
bstat = 0x6c0000b                    blog = 0x432159bb                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x0000000090182000

m_pageId = (1:156)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x2000
m_objId (AllocUnitId.idObj) = 29     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039828480                                
Metadata: PartitionId = 72057594038910976                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:160)
pminlen = 13                         m_slotCnt = 53                       m_freeCnt = 680
m_freeData = 7474                    m_reservedCnt = 0                    m_lsn = (34:435:35)
m_xactReserved = 0                   m_xdesId = (0:992)                   m_ghostRecCnt = 1
m_tornBits = 0                      

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = ALLOCATED              
PFS (1:1) = 0x68 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x1d14 Length 30

Record Type = GHOST_DATA_RECORD      Record Attributes =  NULL_BITMAP VERSIONING_INFO
Record Size = 30                    
Memory Dump @0x0000000018A4BD14

0000000000000000:   5c000d00 01000000 d0530384 9d040000 †.......ÐS.„....
0000000000000010:   c8010000 01000100 a0bf0500 0000††††††È....... ¿....  

Version Information =
    Transaction Timestamp: 376736
    Version Pointer: (file 1 page 456 currentSlotId 1)


Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x1cee Length 38

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 38                    
Memory Dump @0x0000000018A4BCEE

0000000000000000:   70000d00 02000000 d1530384 9d040000 †p.......ÑS.„....
0000000000000010:   01001800 54455354 00000000 00000000 †....TEST........
0000000000000020:   47be0500 0000††††††††††††††††††††††††G¾....          

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)
<snip>
.
.
.
</snip>
KeyHashValue = (3400154ff2a1)        
Slot 52 Offset 0x1bfc Length 242

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 242                    
Memory Dump @0x0000000018A4BBFC

0000000000000000:   70000d00 35000000 d0530384 9d040000 †p...5...ÐS.„....
0000000000000010:   0100e400 54455354 54455354 54455354 †..ä.TESTTESTTEST
0000000000000020:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000030:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000040:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000050:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000060:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000070:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000080:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000090:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000A0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000B0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000C0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000D0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000E0:   54455354 00000000 00000000 47be0500 †TEST........G¾..
00000000000000F0:   0000†††††††††††††††††††††††††††††††††..              

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 52 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 53                              

Slot 52 Column 2 Offset 0x14 Length 208 Length (physical) 208

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTEST

Slot 52 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 52 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 52 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (350070284e19)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

The second DBCC PAGE result shows that the record in slot 0 is no longer a GHOST_DATA_RECORD and that the version pointer has reverted to Null. m_freeCnt is still 680.

PAGE: (1:156)


BUFFER:


BUF @0x0000000090FC6080

bpage = 0x0000000090182000           bhash = 0x0000000000000000           bpageno = (1:156)
bdbid = 6                            breferences = 3                      bUse1 = 35512
bstat = 0x6c0000b                    blog = 0x432159bb                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x0000000090182000

m_pageId = (1:156)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x6000
m_objId (AllocUnitId.idObj) = 29     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039828480                                
Metadata: PartitionId = 72057594038910976                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:160)
pminlen = 13                         m_slotCnt = 53                       m_freeCnt = 680
m_freeData = 7504                    m_reservedCnt = 0                    m_lsn = (34:435:38)
m_xactReserved = 0                   m_xdesId = (0:992)                   m_ghostRecCnt = 0
m_tornBits = 0                      

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = ALLOCATED              
PFS (1:1) = 0x60 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x1d32 Length 30

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VERSIONING_INFO
Record Size = 30                    
Memory Dump @0x0000000018A4BD32

0000000000000000:   50000d00 01000000 d0530384 9d040000 †P.......ÐS.„....
0000000000000010:   00000000 00000000 47be0500 0000††††††........G¾....  

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x1cee Length 38

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 38                    
Memory Dump @0x0000000018A4BCEE

0000000000000000:   70000d00 02000000 d1530384 9d040000 †p.......ÑS.„....
0000000000000010:   01001800 54455354 00000000 00000000 †....TEST........
0000000000000020:   47be0500 0000††††††††††††††††††††††††G¾....          

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)        
Slot 2 Offset 0xa4 Length 42

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 42                    
Memory Dump @0x0000000018A4A0A4

0000000000000000:   70000d00 03000000 d0530384 9d040000 †p.......ÐS.„....
0000000000000010:   01001c00 54455354 54455354 00000000 †....TESTTEST....
0000000000000020:   00000000 47be0500 0000†††††††††††††††....G¾....      

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 2 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 3                              

Slot 2 Column 2 Offset 0x14 Length 8 Length (physical) 8

varchar_value = TESTTEST            

Slot 2 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 2 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 2 Offset 0x0 Length 0 Length (physical) 0
<snip>
.
.
.
</snip>
KeyHashValue = (3400154ff2a1)        
Slot 52 Offset 0x1bfc Length 242

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 242                    
Memory Dump @0x0000000018A4BBFC

0000000000000000:   70000d00 35000000 d0530384 9d040000 †p...5...ÐS.„....
0000000000000010:   0100e400 54455354 54455354 54455354 †..ä.TESTTESTTEST
0000000000000020:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000030:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000040:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000050:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000060:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000070:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000080:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
0000000000000090:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000A0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000B0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000C0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000D0:   54455354 54455354 54455354 54455354 †TESTTESTTESTTEST
00000000000000E0:   54455354 00000000 00000000 47be0500 †TEST........G¾..
00000000000000F0:   0000†††††††††††††††††††††††††††††††††..              

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 52 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 53                              

Slot 52 Column 2 Offset 0x14 Length 208 Length (physical) 208

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTEST

Slot 52 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 52 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 52 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (350070284e19)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I am feeling pretty good about my results so far, but I want to do a final test to rule out any effect from the explicit transactions used in my testing. Here is the final test:

DBCC TRACEON(3604)
GO

DELETE  dbo.TestTable
WHERE   id = 1
GO

DBCC PAGE(RecordSizeTest, 1, 156, 3)
GO

The DBCC PAGE results from this test confirm that transactions have not impacted the test cases above. The results below do depend on how the test is run, though. I ran the delete and DBCC PAGE as a single batch and could still see the ghost record in slot 0, with m_freeCnt still at 680. Had I run the statements separately, the ghost cleanup process would most likely have already removed the record in slot 0, the row that had been in slot 1 would now show in slot 0, and m_freeCnt would have been updated to 712 to reflect the removal of the record.
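The 712 figure is not arbitrary: once ghost cleanup removes the row, the page gets back the record itself plus its entry in the slot array.

```sql
SELECT 680      -- m_freeCnt with the ghost record still on the page
     + 30       -- Record Size of the ghosted row in slot 0
     + 2        -- its 2-byte slot array entry
       AS m_freeCnt_after_ghost_cleanup   -- = 712
```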

PAGE: (1:156)


BUFFER:


BUF @0x0000000090FC6080

bpage = 0x0000000090182000           bhash = 0x0000000000000000           bpageno = (1:156)
bdbid = 6                            breferences = 1                      bUse1 = 35886
bstat = 0x6c0000b                    blog = 0x432159bb                    bnext = 0x0000000000000000

PAGE HEADER:


Page @0x0000000090182000

m_pageId = (1:156)                   m_headerVersion = 1                  m_type = 1
m_typeFlagBits = 0x4                 m_level = 0                          m_flagBits = 0x2000
m_objId (AllocUnitId.idObj) = 29     m_indexId (AllocUnitId.idInd) = 256  
Metadata: AllocUnitId = 72057594039828480                                
Metadata: PartitionId = 72057594038910976                                 Metadata: IndexId = 1
Metadata: ObjectId = 2105058535      m_prevPage = (0:0)                   m_nextPage = (1:160)
pminlen = 13                         m_slotCnt = 53                       m_freeCnt = 680
m_freeData = 7504                    m_reservedCnt = 0                    m_lsn = (34:435:43)
m_xactReserved = 0                   m_xdesId = (0:993)                   m_ghostRecCnt = 1
m_tornBits = 0                      

Allocation Status

GAM (1:2) = ALLOCATED                SGAM (1:3) = ALLOCATED              
PFS (1:1) = 0x68 MIXED_EXT ALLOCATED   0_PCT_FULL                         DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED            

Slot 0 Offset 0x1d32 Length 30

Record Type = GHOST_DATA_RECORD      Record Attributes =  NULL_BITMAP VERSIONING_INFO
Record Size = 30                    
Memory Dump @0x0000000018A4BD32

0000000000000000:   5c000d00 01000000 d0530384 9d040000 †.......ÐS.„....
0000000000000010:   d0010000 01000000 81c00500 0000††††††Ð........À....  

Version Information =
    Transaction Timestamp: 376961
    Version Pointer: (file 1 page 464 currentSlotId 0)


Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 1                              

Slot 0 Column 2 Offset 0x0 Length 0 Length (physical) 0

varchar_value =                      

Slot 0 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 0 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 0 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (010086470766)        
Slot 1 Offset 0x1cee Length 38

Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS VERSIONING_INFO
Record Size = 38                    
Memory Dump @0x0000000018A4BCEE

0000000000000000:   70000d00 02000000 d1530384 9d040000 †p.......ÑS.„....
0000000000000010:   01001800 54455354 00000000 00000000 †....TEST........
0000000000000020:   47be0500 0000††††††††††††††††††††††††G¾....          

Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 1 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 2                              

Slot 1 Column 2 Offset 0x14 Length 4 Length (physical) 4

varchar_value = TEST                

Slot 1 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 1                        

Slot 1 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 1 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (020068e8b274)  
<snip>
.
.
.
</snip>
Version Information =
    Transaction Timestamp: 376391
    Version Pointer: Null


Slot 52 Column 1 Offset 0x4 Length 4 Length (physical) 4

id = 53                              

Slot 52 Column 2 Offset 0x14 Length 208 Length (physical) 208

varchar_value = TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTT
ESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTEST

Slot 52 Column 3 Offset 0x8 Length 1 (Bit position 0)

bit_value = 0                        

Slot 52 Column 4 Offset 0x9 Length 4 Length (physical) 4

create_date = 2010-05-28 14:11:00.000                                    

Slot 52 Offset 0x0 Length 0 Length (physical) 0

KeyHashValue = (350070284e19)        


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Conclusion

I started out with the theory that rebuilding indexes with ONLINE=ON will prevent unexpected space usage from large update or delete operations in a database that has Read Committed Snapshot Isolation enabled. My tests show that adding the versioning information at reindex time is a viable alternative to keeping extra free space in the database, and certainly a better option than dealing with production issues caused by a full database. Like anything else, this solution is not perfect, but if I know a big delete is coming I will make sure the indexes on the affected tables have been rebuilt with ONLINE=ON or that I have left extra space via the fill factor. Most importantly, I now know to expect growth from large update or delete operations and can plan for it, both in how the database is set up and in the number of records touched by each pass over the data.

Thank you for sticking with me to the end of this long post. I hope it was as informative to read as it was to write.

What is a Good Way to Run CheckDB on a VLDB?

Introduction

Today’s script is one that I wrote based on the logic outlined in this post by Paul Randal (Blog|Twitter). This script is written for SQL 2000 but, as Paul notes, the logic will work on SQL 2005.

The Script

This stored procedure stays pretty true to the logic outlined in Paul's post, so I will just cover the differences here. The first thing to notice is the parameters: the days of the week on which each portion of the check should run, the maximum run time in minutes, and whether or not to print debug messages. The procedure parses those input strings and runs DBCC CHECKALLOC and DBCC CHECKCATALOG if they are scheduled for today.
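The day-of-week test is just a PATINDEX against the comma-separated parameter; the same pattern used inside the procedure can be tried on its own (this assumes the default DATEFIRST setting, where Sunday is day 1):

```sql
-- Build a search string from today's day number and look for it
-- in the comma-separated list of scheduled days.
DECLARE @search char(3), @days_to_run varchar(15)
SELECT  @search = '%' + CAST(DATEPART(dw, GETDATE()) AS varchar) + '%',
        @days_to_run = '1,4'
IF PATINDEX(@search, @days_to_run) > 0
    PRINT 'This check is scheduled for today'
```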

If tables should be checked today, a little more work is necessary. I use a utility database called DBADB to hold work tables for my custom scripts. The first step of a table check is to see whether a work table already exists in DBADB. If it does not, one is created and loaded with a list of all the tables in the database. The process then loops through that list, checking that the maximum run time has not been exceeded before running DBCC CHECKTABLE on each table. This continues until the list has been processed or time runs out. If time runs out, the next run of the table check picks up where this one left off, so every table is eventually checked before the list starts over.
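Once the procedure is in place, a nightly job step might call it like this. This is a sketch using the default parameter values from the script; adjust the schedule and time limit for your environment:

```sql
-- Run CHECKALLOC on days 1 and 4, CHECKCATALOG on day 1, CHECKTABLE
-- every day, and stop handing out new tables after 6 hours.
EXEC dbo.sp_dba_checkdb_vldb
     @days_to_run_checkalloc   = '1,4',
     @days_to_run_checkcatalog = '1',
     @days_to_run_checktable   = '1,2,3,4,5,6,7',
     @max_minutes_to_run       = 360,
     @debug_flag               = 1
```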

Here is the script:

CREATE PROCEDURE [dbo].[sp_dba_checkdb_vldb] @days_to_run_checkalloc varchar(15) = '1,4', @days_to_run_checkcatalog varchar(15) = '1', @days_to_run_checktable varchar(15) = '1,2,3,4,5,6,7', @max_minutes_to_run int = 360, @debug_flag bit = 0
AS

 BEGIN

    DECLARE @date_part_search_string    char(3),
            @start_time                 datetime,
            @sql_text                   nvarchar(4000),
            @current_object_id          int

    SELECT  @date_part_search_string = '%' + CAST(DATEPART(dw, GETDATE()) AS VARCHAR) + '%',
            @start_time = GETDATE()

    IF PATINDEX(@date_part_search_string, @days_to_run_checkalloc) > 0
     BEGIN
        IF @debug_flag = 1
            PRINT 'DEBUG: Running DBCC CHECKALLOC'
        DBCC CHECKALLOC
     END

    IF PATINDEX(@date_part_search_string, @days_to_run_checkcatalog) > 0
     BEGIN
        IF @debug_flag = 1
            PRINT 'DEBUG: Running DBCC CHECKCATALOG'
        DBCC CHECKCATALOG
     END

    IF PATINDEX(@date_part_search_string, @days_to_run_checktable) > 0
     BEGIN
        DECLARE @control_table  varchar(500)
        SELECT  @control_table = DB_NAME() + '_dbcc_checktable_worklist'

        IF NOT EXISTS(SELECT * FROM [DBADB].[dbo].[sysobjects] WHERE name = @control_table)
         BEGIN
            SELECT  @sql_text = 'SELECT DISTINCT
                                        i.id,
                                        CAST(NULL AS datetime) AS run_date_time
                                INTO    [DBADB].[dbo].' + QUOTENAME(@control_table) + '
                                FROM    sysindexes i
                                        INNER JOIN sysobjects o
                                            ON i.id = o.id
                                WHERE   o.type != ''TF'''

            IF @debug_flag = 1
                PRINT 'DEBUG: Running sql command: [' + @sql_text + ']'
            EXEC sp_executesql  @sql_text

            SELECT  @sql_text = 'CREATE CLUSTERED INDEX IX_' + @control_table + '_id_run_date_time ON [DBADB].[dbo].' + QUOTENAME(@control_table) + ' (id, run_date_time)'
            IF @debug_flag = 1
                PRINT 'DEBUG: Running sql command: [' + @sql_text + ']'
            EXEC sp_executesql  @sql_text
         END

        SELECT  @sql_text = 'SELECT  TOP 1 @current_object_id = c.id
                             FROM    [DBADB].[dbo].[' + @control_table + '] c
                                     INNER JOIN sysobjects o
                                         ON c.id = o.id
                             WHERE   c.run_date_time IS NULL
                                         AND o.type != ''TF'''

        IF @debug_flag = 1
            PRINT 'DEBUG: Running sql command: [' + @sql_text + ']'

        EXEC sp_executesql @sql_text, N'@current_object_id int OUTPUT', @current_object_id = @current_object_id OUTPUT

        IF @debug_flag = 1
            PRINT 'DEBUG: @current_object_id = ' + ISNULL(CAST(@current_object_id AS varchar), 'NULL')

        WHILE   @current_object_id IS NOT NULL AND DATEADD(mi, @max_minutes_to_run, @start_time) > GETDATE()
         BEGIN
            IF @debug_flag = 1
                PRINT 'DEBUG: Running DBCC CHECKTABLE(' + CAST(@current_object_id AS varchar) + ')'

            DBCC CHECKTABLE(@current_object_id)

            SELECT  @sql_text = 'UPDATE  [DBADB].[dbo].[' + @control_table + ']
                                 SET     run_date_time = GETDATE()
                                 WHERE   id = @current_object_id'

            IF @debug_flag = 1
                PRINT 'DEBUG: Running sql command: [' + @sql_text + ']'

            EXEC sp_executesql @sql_text, N'@current_object_id int', @current_object_id = @current_object_id

            SELECT @current_object_id = NULL

            SELECT  @sql_text = 'SELECT  TOP 1 @current_object_id = c.id
                                 FROM    [DBADB].[dbo].[' + @control_table + '] c
                                         INNER JOIN sysobjects o
                                             ON c.id = o.id
                                 WHERE   c.run_date_time IS NULL
                                             AND o.type != ''TF'''

            IF @debug_flag = 1
                PRINT 'DEBUG: Running sql command: [' + @sql_text + ']'

            EXEC sp_executesql @sql_text, N'@current_object_id int OUTPUT', @current_object_id = @current_object_id OUTPUT

            IF @debug_flag = 1
                PRINT 'DEBUG: @current_object_id = ' + ISNULL(CAST(@current_object_id AS varchar), 'NULL')
         END

        IF @current_object_id IS NULL
         BEGIN
            PRINT 'Ran out of work to do...cleaning up and shutting down.'
            IF EXISTS(SELECT * FROM [DBADB].[dbo].[sysobjects] WHERE name = @control_table)
             BEGIN
                SELECT  @sql_text = 'DROP TABLE [DBADB].[dbo].' + QUOTENAME(@control_table)
                IF @debug_flag = 1
                    PRINT 'DEBUG: Running sql command: [' + @sql_text + ']'
                EXEC sp_executesql  @sql_text
             END
         END
        ELSE
            PRINT 'Ran out of time...shutting down.'
     END
 END
GO
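For reference, a hypothetical call might look like this (the parameter values are examples, not recommendations):

```sql
-- Check tables every day, run allocation checks on Sunday and Wednesday,
-- run catalog checks on Sunday, cap each run at four hours, print debug output.
EXEC dbo.sp_dba_checkdb_vldb
    @days_to_run_checkalloc   = '1,4',
    @days_to_run_checkcatalog = '1',
    @days_to_run_checktable   = '1,2,3,4,5,6,7',
    @max_minutes_to_run       = 240,
    @debug_flag               = 1
```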

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

How Do I Move SQL Database Files Around?

Introduction

Today’s script is one that I had not planned on blogging so soon but since Paul Randal just talked about moving SQL Server files around for TechNet Magazine, it seemed like a good time to break this one out.

The Script

This script is a little different in that it is a script that creates a script, a “turducken” script if you will. The idea is to run this script with results to text, paste the output into a new query window, review the resulting script carefully, and maybe even run it.

The script starts out by getting the default data and log file locations from SQL Server by checking the registry, using a method learned by watching Profiler while checking the locations with Management Studio. (I often comment out these lines to target specific locations instead.) The script then begins building the string to output by creating the command to turn xp_cmdshell on. A lot of people, including me, have a policy against xp_cmdshell being left on on their servers, but in cases like this, where it is turned on to be used and turned right back off, I feel I can get away with it. The next step is to create ALTER DATABASE statements to take the databases offline. Next, the ALTER DATABASE ... MODIFY FILE statements and DOS file copy commands are created. The command to set each database back online is then added and finally, xp_cmdshell is turned back off.

Updated 07/14/2010 to replace the move commands with copy to make sure the files are still good before the originals are deleted. This does add manual cleanup, but the trade-off is not having to find out the hard way how good your last backup is. Thanks to Paul Randal (Blog|Twitter) and Buck Woody (Blog|Twitter) for pointing this out.

DECLARE @file_path  nvarchar(520),
        @log_path   nvarchar(520)

EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', @file_path OUTPUT
EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', @log_path OUTPUT

SELECT  'EXEC sp_configure ''show advanced options'', ''1''
GO
RECONFIGURE
GO
EXEC sp_configure ''xp_cmdshell'', ''1''
GO
RECONFIGURE
GO'

SELECT 'ALTER DATABASE [' + DB_NAME(mf.database_id) + '] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE [' + DB_NAME(mf.database_id) + '] MODIFY FILE (NAME = [' + mf.name + '], FILENAME = ''' + @file_path + '\' + mf.name + '.mdf'')
GO
ALTER DATABASE [' + DB_NAME(mf.database_id) + '] MODIFY FILE (NAME = [' + mf2.name + '], FILENAME = ''' + @log_path + '\' + mf2.name + '.ldf'')
GO
EXEC xp_cmdshell ''copy /Y "' + mf.physical_name + '" "' + @file_path + '\' + mf.name + '.mdf"''
GO
EXEC xp_cmdshell ''copy /Y "' + mf2.physical_name + '" "' + @log_path + '\' + mf2.name + '.ldf"''
GO
ALTER DATABASE [' + DB_NAME(mf.database_id) + '] SET ONLINE
GO

'
FROM    sys.master_files mf
        INNER JOIN sys.master_files mf2
            ON mf.database_id = mf2.database_id
WHERE   DB_NAME(mf.database_id) NOT IN ('master', 'model', 'msdb', 'tempdb')
        AND mf.type_desc = 'ROWS'
        AND mf.file_id = 1
        AND mf2.type_desc = 'LOG'
        AND (mf.physical_name != @file_path + '\' + mf.name + '.mdf' OR mf2.physical_name != @log_path + '\' + mf2.name + '.ldf')
ORDER BY mf.name

SELECT  'EXEC sp_configure ''xp_cmdshell'', ''0''
GO
RECONFIGURE
GO
EXEC sp_configure ''show advanced options'', ''0''
GO
RECONFIGURE
GO'
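To give a feel for what the script produces, here is roughly what the generated output could look like for a hypothetical database named [SalesDB]; the database name, logical file names, and paths are all invented for illustration:

```sql
-- Example of generated output (hypothetical names and paths)
ALTER DATABASE [SalesDB] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE [SalesDB] MODIFY FILE (NAME = [SalesDB], FILENAME = 'D:\Data\SalesDB.mdf')
GO
ALTER DATABASE [SalesDB] MODIFY FILE (NAME = [SalesDB_log], FILENAME = 'E:\Logs\SalesDB_log.ldf')
GO
EXEC xp_cmdshell 'copy /Y "C:\OldData\SalesDB.mdf" "D:\Data\SalesDB.mdf"'
GO
EXEC xp_cmdshell 'copy /Y "C:\OldData\SalesDB_log.ldf" "E:\Logs\SalesDB_log.ldf"'
GO
ALTER DATABASE [SalesDB] SET ONLINE
GO
```

Once the database is confirmed online and healthy, the old files under the original location can be deleted by hand.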

Would this Work for TempDB?

Yes and no. The resulting statements can be used to alter the location of TempDB’s files, but SQL Server must be stopped and started before the move takes effect. Manual cleanup is also required: the old TempDB files have to be deleted after the restart.
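The TempDB variant boils down to a couple of MODIFY FILE commands followed by a service restart. A minimal sketch, assuming the default logical file names (tempdev and templog) and an invented target path:

```sql
-- Point TempDB's files at a new location. The commands succeed immediately,
-- but the files are only created there when SQL Server next starts up.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf')
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf')
GO
-- Restart the SQL Server service, verify TempDB came up in the new location,
-- then delete the old TempDB files manually.
```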

Would this Work for System Databases other than TempDB?

No. There is a lot involved in moving a system database. Detailed instructions can be found here: http://msdn.microsoft.com/en-us/library/ms345408.aspx

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Where Do I Start with PowerShell?

Introduction

This morning I set out to get some information about getting started in PowerShell for a coworker. Rather than spend a bunch of time searching for different sites I threw the question out to the SQL Community on Twitter via #SqlHelp. The response was so overwhelming that I decided I at least owed it to the SQL Community to get it all written down.

Thank you to everyone who supplied a link, you are what keeps the SQL Community great!

So here is everything that came in broken out by category:

Blogs

Aaron Nelson: http://sqlvariant.com/wordpress/

Aaron Nelson – PowerShell Links: http://sqlvariant.com/wordpress/index.php/powershell-links/

Aaron Nelson – Getting Started with PowerShell: http://sqlvariant.com/wordpress/index.php/2010/02/sqlserversqldatabasestables-dir/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Sqlvariations+%28SQLvariations%3A+SQL+Server%2C+a+little+PowerShell%2C+maybe+some+Hyper-V%29

Aaron Nelson – Get More Done With SQLPSX: http://sqlvariant.com/wordpress/index.php/2010/02/get-more-done-with-sqlpsx/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Sqlvariations+%28SQLvariations%3A+SQL+Server%2C+a+little+PowerShell%2C+maybe+some+Hyper-V%29

Buck Woody – Carpe Datum: http://blogs.msdn.com/buckwoody/archive/tags/PowerShell/default.aspx

Buck Woody – InformIt: http://www.informit.com/guides/content.aspx?g=sqlserver&seqNum=253&rll=1

Buck Woody – Intro to PowerShell Series: http://blogs.technet.com/heyscriptingguy/archive/2009/05/27/how-does-windows-powershell-make-it-easier-to-work-with-sql-server-2008.aspx

Hey, Scripting Guy! Blog: http://blogs.technet.com/heyscriptingguy/archive/tags/getting+started/default.aspx

Laerte Junior – Great Practical Examples on SimpleTalk.com: http://www.simple-talk.com/author/laerte-junior/

Laerte Junior – $hell Your Experience (Portuguese): http://laertejuniordba.spaces.live.com/

Windows PowerShell Blog: http://blogs.msdn.com/powershell/

Community

Powershellcommunity.org: http://www.powershellcommunity.org/

PowerShell.com: http://powershell.com/cs/

eBooks

Master-PowerShell with Dr.Tobias Weltner: http://powershell.com/cs/blogs/ebook/

TechNet

Task-Based Guide to PowerShell: http://technet.microsoft.com/en-us/library/ee332526.aspx

PowerShell Script Center: http://technet.microsoft.com/en-us/scriptcenter/dd742419.aspx

Script Center: http://technet.microsoft.com/en-us/scriptcenter/default.aspx

Windows PowerShell: Survival Guide: http://social.technet.microsoft.com/wiki/contents/articles/windows-powershell-survival-guide.aspx

Applications / GUIs

PowerGUI: http://powergui.org/index.jspa

Videos

Midnight DBA PowerShell Videos: http://midnightdba.itbookworm.com/Admin.aspx

White Papers

Understanding and Using PowerShell Support in SQL Server 2008: http://msdn.microsoft.com/en-us/library/dd938892.aspx

Conclusion

So there you have it, a pretty substantial list of resources for getting started in PowerShell. These all look like great resources and I can see I have a ton of reading to do.