How Can I Tell if SQL Agent is Running via SQL?

Introduction

Today’s post is a continuation of my ongoing effort to document all of the scripts I use to manage my environment. To date it has been a great exercise, both in getting things written down and in getting feedback from others on better ways I could be doing things. To that end, here is a stored procedure that I use to tell if SQL Agent is running before trying to start a job programmatically.

The Script

This script opens with some trickery that I stole from Management Studio using Profiler. This is my new favorite way to determine if a server is a named instance and handle it accordingly. I will be going back to retrofit all of my scripts to use this method.

The next step is to declare a temporary table and pull in the results of xp_servicecontrol to be able to use them later. The xp_servicecontrol extended procedure is undocumented and unsupported but still seems like the best option in this case. Alternatives I have seen include using xp_cmdshell (NO!) or building a CLR stored procedure to call WMI and get this information.

Finally, the service status is selected into the output variable for programmatic use by a caller and the flag is checked to see if the service status should also be returned as a result set for debug reasons.

CREATE PROCEDURE [dbo].[sp_dba_GetSqlAgentStatus] @display_results bit = 0, @service_state varchar(100) = null OUTPUT

AS

 BEGIN

    DECLARE @agent_string   nvarchar(256)

    -- Named instances run the service as SQLAgent$<instance>; default instances as SQLServerAgent
    IF SERVERPROPERTY('INSTANCENAME') IS NOT NULL
        SELECT @agent_string = N'SQLAgent$' + CONVERT(sysname, SERVERPROPERTY('INSTANCENAME'))
    ELSE
        SELECT @agent_string = N'SQLServerAgent'

    CREATE TABLE #service_state
    (
        current_service_state   varchar(50)
    )
   
    INSERT INTO #service_state
        EXEC master.dbo.xp_servicecontrol N'QueryState', @agent_string

    SELECT  @service_state = current_service_state
    FROM    #service_state

    DROP TABLE #service_state

    IF @display_results = 1
        SELECT @service_state AS current_service_state

 END
GO
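
To show how a caller might use the output parameter, here is a minimal sketch with a hypothetical job name. Note that in my experience the status string returned by xp_servicecontrol includes a trailing period (e.g. ‘Running.’), so verify what your server returns before relying on an exact match:

DECLARE @state varchar(100)

EXEC dbo.sp_dba_GetSqlAgentStatus @service_state = @state OUTPUT

IF @state = 'Running.' -- status text as returned by xp_servicecontrol; verify on your server
    EXEC msdb.dbo.sp_start_job @job_name = N'MyNightlyBuild' -- hypothetical job name
ELSE
    RAISERROR('SQL Agent is not running; job not started.', 10, 1)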

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

What is a Good Way to Get the SQL Server Log Directory?

Introduction

Today’s post is going to be a quick one to get back in the saddle again. I have been on an involuntary hiatus from blogging due to production issues. Things are getting better so it seems like time to get back to blogging.

The script for today is one that I use in all of my job creation scripts, as well as in a script I use to go back and clean up job output file paths. It finds the Log directory of a SQL 2005 or SQL 2008 server so that job output files can be placed alongside the SQL Server logs.

The Script

This script is quite simple so I will not spend a ton of time here. It originally looked at the location of master, went up one level, and swapped “Log” in for “Data” in the directory name, which could fail if master had been moved after installing SQL. Thanks to Argenis Fernandez (Blog|Twitter) for pointing out the inefficient way I was doing it before; the script now just looks at the location of the SQL Server Error Log and uses the same directory. Warning: this uses undocumented functionality, but the best things in SQL tend to be undocumented.

CREATE PROCEDURE [dbo].[sp_dba_GetJobOutputFileDirectory] @log_directory varchar(400) OUTPUT, @showResults BIT = 1

AS

 BEGIN

    SELECT  @log_directory = REPLACE(CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(512)), 'ERRORLOG', '')

    IF @showResults = 1
        SELECT @log_directory

 END
GO
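
As a quick illustration, here is how I might use it to build a job output file path; the file name is hypothetical, and the sketch relies on the trailing backslash that the REPLACE above leaves in place:

DECLARE @log_dir varchar(400)

EXEC dbo.sp_dba_GetJobOutputFileDirectory @log_directory = @log_dir OUTPUT, @showResults = 0

SELECT @log_dir + 'MyJob_Output.txt' AS job_output_file -- hypothetical file name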

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

What is a Good Way to Quickly Disable SQL Agent Job Schedules?

Introduction

I recently blogged about a Stored Procedure to Sequentially Run SQL Agent Jobs and have been meaning to blog about this script as a follow-up, because it is very helpful for converting from individual SQL job schedules to sequential job schedules while still having a back-out path. The question of how to disable SQL jobs came up on #SqlHelp yesterday, so I figured it was time to cover this script.

Background

When I looked at switching my maintenance jobs over to running sequentially rather than fussing with individual job schedules, it seemed like a daunting task. I had to cover a lot of ground fast, so doing it manually via Management Studio was out, and I needed to be able to roll back quickly, so a flexible script was definitely a must.

I decided to write a script to disable the job schedules on all of my maintenance jobs. I figured this was the easiest way to cover the most ground and if things went bad I could just change the script to turn the enabled flag back on and re-run it.

The Script

This script has 2 major parts: disable the job schedules, then notify SQL Agent of the change. The script begins by declaring a table variable (NOTE: I started out with a regular #temp table here but got column definition errors that were corrected by changing to a table variable; I have not had time to research why this happened) to store the changed records, allowing SQL Agent to be notified of only the job schedules that have actually been changed.

I could have just notified on all jobs, but that invites the possibility of “fixing” the schedule on a job that had been broken by some other scripted update. Not wanting to take the chance of starting SQL Agent jobs that have not run in a while, I opted to carefully limit the scope of the notifications.

Once I have the results it is as simple as opening a cursor and calling sp_sqlagent_notify for each job schedule that has been updated.

USE msdb

DECLARE @updated_schedules table
(
    schedule_id int,
    job_id      uniqueidentifier,
    job_name    sysname
)

UPDATE  ss
SET     ss.[enabled] = 0
OUTPUT  INSERTED.schedule_id,
        sj.job_id,
        sj.name
INTO    @updated_schedules
FROM    msdb.dbo.sysjobs sj
        INNER JOIN msdb.dbo.sysjobschedules sjs
            ON sj.job_id = sjs.job_id
        INNER JOIN msdb.dbo.sysschedules ss
            ON sjs.schedule_id = ss.schedule_id
WHERE   sj.name LIKE '<pattern>'
           
DECLARE @schedule_id    int,
        @job_id         uniqueidentifier,
        @job_name       sysname

DECLARE csr CURSOR FAST_FORWARD READ_ONLY
FOR
    SELECT  schedule_id,
            job_id,
            job_name
    FROM    @updated_schedules

OPEN csr

FETCH NEXT FROM csr INTO @schedule_id, @job_id, @job_name

WHILE @@FETCH_STATUS <> -1
 BEGIN
   
    EXECUTE msdb.dbo.sp_sqlagent_notify @op_type     = N'S',
                                        @job_id      = @job_id,
                                        @schedule_id = @schedule_id,
                                        @action_type = N'U'
   
    PRINT 'Called sp_sqlagent_notify for ' + @job_name
   
    FETCH NEXT FROM csr INTO @schedule_id, @job_id, @job_name
   
 END

CLOSE csr
DEALLOCATE csr
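
For the back-out path mentioned earlier, the rollback is the same script with the enabled flag flipped back on. Here is a sketch of the only part that changes; the table variable, cursor, and sp_sqlagent_notify loop stay exactly the same:

UPDATE  ss
SET     ss.[enabled] = 1 -- flip back on to roll back
OUTPUT  INSERTED.schedule_id,
        sj.job_id,
        sj.name
INTO    @updated_schedules
FROM    msdb.dbo.sysjobs sj
        INNER JOIN msdb.dbo.sysjobschedules sjs
            ON sj.job_id = sjs.job_id
        INNER JOIN msdb.dbo.sysschedules ss
            ON sjs.schedule_id = ss.schedule_id
WHERE   sj.name LIKE '<pattern>' -- same pattern used when disabling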

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Whats a Good Rule for Max Degree of Parallelism?

Introduction

I am one of those people that believe that anything worth doing is worth having a script to do it. Setting the Max Degree of Parallelism is no exception. In this post I will go through the script I use as well as the metrics I watch to make sure the setting is correct.

Reasoning

Max Degree of Parallelism (MAXDOP) is a setting that many people do not get right, and for good reason: it is tricky. The easiest scenario is a server with NUMA architecture; for NUMA servers the starting point is the number of cores per NUMA node. For other servers the answer is often “It Depends”, followed by “It Changes Over Time”. The good news is that the setting is fairly easy to tune by watching wait stats.

The rule of thumb I use for gauging whether or not my settings are correct is to make sure that CXPACKET waits are slightly higher than SOS_SCHEDULER_YIELD waits. My goal is to get them as close as possible to each other without SOS_SCHEDULER_YIELD waits exceeding CXPACKET waits. Depending on your workload and the amount of tuning you are doing, you may need to re-evaluate the setting from time to time.
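
For reference, here is a quick way to eyeball those two waits. This is just a sketch: on a server that has been up a long time, the cumulative totals in sys.dm_os_wait_stats can be misleading, so you may want to snapshot the view before and after a representative window instead.

SELECT  wait_type,
        waiting_tasks_count,
        wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type IN (N'CXPACKET', N'SOS_SCHEDULER_YIELD')
ORDER BY wait_time_ms DESC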

The Script

This script is dirt simple, but do not be deceived: this is not a change to take lightly. It is a great starting point for a new server, but careful research should be done before running it on any existing production system. The basic logic is: enable ‘show advanced options’, then get the hyperthread_ratio from sys.dm_os_sys_info. This gives you the number of logical processors per socket, which is typically equal to the number of cores on a NUMA node. Keep in mind that hyperthreading is not the same as multiple cores; I am not sure I would use this script on a hyperthreaded server. Finally, the script sets ‘max degree of parallelism’ to the hyperthread ratio and turns ‘show advanced options’ back off.

EXEC sys.sp_configure N'show advanced options', '1'
RECONFIGURE WITH OVERRIDE

DECLARE @hyperthread_ratio  INT

SELECT  @hyperthread_ratio = hyperthread_ratio
FROM    sys.dm_os_sys_info

EXEC sys.sp_configure N'max degree of parallelism', @hyperthread_ratio
RECONFIGURE WITH OVERRIDE

EXEC sys.sp_configure N'show advanced options', '0'
RECONFIGURE WITH OVERRIDE
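
To confirm the change took effect, a quick check against sys.configurations does the trick:

SELECT  name,
        value_in_use
FROM    sys.configurations
WHERE   name = N'max degree of parallelism'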

Conclusion

As usual, I hope you find this script helpful. This script and the basic rules I have laid out work well for me in my environment. Test it carefully in yours. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

How Do I Find SSIS Packages that Exist in Multiple Folders?

Introduction

If you are using SQL storage for your SSIS packages and have multiple folders to make life easier, then there is no doubt you have run into this before. A package gets moved to production in what appears to be the right folder, but the requestor reports that the old package is still running. You move the package again and still no change. Eventually you realize that you have duplicate packages in different folders on your server and set about the painful task of opening each folder, hunting for other copies of the package. This is not so bad if you have 2-3 folders, but the more complex your environment, the more painful it becomes. Having run into this on more than one occasion myself, I decided to write a script to do the work for me.

The Script

This is another easy one so I will not spend too much time on the mechanics of it. Get all packages grouped by name having COUNT(*) > 1, then dig up the folder names and concatenate them into a nice string. I chose to build a folder name list because it makes it easy to create a report that can be emailed from the server. This script does not help if the duplicate packages are not named the same, but at that point the problem should be pretty obvious to the person trying to run them.

SELECT      p.name AS package_name,
            REPLACE(RTRIM((
                SELECT      pf2.foldername + ' '
                FROM        msdb.dbo.sysdtspackagefolders90 pf2
                            INNER JOIN msdb.dbo.sysdtspackages90 p2
                                ON pf2.folderid = p2.folderid
                WHERE       p2.name = p.name
                ORDER BY    pf2.foldername
                FOR XML PATH('')
            )), ' ', ', ') AS folder_names
FROM        msdb.dbo.sysdtspackages90 p
GROUP BY    p.name
HAVING      COUNT(*) > 1
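
Note that sysdtspackages90 and sysdtspackagefolders90 are the SQL 2005 table names. On SQL 2008 the equivalent tables are, as far as I know, msdb.dbo.sysssispackages and msdb.dbo.sysssispackagefolders, so a quick duplicate check there might look like this (same idea, folder list omitted for brevity):

SELECT      p.name AS package_name,
            COUNT(*) AS copies
FROM        msdb.dbo.sysssispackages p
GROUP BY    p.name
HAVING      COUNT(*) > 1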

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

How Do I Kill All Sessions from a Windows Group?

Introduction

The stored procedure I am posting today will kill all sessions for users that are members of a given domain group. I use this stored procedure to keep ad-hoc (read: Access) users out of the way of nightly builds on my data warehouse. I have created two jobs for each group of users that I want to keep out of the way. The first job denies connect to the Windows group, then calls this stored procedure to kick all of the users off; I run this job just before the build begins. The second job grants connect back to the group after the nightly build finishes.

The Script

The stored procedure finds all users logged in via Windows Authentication by looking for the slash in their login name, then cursors through the list, calling xp_logininfo to get all of the Windows groups each user belongs to. If any of those groups match the one passed in, the session is killed.

CREATE PROCEDURE dbo.sp_dba_kill_spids_for_domain_group(@domain_group sysname)

AS

SET NOCOUNT ON

DECLARE @spid           smallint,
        @loginame       nvarchar(128),
        @command        nvarchar(max)

CREATE TABLE #Logins
(
    account_name        sysname NULL,
    type                char(8) NULL,
    privilege           char(9) NULL,
    mapped_login_name   sysname NULL,
    permission_path     sysname NULL
)

DECLARE csr CURSOR FAST_FORWARD READ_ONLY FOR
    SELECT  RTRIM(login_name),
            session_id
    FROM    sys.dm_exec_sessions
    WHERE   login_name LIKE '%\%' --Look for the slash between domain and login

OPEN csr

FETCH NEXT FROM csr INTO @loginame, @spid

WHILE @@FETCH_STATUS <> -1
 BEGIN
    INSERT #Logins
        EXEC xp_logininfo @loginame, 'all'
   
    IF EXISTS(  SELECT  *
                FROM    #Logins
                WHERE   permission_path = @domain_group    )
     BEGIN
        SELECT  @command = 'KILL ' + CAST(@spid AS nvarchar) + ' --' + @loginame
        PRINT @command
        EXEC sp_executesql @command
     END

    TRUNCATE TABLE #Logins

    FETCH NEXT FROM csr INTO @loginame, @spid
 END

CLOSE csr

DEALLOCATE csr

DROP TABLE #Logins

SET NOCOUNT OFF

GO
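
Here is a sketch of the first job described above, using a hypothetical domain group name; the second job would simply run the GRANT once the nightly build finishes.

-- Job 1: lock the group out, then kick off any existing sessions (group name is hypothetical)
DENY CONNECT SQL TO [MYDOMAIN\AdHocUsers]

EXEC dbo.sp_dba_kill_spids_for_domain_group @domain_group = N'MYDOMAIN\AdHocUsers'

-- Job 2, after the build:
-- GRANT CONNECT SQL TO [MYDOMAIN\AdHocUsers]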

Conclusion

As usual, I hope you find this stored procedure helpful. Be careful that you know who is in a group before killing the sessions. I have seen situations where people found out they were in the wrong group because their sessions kept getting killed. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

T-SQL Tuesday #005 – Self Service Performance Information

Introduction

Today I am taking a little detour from the scripts I have been posting to take part in “T-SQL Tuesday #005 – Reporting“.

My T-SQL Tuesday #005 entry allows selected users of a server to see what is going on with it and react accordingly, freeing members of the DBA team from constant questions about why the server is running slow. Admittedly it does often just lead to more informed complaints but at least the time to resolution is quicker.

The Reports

I am a sucker for free stuff so I was extremely happy when I discovered the SQL Server 2005 Performance Dashboard Reports. The reports are a package of Reporting Services reports that put a nice user interface over all of the DMVs that were new in SQL 2005. The Performance Dashboard allows you to see CPU history, waits, active sessions, waits by CPU, IO, etc. They are a powerful tool that will quickly guide users to whatever issue they are trying to solve.

The one drawback to the Performance Dashboard Reports is that they run as a custom report in Management Studio, so if you wanted to distribute them to non-administrators, a whole bunch of people would get way more access than you would ever want them to have. To solve this problem I converted the reports to Reporting Services.

The conversion to Reporting Services was really quite straightforward. The big problem was sorting out the connection strings. I ended up passing an extra parameter, ServerName, to each of the reports and then using that server name to put together a connection string for that server. The beauty of this approach is that it allows me to create a linked report for every server I want people to see. To keep it user friendly, the linked report calls the reports for the appropriate server using the correct server name. Here is how the connection string looks:

    <ConnectionProperties>
        <DataProvider>SQL</DataProvider>
        <ConnectString>="data source=" &amp; Parameters!ServerName.Value</ConnectString>
        <IntegratedSecurity>true</IntegratedSecurity>
      </ConnectionProperties>

What About Security?

If you are anything like me, you are shaking your head and thinking this is a security nightmare. It could be if not implemented correctly, but properly thought through it is reasonably safe.

Security works in layers here. First of all, I made sure the reports run under a specific account that is unique to them. This account has only the exact rights needed to get the job done.

The next layer of security is access to the linked reports themselves. Not everyone gets access to the linked reports. Only people in certain groups or roles should even be considered for access to these reports. By no means would I recommend leaving the reports open to your entire organization.

Lastly, the catch-all for security is that I have added a call to “How Can I Tell if a Windows Login has Rights to My Server?” to every report in the package, set to raise an exception if the report requestor does not have access to that server. Even if I screwed up somewhere and left a big hole that someone could exploit, short of letting anyone edit the reports, they will not be able to see any pertinent information because the first thing they will see is an exception. I also include the group that is allowing the user to see the report, letting me quickly see where any surprises are coming from if I get emailed a screenshot of a report from someone unexpected.
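
As a sketch of what that guard looks like, each report gets a dataset along these lines, with the report parameter mapped to the built-in User!UserID value so the requestor’s own login is what gets checked (the parameter name here is hypothetical):

-- Guard dataset run first by every report; @UserID is mapped to =User!UserID in the report
EXEC dbo.sp_dba_logininfo @loginame = @UserID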

Conclusion

So there you have it, my entry for T-SQL Tuesday #005. I hope to go into these reports further in future posts, sharing some of the modifications I have made. Some of the changes include bug fixes and changes to make the reports keep working in SQL 2008, but others are new reports. One of the more useful new reports looks at executions per second to identify logic bugs and caching opportunities.

Apologies for not including lots of fancy screenshots, but I am unable to show pictures of my work without showing pictures of the work of others. After including callouts to show what is mine and what is not the pictures just would not be fancy anymore.

How Can I Tell if a Windows Login has Rights to My Server?

Introduction

Here is another utility stored procedure that I use. This stored procedure will tell you how a user has access to your server. This is a great stored procedure to use for things like server dashboards where you only want people to see the dashboard for servers that they have access to without granting them all the rights that would go with the dashboard.

The Script

This stored procedure is quite simple. For a given login name it calls xp_logininfo, piping the results into a temporary table. Once the table is populated, it checks whether any rows came back and, based on the @hide_exceptions flag, whether an exception should be raised. If the login has no access and exceptions are not hidden, it raises an exception; otherwise it just moves along. Lastly, it returns any information found about the login to the caller.

CREATE PROCEDURE dbo.sp_dba_logininfo @loginame nvarchar(128), @hide_exceptions bit = 0

AS

 BEGIN

    CREATE TABLE #Logins
    (
        account_name        sysname NULL,
        type                char(8) NULL,
        privilege           char(9) NULL,
        mapped_login_name   sysname NULL,
        permission_path     sysname NULL
    )

    INSERT #Logins
        EXEC xp_logininfo @loginame

    IF (SELECT COUNT(*) FROM #Logins) = 0 AND @hide_exceptions = 0
     BEGIN
        RAISERROR('Specified user does not have access to this server.', 14, 1)
     END

    SELECT  account_name,
            type,
            privilege,
            mapped_login_name,
            permission_path
    FROM    #Logins

    DROP TABLE #Logins
 
 END

GO
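
Usage is straightforward. With a hypothetical login it looks like this; passing @hide_exceptions = 1 returns an empty result set instead of raising an exception when the login has no access:

EXEC dbo.sp_dba_logininfo @loginame = N'MYDOMAIN\SomeUser' -- raises if no access

EXEC dbo.sp_dba_logininfo @loginame = N'MYDOMAIN\SomeUser', @hide_exceptions = 1 -- returns empty instead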

Conclusion

As usual, I hope you find this stored procedure helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Script to Create A TempDB File Per Processor

Introduction

Trying to keep the streak alive, here is my 5th blog post in 5 days. This one is another script, and by now you are probably wondering if I am either really incredibly lazy or just hate screwing up. The answer to both is an unqualified YES.

The Need

The best practice from Microsoft is to have 1 TempDB file per processor on a dedicated disk. The files should be set large enough so that they do not need to grow. You can read more here. I should warn you that there is some debate on this recommendation and many people run .25 TempDB files per processor or even just 1 big file on less busy servers.

Update: There is not a lot of good information on this subject outside of the best practices recommendations and I am not trying to fill that void here. I am not advocating a certain number of files for TempDB in this post, that would probably occupy an entire series of blog posts and require intimate knowledge of the internals of the storage engine.

This post is a good start at dispelling some of the myths around how many files you should have for TempDB. Most noticeably, it only recommends multiple files when you are seeing latch contention in TempDB.

I plan to update this post directly as new information is discovered to get the best information out there so please either check back or subscribe to the feed to stay informed.

April 12, 2010 – Paul Randal has just blogged “A SQL Server DBA myth a day: (12/30) tempdb should always have one data file per processor core”. Please make sure to read that post before going down the path of 1 TempDB file per processor.

For me, on my big servers, I like to run with 1 TempDB file per core / processor. Unfortunately being big servers, these machines have a lot of cores. The last server I built had 32 cores. Creating individual files via the GUI would have taken 30 minutes to an hour assuming I did not get something wrong and have to redo it.

The Script

To satisfy my need to be lazy and protect me from myself I have written a script to add a TempDB file per processor. The script is easy enough to use. Step 1 is to figure out how much space you have in total on your TempDB data drive; I specifically say data because logs should be on another drive. Step 2 is to divide the size of your drive by the number of cores / processors in your server and make that the size of your current single TempDB file. To give an example: your new server has 32 cores and a 66 GB TempDB drive. Leaving a little headroom, 64 GB divided by 32 files works out to 2 GB each, so you would set the size of your TempDB file to 2 GB, with maybe a slightly higher maximum size if you prefer to leave autogrow on. Next you would simply double-check your math and settings, then run the script, and 31 exact copies of that file would be created. The whole thing should take less than 5 minutes.
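
Step 2 from that example might look like the following before running the script below; the 2048 MB figure comes straight from the 64 GB / 32 file math above, so adjust it to your own drive:

USE [master]
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB) -- size from the example above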

USE [master]
GO
DECLARE @cpu_count      int,
        @file_count     int,
        @logical_name   sysname,
        @file_name      nvarchar(520),
        @physical_name  nvarchar(520),
        @size           int,
        @max_size       int,
        @growth         int,
        @alter_command  nvarchar(max)

SELECT  @physical_name = physical_name,
        @size = size / 128,
        @max_size = max_size / 128,
        @growth = growth / 128
FROM    tempdb.sys.database_files
WHERE   name = 'tempdev'

SELECT  @file_count = COUNT(*)
FROM    tempdb.sys.database_files
WHERE   type_desc = 'ROWS'

SELECT  @cpu_count = cpu_count
FROM    sys.dm_os_sys_info

WHILE @file_count < @cpu_count -- Add * 0.25 here to add 1 file for every 4 cpus, * .5 for every 2 etc.
 BEGIN
    SELECT  @logical_name = 'tempdev' + CAST(@file_count AS nvarchar)
    SELECT  @file_name = REPLACE(@physical_name, 'tempdb.mdf', @logical_name + '.ndf')
    SELECT  @alter_command = 'ALTER DATABASE [tempdb] ADD FILE ( NAME =N''' + @logical_name + ''', FILENAME =N''' +  @file_name + ''', SIZE = ' + CAST(@size AS nvarchar) + 'MB, MAXSIZE = ' + CAST(@max_size AS nvarchar) + 'MB, FILEGROWTH = ' + CAST(@growth AS nvarchar) + 'MB )'
    PRINT   @alter_command
    EXEC    sp_executesql @alter_command
    SELECT  @file_count = @file_count + 1
 END

Conclusion

I hope you find this script helpful. Please let me know if you run into any issues with it or if it makes your life easier. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

What Do I Need to Do After Moving a Database to SQL 2005/2008?

Introduction

I recently moved a database from SQL 2000 to SQL 2005. It was really just a matter of detaching the database, copying the files, attaching it on the new server, flipping the compatibility mode, fixing the user accounts, and running a few queries. That’s it? Could it have been so easy? It turns out the answer is no. There are some steps that need to be taken after moving a database to SQL 2005 or 2008 from SQL 2000.

The Script

This is an easy one so I am skipping straight to the meat. This is a simple script that runs DBCC UPDATEUSAGE, sp_updatestats, then runs a DBCC CHECKDB with DATA_PURITY.

Running update usage is something that everyone was supposed to do on a regular basis on SQL 2000. In SQL 2005 and up the usage is maintained using a different algorithm and *should* never need to be updated again. This BOL entry explains the functionality in detail.

There are a number of changes to the handling of statistics in SQL 2005, outlined here, so the next step is to update statistics. Even if there were no changes in functionality I would probably update statistics here anyway; I cannot count the number of times updating statistics has yielded an unexpected boost in performance on high volume days. The new algorithm in 2005 and up first checks to see if statistics are out of date before updating them, so it is also much less intrusive.

Finally, the last step is to run a DBCC CHECKDB with the DATA_PURITY flag. I always like to make sure a database has not picked up any corruption on its journey between servers, so I would typically run a CheckDB here anyway, but in this case it is more important. Data purity checks were introduced in SQL 2005 to make sure the data in columns conforms to the definition of the columns, like making sure a smalldatetime column does not contain a value like ‘0001-01-01 12:32:20.023’. This checking is not turned on by default when you move an older database to SQL 2005 / 2008, so to complete the migration process it is necessary to run a CheckDB with the DATA_PURITY flag turned on. Once the check is run and the database passes, a flag is set on the database and every future CheckDB will do the data purity checks. In my mind this is an essential step to fully completing the migration to SQL 2005 / 2008.

--Set Connection Settings to Avoid CheckDB Failures
SET ARITHABORT ON
SET QUOTED_IDENTIFIER ON
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF

DECLARE @db_name    sysname

SELECT  @db_name = DB_NAME()

DBCC UPDATEUSAGE(@db_name)

EXEC (@db_name +'..sp_updatestats ''RESAMPLE''')

DBCC CHECKDB (@db_name) WITH ALL_ERRORMSGS, DATA_PURITY
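
If you want to confirm the data purity flag got set afterwards, the only way I know of is DBCC DBINFO, which is itself undocumented (fittingly); look for dbi_dbccFlags, where a value of 2 indicates data purity checks are enabled. Treat this as a sketch, since the field is not documented:

DBCC DBINFO ('MyDatabase') WITH TABLERESULTS -- hypothetical database name; check dbi_dbccFlags = 2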

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it. As always scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.