What is a Good Way to Quickly Disable SQL Agent Job Schedules?

Introduction

I recently blogged about a Stored Procedure to Sequentially Run SQL Agent Jobs and have been meaning to post this script as a follow-up, because it is very helpful for converting from individual job schedules to sequential job schedules while still keeping a back out path. The question of how to disable SQL Agent jobs came up on #SqlHelp yesterday, so I figured it was time to cover this script.

Background

When I looked at switching my maintenance jobs over to running sequentially rather than fussing with individual job schedules, it seemed like a daunting task. I needed a way to cover a lot of ground fast, so working manually through Management Studio was out, and I needed to be able to roll back quickly, so a flexible script was definitely a must.

I decided to write a script to disable the job schedules on all of my maintenance jobs. I figured this was the easiest way to cover the most ground and if things went bad I could just change the script to turn the enabled flag back on and re-run it.

The Script

This script has two major parts: disable the job schedules, then notify SQL Agent of the change. The script begins by declaring a table variable to store the changed records in, allowing SQL Agent to be notified of only the job schedules that have actually been changed. (NOTE: I started out with a regular #temp table here but got column definition errors that were corrected by changing to a table variable. I have not had time to research why this happened.)

I could have just notified on all jobs, but this invites the possibility of “fixing” the schedule on a job that had been broken by some other scripted update. Not wanting to take the chance of starting SQL Agent jobs that have not run in a while, I opted to carefully limit the scope of the notifications.

Once I have the results it is as simple as opening a cursor and calling sp_sqlagent_notify for each job schedule that has been updated.

USE msdb

DECLARE @updated_schedules table
(
    schedule_id int,
    job_id      uniqueidentifier,
    job_name    sysname
)

UPDATE  ss
SET     ss.[enabled] = 0
OUTPUT  INSERTED.schedule_id,
        sj.job_id,
        sj.name
INTO    @updated_schedules
FROM    msdb.dbo.sysjobs sj
        INNER JOIN msdb.dbo.sysjobschedules sjs
            ON sj.job_id = sjs.job_id
        INNER JOIN msdb.dbo.sysschedules ss
            ON sjs.schedule_id = ss.schedule_id
WHERE   sj.name LIKE '<pattern>'
           
DECLARE @schedule_id    int,
        @job_id         uniqueidentifier,
        @job_name       sysname

DECLARE csr CURSOR FAST_FORWARD READ_ONLY
FOR
    SELECT  schedule_id,
            job_id,
            job_name
    FROM    @updated_schedules

OPEN csr

FETCH NEXT FROM csr INTO @schedule_id, @job_id, @job_name

WHILE @@FETCH_STATUS <> -1
 BEGIN
   
    EXECUTE msdb.dbo.sp_sqlagent_notify @op_type     = N'S',
                                        @job_id      = @job_id,
                                        @schedule_id = @schedule_id,
                                        @action_type = N'U'
   
    PRINT 'Called sp_sqlagent_notify for ' + @job_name
   
    FETCH NEXT FROM csr INTO @schedule_id, @job_id, @job_name
   
 END

CLOSE csr
DEALLOCATE csr
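
If the change needs to be backed out, the same script can be re-run with the enabled flag flipped back on; only the UPDATE changes and the notification loop stays the same. A minimal sketch of the rollback, using the same placeholder name pattern:

UPDATE  ss
SET     ss.[enabled] = 1
OUTPUT  INSERTED.schedule_id,
        sj.job_id,
        sj.name
INTO    @updated_schedules
FROM    msdb.dbo.sysjobs sj
        INNER JOIN msdb.dbo.sysjobschedules sjs
            ON sj.job_id = sjs.job_id
        INNER JOIN msdb.dbo.sysschedules ss
            ON sjs.schedule_id = ss.schedule_id
WHERE   sj.name LIKE '<pattern>'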

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

What's a Good Rule for Max Degree of Parallelism?

Introduction

I am one of those people that believe that anything worth doing is worth having a script to do it. Setting the Max Degree of Parallelism is no exception. In this post I will go through the script I use as well as the metrics I watch to make sure the setting is correct.

Reasoning

Max Degree of Parallelism (MAXDOP) is a setting that many people do not get right, and for good reason: it is tricky. The easiest scenario is a server with NUMA architecture; for NUMA servers the starting point is the number of cores per NUMA node. For other servers the answer is often “It Depends” followed by “It Changes Over Time”. The good news is that the setting is fairly easy to tune by watching wait stats.
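
If you are not sure how many cores each NUMA node has, one way to check is to count the visible schedulers per node. This is just a quick sketch, not part of the script below:

SELECT  parent_node_id AS numa_node,
        COUNT(*) AS scheduler_count
FROM    sys.dm_os_schedulers
WHERE   status = 'VISIBLE ONLINE'
GROUP BY parent_node_id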

The rule of thumb I use for gauging whether or not my settings are correct is to make sure that CXPACKET waits are slightly higher than SOS_SCHEDULER_YIELD. My goal is to get them as close as possible to each other without SOS_SCHEDULER_YIELD waits exceeding CXPACKET waits. Depending on your workload and the amount of tuning you are doing you may need to evaluate your settings more often.
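
To compare those two wait types, a query along these lines against sys.dm_os_wait_stats is enough; keep in mind the counters are cumulative since the last service restart or since the stats were last cleared:

SELECT  wait_type,
        waiting_tasks_count,
        wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type IN ('CXPACKET', 'SOS_SCHEDULER_YIELD')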

The Script

This script is dirt simple, but do not be deceived: this is not a change to take lightly. It is a great starting point for a new server, but careful research should be done before running it on any existing production system. The basic logic is to enable ‘show advanced options’, then get the hyperthread_ratio from sys.dm_os_sys_info. This gives you the number of cores per socket, which is typically equal to the number of cores on a NUMA node. Hyperthreading is not the same as multiple cores, so I am not sure I would use this script on a hyperthreaded server. Finally the script sets ‘max degree of parallelism’ to the hyperthread ratio and turns ‘show advanced options’ back off.

EXEC sys.sp_configure N'show advanced options', '1'
RECONFIGURE WITH OVERRIDE

DECLARE @hyperthread_ratio  INT

SELECT  @hyperthread_ratio = hyperthread_ratio
FROM    sys.dm_os_sys_info

EXEC sys.sp_configure N'max degree of parallelism', @hyperthread_ratio
RECONFIGURE WITH OVERRIDE

EXEC sys.sp_configure N'show advanced options', '0'
RECONFIGURE WITH OVERRIDE

Conclusion

As usual, I hope you find this script helpful. This script and the basic rules I have laid out work well for me in my environment. Test it carefully in yours. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

How Do I Find SSIS Packages that Exist in Multiple Folders?

Introduction

If you are using SQL Server storage for your SSIS packages and have multiple folders to make life easier, then there is no doubt you have run into this before. A package gets moved to production in what appears to be the right folder, but the requestor reports that the old package is still running. You move the package again and still no change. Eventually you realize that you have duplicate packages in different folders on your server and set about the painful task of opening each folder, hunting for other copies of the package. This is not so bad if you have 2-3 folders, but the more complex your environment the more painful this becomes. Having run into this on more than one occasion myself, I decided to write a script to do the work for me.

The Script

This is another easy one so I will not spend too much time on the mechanics of it. Get all packages grouped by name where COUNT(*) > 1, then dig out the folder names and concatenate them into a nice string. I chose to create a folder name list because it makes it easy to create a report that can be emailed from the server. This script does not help if the packages are not named the same, but at that point the problem should be pretty obvious to the person trying to run it.

SELECT      p.name AS package_name,
            REPLACE(RTRIM((
                SELECT      pf2.foldername + ' '
                FROM        msdb.dbo.sysdtspackagefolders90 pf2
                            INNER JOIN msdb.dbo.sysdtspackages90 p2
                                ON pf2.folderid = p2.folderid
                WHERE       p2.name = p.name
                ORDER BY    pf2.foldername
                FOR XML PATH('')
            )), ' ', ', ') AS folder_names
FROM        msdb.dbo.sysdtspackages90 p
GROUP BY    p.name
HAVING      COUNT(*) > 1

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

How Do I Kill All Sessions from a Windows Group?

Introduction

The stored procedure I am posting today will kill all sessions for users that are members of a given domain group. I use this stored procedure to keep ad-hoc (read: Access) users out of the way of nightly builds on my data warehouse. I have created two jobs for each group of users that I want to keep out of the way. The first job denies connect to the Windows Group and then calls this stored procedure to kick all of the users off; I run this job just before the build begins. The second job grants connect back to the group after the nightly build finishes.
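
For reference, here is roughly what the two jobs run. YOURDOMAIN\AdHocUsers is a made-up group name for illustration, and the group must already exist as a Windows login on the server:

-- Job 1: run just before the nightly build starts
DENY CONNECT SQL TO [YOURDOMAIN\AdHocUsers]
EXEC dbo.sp_dba_kill_spids_for_domain_group @domain_group = N'YOURDOMAIN\AdHocUsers'

-- Job 2: run after the nightly build finishes
GRANT CONNECT SQL TO [YOURDOMAIN\AdHocUsers]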

The Script

The stored procedure gets all users logged in via Windows Authentication by looking for the slash in their login name, then cursors through the list of users, getting all Windows groups for each user. If any Windows group the user is a member of matches the one passed in, the session is killed.

CREATE PROCEDURE dbo.sp_dba_kill_spids_for_domain_group(@domain_group sysname)

AS

SET NOCOUNT ON

DECLARE @spid           smallint,
        @loginame       nvarchar(128),
        @command        nvarchar(max)

CREATE TABLE #Logins
(
    account_name        sysname NULL,
    type                char(8) NULL,
    privilege           char(9) NULL,
    mapped_login_name   sysname NULL,
    permission_path     sysname NULL
)

DECLARE csr CURSOR FAST_FORWARD READ_ONLY FOR
    SELECT  RTRIM(login_name),
            session_id
    FROM    sys.dm_exec_sessions
    WHERE   login_name LIKE '%\%' --Look for the slash between domain and login

OPEN csr

FETCH NEXT FROM csr INTO @loginame, @spid

WHILE @@FETCH_STATUS <> -1
 BEGIN
    INSERT #Logins
        EXEC xp_logininfo @loginame, 'all'
   
    IF EXISTS(  SELECT  *
                FROM    #Logins
                WHERE   permission_path = @domain_group    )
     BEGIN
        SELECT  @command = 'KILL ' + CAST(@spid AS nvarchar) + ' --' + @loginame
        PRINT @command
        EXEC sp_executesql @command
     END

    TRUNCATE TABLE #Logins

    FETCH NEXT FROM csr INTO @loginame, @spid
 END

CLOSE csr

DEALLOCATE csr

DROP TABLE #Logins

SET NOCOUNT OFF

GO

Conclusion

As usual, I hope you find this stored procedure helpful. Be careful that you know who is in a group before killing the sessions. I have seen situations where people found out they were in the wrong group because their sessions kept getting killed. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

T-SQL Tuesday #005 – Self Service Performance Information

Introduction

Today I am taking a little detour from the scripts I have been posting to take part in “T-SQL Tuesday #005 – Reporting”.

My T-SQL Tuesday #005 entry allows selected users of a server to see what is going on with it and react accordingly, freeing members of the DBA team from constant questions about why the server is running slow. Admittedly it does often just lead to more informed complaints but at least the time to resolution is quicker.

The Reports

I am a sucker for free stuff so I was extremely happy when I discovered the SQL Server 2005 Performance Dashboard Reports. The reports are a package of Reporting Services reports that put a nice user interface over all of the DMVs that were new in SQL 2005. The Performance Dashboard allows you to see CPU History, Waits, Active Sessions, Waits By CPU, IO etc. They are a powerful tool that will quickly guide users directly to whatever issue it is they are looking to solve.

The one drawback to the Performance Dashboard Reports is that they run as a custom report in Management Studio, so if you wanted to distribute them to non-administrators then a whole bunch of people are going to get way more access than you would ever want them to have. To solve this problem I converted the reports to Reporting Services.

The change to Reporting Services was really quite straightforward. The big problem was sorting out the connection strings. I ended up passing an extra parameter, ServerName, to each of the reports and then using that server name to put together a connection string for that server. The beauty of this approach is that it allows me to create a linked report for every server I want people to see. To keep it user friendly the linked report calls the reports for the appropriate server using the correct server name. Here is how the connection string looks:

    <ConnectionProperties>
        <DataProvider>SQL</DataProvider>
        <ConnectString>="data source=" &amp; Parameters!ServerName.Value</ConnectString>
        <IntegratedSecurity>true</IntegratedSecurity>
      </ConnectionProperties>

What About Security?

If you are anything like me you are shaking your head and thinking this is a security nightmare. It could be if not implemented correctly, but properly thought through it is reasonably safe.

Security works in layers here. First of all I made sure the reports are running under a specific account that is unique to them. This account only has the exact rights needed to get the job done.

The next layer of security is access to the linked reports themselves. Not everyone gets access to the linked reports. Only people in certain groups or roles should even be considered for access to these reports. By no means would I recommend leaving the reports open to your entire organization.

Lastly, the catch-all for security is that I have added a call to “How Can I Tell if a Windows Login has Rights to My Server?“, set to raise an exception if the report requestor does not have access to that server, to every report in the package. Even if I screwed up somewhere and left a big hole that someone could exploit, short of letting anyone edit the reports, they will not be able to see any pertinent information because the first thing they see will be an exception. I also include the group that is allowing the user to see the report, allowing me to quickly see where any surprises are coming from if I get emailed a screenshot of a report from someone unexpected.
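
For what it is worth, the wiring for that check is simple. This is only a sketch of how a dataset in each report could call the stored procedure from the next post, assuming a report parameter named ReportUser that defaults to the built-in User!UserID value:

EXEC dbo.sp_dba_logininfo @loginame = @ReportUser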

Conclusion

So there you have it, my entry for T-SQL Tuesday #005. I hope to go into these reports further in future posts, sharing some of the modifications I have made. Some of the changes include bug fixes and changes to make the reports keep working in SQL 2008, but others are new reports. One of the more useful new reports looks at executions per second to identify logic bugs and caching opportunities.

Apologies for not including lots of fancy screenshots, but I am unable to show pictures of my work without showing pictures of the work of others. After including callouts to show what is mine and what is not the pictures just would not be fancy anymore.

How Can I Tell if a Windows Login has Rights to My Server?

Introduction

Here is another utility stored procedure that I use. This stored procedure will tell you how a user has access to your server. This is a great stored procedure to use for things like server dashboards where you only want people to see the dashboard for servers that they have access to without granting them all the rights that would go with the dashboard.

The Script

This stored procedure is quite simple. For a given login name, call xp_logininfo and pipe the results into a temporary table. Once the table is populated, check whether it has any rows; if it is empty and the @hide_exceptions flag is not set, raise an exception, otherwise just move along. Lastly, return any information about the login to the caller.

CREATE PROCEDURE dbo.sp_dba_logininfo @loginame nvarchar(128), @hide_exceptions bit = 0

AS

 BEGIN

    CREATE TABLE #Logins
    (
        account_name        sysname NULL,
        type                char(8) NULL,
        privilege           char(9) NULL,
        mapped_login_name   sysname NULL,
        permission_path     sysname NULL
    )

        INSERT #Logins
            EXEC xp_logininfo @loginame

    IF (SELECT COUNT(*) FROM #Logins) = 0 AND @hide_exceptions = 0
     BEGIN
        RAISERROR('Specified user does not have access to this server.', 14, 1)
     END

    SELECT  account_name,
            type,
            privilege,
            mapped_login_name,
            permission_path
    FROM    #Logins

    DROP TABLE #Logins
 
 END

GO
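
Calling it is straightforward; YOURDOMAIN\jdoe is just a made-up login used for illustration:

EXEC dbo.sp_dba_logininfo @loginame = N'YOURDOMAIN\jdoe'

--Suppress the exception and just return an empty result set if the login has no access
EXEC dbo.sp_dba_logininfo @loginame = N'YOURDOMAIN\jdoe', @hide_exceptions = 1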

Conclusion

As usual, I hope you find this stored procedure helpful. Please let me know if you run into any issues with it or know a better way to do the same thing. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Script to Create A TempDB File Per Processor

Introduction

Trying to keep the streak alive, so here is my 5th blog post in 5 days. This one is another script, and by now you are probably wondering if I am either really incredibly lazy or just hate screwing up. The answer to both is an unqualified YES.

The Need

The best practice from Microsoft is to have 1 TempDB file per processor on a dedicated disk. The files should be set large enough so that they do not need to grow. You can read more here. I should warn you that there is some debate on this recommendation and many people run 0.25 TempDB files per processor, or even just one big file on less busy servers.

Update: There is not a lot of good information on this subject outside of the best practices recommendations and I am not trying to fill that void here. I am not advocating a certain number of files for TempDB in this post, that would probably occupy an entire series of blog posts and require intimate knowledge of the internals of the storage engine.

This post is a good start at dispelling some of the myths around how many files you should have for TempDB. Most notably, it only recommends multiple files when you are seeing latch contention in TempDB.

I plan to update this post directly as new information is discovered to get the best information out there so please either check back or subscribe to the feed to stay informed.

April 12, 2010 – Paul Randal has just blogged “A SQL Server DBA myth a day: (12/30) tempdb should always have one data file per processor core”. Please make sure to read that post before going down the path of 1 TempDB file per processor.

For me, on my big servers, I like to run with 1 TempDB file per core / processor. Unfortunately being big servers, these machines have a lot of cores. The last server I built had 32 cores. Creating individual files via the GUI would have taken 30 minutes to an hour assuming I did not get something wrong and have to redo it.

The Script

To satisfy my need to be lazy and protect me from myself I have written a script to add a TempDB file per processor. The script is easy enough to use. Step 1 is to figure out how much space you have in total on your TempDB data drive. I specifically say data because logs should be on another drive. Step 2 is to divide the size of your drive by the number of cores / processors in your server and make that the size of your current single TempDB file. To give an example: your new server has 32 cores and a 66 GB TempDB drive. Leaving a little headroom, 64 GB divides evenly by 32, so you would set the size of your TempDB file to 2 GB, with maybe a slightly higher maximum size if you prefer to leave autogrow on. Next you simply double check your math and settings, run the script, and 31 exact copies of that file will be created. The whole thing should take less than 5 minutes.
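
Sticking with the 32 core example, the existing tempdev file can be resized before running the script below. The sizes here are only illustrative and should be adjusted to your own drive:

USE [master]
GO
ALTER DATABASE [tempdb]
MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB, MAXSIZE = 2560MB, FILEGROWTH = 256MB)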

USE [master]
GO
DECLARE @cpu_count      int,
        @file_count     int,
        @logical_name   sysname,
        @file_name      nvarchar(520),
        @physical_name  nvarchar(520),
        @size           int,
        @max_size       int,
        @growth         int,
        @alter_command  nvarchar(max)

SELECT  @physical_name = physical_name,
        @size = size / 128,
        @max_size = max_size / 128,
        @growth = growth / 128
FROM    tempdb.sys.database_files
WHERE   name = 'tempdev'

SELECT  @file_count = COUNT(*)
FROM    tempdb.sys.database_files
WHERE   type_desc = 'ROWS'

SELECT  @cpu_count = cpu_count
FROM    sys.dm_os_sys_info

WHILE @file_count < @cpu_count -- Add * 0.25 here to add 1 file for every 4 cpus, * .5 for every 2 etc.
 BEGIN
    SELECT  @logical_name = 'tempdev' + CAST(@file_count AS nvarchar)
    SELECT  @file_name = REPLACE(@physical_name, 'tempdb.mdf', @logical_name + '.ndf')
    SELECT  @alter_command = 'ALTER DATABASE [tempdb] ADD FILE ( NAME =N''' + @logical_name + ''', FILENAME =N''' +  @file_name + ''', SIZE = ' + CAST(@size AS nvarchar) + 'MB, MAXSIZE = ' + CAST(@max_size AS nvarchar) + 'MB, FILEGROWTH = ' + CAST(@growth AS nvarchar) + 'MB )'
    PRINT   @alter_command
    EXEC    sp_executesql @alter_command
    SELECT  @file_count = @file_count + 1
 END

Conclusion

I hope you find this script helpful. Please let me know if you run into any issues with it or if it makes your life easier. Please keep in mind that scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

What Do I Need to Do After Moving a Database to SQL 2005/2008?

Introduction

I recently moved a database from SQL 2000 to SQL 2005. It was really just a matter of detaching the database, copying the files, attaching it on the new server, flipping the compatibility mode, fixing the user accounts, and running a few queries. That's it? Could it have been so easy? It turns out the answer is no. There are some steps that need to be taken after moving a database to SQL 2005 or 2008 from SQL 2000.
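
As a side note, flipping the compatibility mode is a one-liner on both versions. YourDatabase is just a placeholder name, and 90 is the SQL 2005 level (use 100 for SQL 2008):

EXEC sp_dbcmptlevel 'YourDatabase', 90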

The Script

This is an easy one so I am skipping straight to the meat. This is a simple script that runs DBCC UPDATEUSAGE, sp_updatestats, then runs a DBCC CHECKDB with DATA_PURITY.

Running update usage is something that everyone was supposed to do on a regular basis on SQL 2000. In SQL 2005 and up the usage is maintained using a different algorithm and *should* never need to be updated again. This BOL entry explains the functionality in detail.

There are a number of changes to the handling of statistics in SQL 2005 that are outlined here so the next step is to update statistics. Even if there were no changes in functionality I would probably update statistics here anyway. I cannot count the number of times updating statistics has yielded an unexpected boost in performance on high volume days. The new algorithm in 2005 and up first checks to see if statistics are out of date before updating them so it is also much less intrusive.

Finally, the last step is to run a DBCC CheckDB with the DATA_PURITY flag. I always like to make sure a database has not picked up any corruption on its journey between servers, so I would typically run a CheckDB here anyway, but in this case it is more important. Data purity checks were introduced in SQL 2005 to make sure the data in columns conforms to the definition of the columns, like making sure a smalldatetime column does not contain a value like ‘0001-01-01 12:32:20.023’. This checking is not turned on by default when you move an older database to SQL 2005 / 2008, so to complete the migration process it is necessary to run a CheckDB with the DATA_PURITY flag turned on. Once the check is run and the database passes, a flag is set on the database and every future CheckDB will do the data purity checks. In my mind this is an essential step to fully completing the migration to SQL 2005 / 2008.

--Set Connection Settings to Avoid CheckDB Failures
SET ARITHABORT ON
SET QUOTED_IDENTIFIER ON
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF

DECLARE @db_name    sysname

SELECT  @db_name = DB_NAME()

DBCC UPDATEUSAGE(@db_name)

EXEC (@db_name +'..sp_updatestats ''RESAMPLE''')

DBCC CHECKDB (@db_name) WITH ALL_ERRORMSGS, DATA_PURITY

Conclusion

As usual, I hope you find this script helpful. Please let me know if you run into any issues with it. As always scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

How Do I Identify Invalid or Unneeded Logins?

Introduction

I was looking at one of my servers and was caught off-guard by how many individual users had been granted rights. To make it worse, I recognized a few user names as former coworkers. Clearly it was time for some house cleaning, and time to find a script.

The Process

I went looking in my script library for anything that might help me do this analysis faster and found a good start: a script to validate user logins. Well, more like a note really, based on a conversation on Twitter between Thomas LaRock (Blog|Twitter) and Tim Ford (Blog|Twitter) about a stored procedure called sp_validatelogins, which checks whether each Windows login on your SQL instance still matches a valid login on your domain. Thomas LaRock does a great job of documenting it here.

Next I needed to figure out which users should have been covered by groups. My first answer is all of them, but being realistic I decided I would identify all users that have access to the server both by individual login and by group, then see if the group already has the same rights the user needs, or if it could or should. It turns out this was easy: a cursor over Windows users, with a call to xp_logininfo to check for group membership, tells me exactly what to look at, including the groups and the command to get them, because I often end up pasting that into an IM.

As I started working through test data I began to notice database users whose logins no longer exist on the server. Again, this one was pretty easy. A query against sys.server_principals with a right outer join to sys.database_principals quickly shows all users that exist in a database but do not have logins on the server. Adding a select into a temp table and wrapping it all in sp_MSforeachdb put it all together into a nice recordset that I could then work from.

The Script

Here is the script I ended up with; run it in text mode rather than grid for better results. I would advise against trying to automate this any further, or even taking any actions, without carefully researching the impact of each change before making it. Messing with security is a quick way to get a lot of attention.

SET NOCOUNT ON

CREATE TABLE #invalid_logins
(
    [sid]   varbinary(85),
    [name]  sysname
)

CREATE TABLE #Logins
(
    account_name        sysname NULL,
    type                char(8) NULL,
    privilege           char(9) NULL,
    mapped_login_name   sysname NULL,
    permission_path     sysname NULL
)

INSERT #invalid_logins
    EXEC sp_validatelogins

DECLARE @name sysname

DECLARE csr CURSOR FAST_FORWARD READ_ONLY FOR
    SELECT  sl.name
    FROM    sys.syslogins sl
            LEFT OUTER JOIN #invalid_logins il
                ON sl.name = il.name
    WHERE   sl.isntname = 1
            AND sl.isntgroup = 0
            AND il.name IS NULL

OPEN csr

FETCH NEXT FROM csr INTO @name

WHILE @@FETCH_STATUS <> -1
 BEGIN
    INSERT #Logins
        EXEC xp_logininfo @acctname = @name, @option = 'all'
    FETCH NEXT FROM csr INTO @name
 END

CLOSE csr
DEALLOCATE csr

PRINT 'The following logins are no longer valid and should be removed from this server:'
SELECT  name
FROM    #invalid_logins

PRINT 'The following logins have access by login and by group and might not need access by login:'
SELECT      COUNT(*) AS num_of_groups,
            account_name,
            REPLACE(RTRIM((
                SELECT  permission_path + ' '
                FROM    #Logins
                WHERE   account_name = l.account_name
                FOR XML PATH('')
            )), ' ', ', ') AS group_names,
            'EXEC xp_logininfo @acctname=' + QUOTENAME(account_name) + ', @option=''all''' AS command_to_see_groups
FROM        #Logins l
GROUP BY    account_name,
            [type]
HAVING      COUNT(*) > 1
ORDER BY    num_of_groups DESC

DROP TABLE #invalid_logins
DROP TABLE #Logins

PRINT 'The following result set shows users that were deleted from the server but not the individual databases'
CREATE TABLE #databases_with_orphan_users
(
    db_with_orphan_login    sysname,
    orphan_login            sysname,
    permissions_count       int
)

EXEC sp_MSforeachdb 'USE ?
INSERT #databases_with_orphan_users
SELECT  ''?'' AS db_with_orphan_login,
        dp.name,
        (SELECT COUNT(*) FROM sys.database_permissions WHERE grantee_principal_id = dp.principal_id) AS permissions_count
FROM    sys.server_principals sp
        RIGHT OUTER JOIN sys.database_principals dp
            ON sp.sid = dp.sid
WHERE   sp.sid IS NULL
        AND dp.type NOT IN (''R'')
        AND dp.name NOT IN (''guest'', ''sys'', ''INFORMATION_SCHEMA'')
        AND dp.type_desc NOT IN (''APPLICATION_ROLE'')'

SELECT  db_with_orphan_login,
        orphan_login,
        permissions_count
FROM    #databases_with_orphan_users

DROP TABLE #databases_with_orphan_users

Conclusion

I hope you find this script helpful. I have a feeling this script will be one of my favorites because it takes a lot of tedious analysis and boils it down to a quick hit list. Please let me know if you run into any issues with it. As always scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.

Is Anybody Using That Stored Procedure?

Introduction

From time to time I get asked to check if a stored procedure is still used in preparation for dropping it. Last week I tweeted my wish for a DMV that would give me the usage statistics for stored procedures to which Aaron Bertrand (Blog|Twitter) quickly replied that SQL 2008 has sys.dm_exec_procedure_stats that does exactly what I need. Unfortunately the server I needed to do the analysis on was running SQL 2005.
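
For anyone already on SQL 2008, a query along these lines against sys.dm_exec_procedure_stats answers the question directly, keeping in mind that it only covers procedures whose plans are currently in cache. YourDatabase is a placeholder:

SELECT  OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
        ps.execution_count,
        ps.last_execution_time
FROM    sys.dm_exec_procedure_stats ps
WHERE   ps.database_id = DB_ID('YourDatabase')
ORDER BY ps.last_execution_time DESC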

Where It Went So Wrong

Rather than digging into my script library to see if I had anything that might help do the analysis, I fell into classic tunnel vision, cursing myself for not pushing harder to get this application onto SQL 2008. I decided that the only way to figure out if these stored procedures were still used was to fire up Profiler and trace for the stored procedure names. I was not looking forward to doing this because it is easy to get a trace wrong, not capturing any data or bogging down the server, but I pushed on. I built my server-side trace script, tried it out on a less used server, then fired it up on my production server. The production server immediately went to 100% CPU, completely killing all applications that depended on it. The only way to get the applications back online was to restart the SQL Service. This was absolutely the worst I have ever been burned by Profiler.

A Better Way

Last night I decided I was done replaying the events of the day over and over in my head and threw what had happened out to #sqlhelp on Twitter to get advice on what I could have done better. The basic feedback was to try not to trace on multiple strings, and if you have to, then try multiple traces. There really was no magic tipping point that anybody could point to as to why doing something in one place would have such disastrous results while doing it somewhere else would not. This really points to what I have known for a long time: Profiler is not your friend. Profiler is more like that former coworker that you could not trust but had to work with on some of your most important projects.

Talking to Andrew Kelly (Blog|Twitter) led to one of my more pronounced face-palm moments, ever. He pointed out that “2005 has similar dmv’s but if its a seldom run query the plan may not be in cache most of the time.” In iteration 1 of the same project I ran into less pronounced issues tracing stored procedures to make sure they were no longer used. At that time I wrote a script to check whether or not an object had a plan in the cache. That very script was sitting out in my common scripts directory the whole time. Ultimate Face-Palm.

The Script

Here is the script I ultimately used to figure out if the stored procedures were used. Pulling back the plan is optional, but I find it helpful because it will tell you what parameters an object was compiled with. This can often point to where it was called from or even who the user is that called it. Be careful with this query; if you choose to run it on a production server you may see a performance impact.

Note that I explicitly specify the database name rather than rely on the USE statement. When the absence of results is what you are looking for it is too easy to confuse a mistake for success.

Updated to correct bug caused by improper usage of OBJECT_ID, pointed out by Simon Sabin (blog). I opted to keep the CROSS APPLY rather than going with an OUTER APPLY because I only want to return results for plans that are cached. The OBJECT_ID syntax is taken directly from here: http://msdn.microsoft.com/en-us/library/ms190328.aspx. I could not make it work by passing DB_ID as a parameter.

SELECT  q.text,
        p.query_plan, *
FROM    sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) q
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) p
WHERE   q.dbid = DB_ID('[ database_name ]')
            AND q.objectid = OBJECT_ID('[ database_name . [ schema_name ] . | schema_name . ] object_name ]')
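
To make the placeholder syntax concrete, a call for a single stored procedure would look something like this; SalesDB and dbo.usp_GetOrders are made-up names used only for illustration:

SELECT  q.text,
        p.query_plan
FROM    sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) q
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) p
WHERE   q.dbid = DB_ID('SalesDB')
            AND q.objectid = OBJECT_ID('SalesDB.dbo.usp_GetOrders')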

Conclusion

I learned a lot of tough lessons on this one. I suppose even tougher because they were lessons I already should have learned. Hopefully this blog and the included script will prevent someone else from falling into the same trap. Please let me know if you run into any issues with the script or if it bails you out of any jams. As always scripts from the internet are like Halloween candy, inspect before consumption. I offer no warranty beyond a sympathetic ear if you should run into any issues.