Category Archives: dba

T-SQL Tuesday #028 – Jack of All Trades ie Integration Specialist

Argenis Fernandez (blog | twitter) is hosting the latest T-SQL Tuesday and asking about specialization.

For the earlier part of my career I was doing application development while really wanting to do database work. I also learned plenty about systems administration while in college. The variety of tasks made me a Jack of All Trades with a lean toward databases. I wanted to specialize as a DBA or DB developer, but the jobs were never there until I moved to a bigger market.

When I was able to specialize, I was happy I could focus on one thing and get really deep with it. Now I am able to get deep with SQL Server, but I find myself constantly recalling information I learned in my previous positions to help with current issues. CLR, PowerShell, and little custom-written utilities all draw from my .NET experience, which I am grateful for. I run into other DBAs who have no clue what goes on outside of their world. Having multiple skills allows me to create better solutions, as I don't just consider what I'm responsible for, but all the connections to my piece as well.

Being a generalist early in my career has given me a better understanding of the whole stack of any given solution. It is easier for me to spot an integration problem or see a problem with an application's implementation than it would have been without that variety of experience. Jacks of All Trades tend to be better at seeing the big picture as they don't focus on just one area. They can be the ones that help everything fit together better. Specialists are an important part of our ecosystem, but without someone to put the pieces together, you would have perfect components in a flawed system. Solutions really are greater than the sum of their parts.

Extract CLR Assemblies from SQL Server

I’ve run into a few situations that required examining the existing CLR assemblies on a server. Whether I needed to compare two versions to make sure they were the same or confirm something in the assembly itself, this script has come in handy. Point it at a database, give it an output path, and it will save all the assemblies in that database as DLLs in the given folder. You can then use a .NET disassembler to confirm suspicions, or a binary comparison to make sure the DLL matches what it should.

<#
    .SYNOPSIS
        Extracts CLR Assemblies from a SQL 2005+ database.
    .DESCRIPTION
        Extracts CLR Assemblies from a SQL 2005+ database.
    .PARAMETER  ServerInstance
        The Server\Instance to connect to
    .PARAMETER  Database
        The Database to extract assemblies from
    .PARAMETER  OutPath
        The path to output the assemblies
    .EXAMPLE
        PS C:\> .\Get-SqlAssemblies.ps1 -ServerInstance 'MyServer\MyInstance' -Database 'MyDatabase'
        This example shows how to call Get-SqlAssemblies with named parameters.
#>
param (
    [string]$ServerInstance = 'LOCALHOST',
    [string]$Database,
    [string]$OutPath = '.'
)

#Correct for variations of incoming ServerInstance names
if (-not $ServerInstance.Contains('\')) { $ServerInstance += '\DEFAULT' }
if ($ServerInstance.Contains(',') -and -not $ServerInstance.Contains('`,')) { $ServerInstance = $ServerInstance.Replace(',', '`,') }

#Walk each assembly in the database and write its file(s) back out to disk
dir SQLSERVER:\SQL\$ServerInstance\Databases\$Database\Assemblies | %{
    $_.SqlAssemblyFiles | %{
        $str = $_.Name
        $path = Join-Path $OutPath ($str.Substring($str.LastIndexOf('\') + 1))
        Set-Content -Path $path -Value $_.GetFileBytes() -Encoding Byte
    }
}

Split Typeperf Output in More Manageable Chunks

I really like using typeperf for perfmon counter collection. It lets me save a collection of counters to monitor and store those readings in a CSV file for later analysis. Sometimes I end up running the output through the PAL tool. Unfortunately, the PAL tool generates graphs that are fairly narrow, so long monitoring sessions make those graphs really cramped. I wanted a way to split the typeperf output to get a reasonable number of data points in each graph. A side benefit is that the processing per file is a lot quicker.
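For context, my collections are kicked off with a command along these lines (the counter-list and output file names here are made up for the example):

```
typeperf -cf counters.txt -si 15 -o SqlCounters.csv -f CSV
```

-cf reads the counter list from a file, -si sets the sample interval in seconds, and -f CSV writes the log in the format the split script expects.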

The script takes a filepath as its only argument. It splits by the hour and copies the header row to each new file.

param (
	[string]$filepath #incoming file
)

#Does the file exist
if (Test-Path $filepath) {
	$infile = Get-Item $filepath
	$data = Get-Content $infile
}
#if not, exit the script
else {
	Write-Warning "Failed to find $filepath"
	exit
}

#Get the header to be able to repeat it in each file
$header = $data | Select-Object -First 1
$lastHour = $null
$outFile = $null

#Loop through the data, skipping the header line.
$data | Select-Object -Skip 1 | ForEach-Object {
	#The timestamp is the first quoted field of each row
	$date = [DateTime]::Parse($_.Substring(1, $_.IndexOf('"', 1) - 1))
	#Start a new output file whenever the hour changes
	if ($lastHour -eq $null -or $date.Hour -ne $lastHour.Hour) {
		$lastHour = $date.AddMinutes(-$date.Minute).AddSeconds(-$date.Second)
		$outFile = Join-Path $infile.Directory ("{0}_{1}{2}" -f $infile.BaseName, $lastHour.ToString('yyyyMMdd_HHmmss'), $infile.Extension)
		$header | Out-File $outFile -Encoding UTF8
	}
	$_ | Out-File $outFile -Encoding UTF8 -Append
}

Update: I cleaned up the script a little based on Jeff’s comments below.

Automate CPU-Z Capture to Check for Throttled Processors

Several professionals have posted about using CPU-Z to check whether or not your processors are running at full speed. Some recommendations are to check your servers every couple of months or, if virtual, every time your guest moves. Me being lazy, er, efficient, I'd rather automate having these servers send me their info on a scheduled basis.

First things first. Be sure to copy CPU-Z out to all your servers, preferably in a consistent folder. Edit the cpuz.ini file to turn off extra scanning (we just need the CPU info).


Save the following as a script and schedule it to run on a monthly or so basis using your favorite scheduler. The email portion was taken from this post.

& '.\cpuz64.exe' "-txt=$env:COMPUTERNAME"
Start-Sleep -Seconds 15 #Give CPU-Z enough time to generate output
$smtpServer = "" #your SMTP server
$msg = New-Object Net.Mail.MailMessage
$smtp = New-Object Net.Mail.SmtpClient($smtpServer)
$msg.From = "" #sender address
$msg.To.Add("") #recipient address
$msg.Subject = "[CPU-Z] $env:COMPUTERNAME"
#Keep only the lines that show rated vs. actual clock speed
$msg.Body = (Get-Content "$env:COMPUTERNAME.txt" | Where-Object {$_ -match "(Specification|Core Speed|Stock frequency)"}) -join "`r`n"
$smtp.Send($msg)

Script level upgrade for database ‘master’ failed

A few weeks ago one of our clustered server nodes blue-screened, and when it came back up SQL refused to start. I started digging in and this is what I found in the Event Log: “Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 200, state 7, severity 25.” Immediately I started to suspect a corrupt master database. And I found this in the SQL Error Log:

Creating procedure sp_sqlagent_get_perf_counters…
Error: 468, Severity: 16, State: 9.
Cannot resolve the collation conflict between “SQL_Latin1_General_CP1_CI_AS” and “Latin1_General_CI_AS” in the equal to operation.
Error: 912, Severity: 21, State: 2.
Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 200, state 7, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
Error: 3417, Severity: 21, State: 3.
Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
SQL Trace was stopped due to server shutdown. Trace ID = ‘1’. This is an informational message only; no user action is required.

It turns out that the collation of msdb did not match master. Master was using SQL_Latin1_General_CP1_CI_AS, while only msdb and one other database were using Latin1_General_CI_AS. The upgrade script that SQL Server ran at startup performed a join across those two databases, resulting in the error. I'm still not sure why this script ran, as no service packs / cumulative updates had been applied to this server recently and it was originally installed as 2008, not an upgrade from 2005. With the upgrade script interrupted, the master db was left unusable.

I started going through the steps to restore master. I was able to start the server in single user mode, yet I could never get in as the single user. Something seemed to always be taking the connection before I could get in.
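One trick that can help with connection stealing is limiting single-user connections to a specific client application name via the -m startup parameter. A sketch for a standalone default instance, assuming sqlcmd is the tool you'll connect with (a clustered instance would take the parameter on the SQL Server cluster resource instead of net start):

```
REM Start the instance in single-user mode, admitting only clients named "SQLCMD"
net stop MSSQLSERVER
net start MSSQLSERVER /m"SQLCMD"
sqlcmd -S MyServer -E
```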

I had a long, long conversation with the #sqlhelp folks on Twitter about this. All of these guys and gals were tossing out help: @sqlinsaneo @kbriankelly @darrelllandrum @rusanu @DBArgenis @Kendra_Little @mrdenny @Bugboi @SQLSoldier. We stopped all services that were known to connect, stopped the browser service, changed the port, allowed only IP connections, paused the cluster node I was working on, and still I got an error that only one admin is allowed at a time in single user mode.

Since I was getting nowhere fast, I decided on a different tack, namely to rebuild master and then recover from backup. Rebuilding master was a fairly quick process. An important note about rebuilding master is that it also rebuilds msdb, so now I had two databases to recover from backup. Once the rebuild was complete, I was able to start the service in single user mode and connect (yay!!!). Now I could restore a backup of master, restart SQL normally, and restore msdb. I finally had a running instance 2.5 hours later.
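For reference, the rebuild runs from the original installation media; a sketch of the 2008-era syntax (instance, account, and password values are placeholders), which also lets you pick the system database collation so master and msdb come out matching:

```
Setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER ^
    /SQLSYSADMINACCOUNTS="DOMAIN\DBAdmins" /SAPWD=StrongPasswordHere ^
    /SQLCOLLATION=SQL_Latin1_General_CP1_CI_AS
```

/SAPWD is only needed on mixed-mode instances, and the rebuild wipes everything in master and msdb, so have those backups ready before you start.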

Important lessons learned from this experience:

  • If single user mode isn’t working and you have a good backup of msdb, rebuild then recover. Rebuilding might fix some issues and get you on the road to recovery quicker.
  • Collations of all the system databases really should match. This whole issue was because of a collation conflict between master and msdb.
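This kind of mismatch is easy to spot ahead of time. A minimal check along these lines lists any database whose collation differs from the server (i.e. master) collation:

```sql
SELECT name, collation_name, SERVERPROPERTY('Collation') AS server_collation
FROM sys.databases
WHERE collation_name <> CAST(SERVERPROPERTY('Collation') AS sysname);
```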

Sync SSMS Tools Pack Across Computers Using SyncToy

My favorite and most used feature of the SSMS Tools Pack is the query history. This feature has saved me a lot of time and effort. DBAs where I work are assigned both a desktop for normal day-to-day operations and a laptop for when we're at home, in meetings, traveling, etc. Today I ran into a scenario that made me want to keep the query history of both machines in sync. I asked Mladen Prajdic, the creator, about syncing across computers, and he directed me to a blog post written by REAL SQL Guy that does just that. Unfortunately, his method uses Dropbox, which is blocked at work, so I needed an alternative. I use SyncToy from Microsoft regularly to keep other folders in sync, so I decided to do the same for this project. Using SyncToy manually means that both machines need to be powered on and able to see each other on the network, since there is no intermediary.

The first thing I did was to share the C:\SSMSTools folder on each machine so I can trigger the sync from whichever computer I’m sitting at. I then created a folder pair on each machine specifying the “Left” folder as the remote shared folder and the “Right” folder as the local one. I made sure to use the “Synchronize” setting so that changes can go both ways. Give it a name, click Preview and Run. Now I have my entire history from both machines available from either one.


Prevent SQL Logins from using SSMS … or any other app

This post is a continuation of my previous LOGON TRIGGER post.

A downside to SQL logins is that anyone with the username / password combination can log in as that user. This prevents us from knowing who truly logged in and performed some action. It is possible to trace and log everything by IP address and correlate back to that.

Another downside is that these accounts are usually service accounts, at least in our environment. These SQL logins may have more permissions than they should because they are part of a vendor's product, and the vendor's app requires them to have sysadmin on the server. We have an issue with some of our users logging in as these SQL logins and running queries or manipulating data that they shouldn't.

So how can we allow the login to still exist, but deny normal users from logging in under those credentials? There are a few different ways. One is isolating these logins to specific endpoints that allow only that user from a given IP range. Another option, and the one that made sense here, is a LOGON TRIGGER. This gives us a great amount of flexibility in deciding when to prevent a connection from happening.

In this case, we wanted any SQL login using Management Studio to be denied access. Any of the other apps that log in with this account, including SqlCmd, Access, etc., will still work. Our main problem, though, was devs coming in under SSMS. We also wanted to log any login attempts using this method.

This script creates a logging table in our management database (T1001), creates a view to make querying it a little nicer, then creates the logon trigger to reject any connections that match our criteria.

-- Create table to hold rejected login attempts
USE T1001;
GO
CREATE TABLE T1001.SQLTrace.loginData (
	data XML,
	program_name sysname
);
GO
-- Create view to make querying the table a little nicer
CREATE VIEW SQLTrace.loginDataView
AS
SELECT program_name
      ,data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname') AS EventType
      ,data.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime') AS PostTime
      ,data.value('(/EVENT_INSTANCE/SPID)[1]', 'int') AS SPID
      ,data.value('(/EVENT_INSTANCE/ServerName)[1]', 'nvarchar(257)') AS ServerName
      ,data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname') AS LoginName
      ,data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname') AS LoginType
      ,data.value('(/EVENT_INSTANCE/SID)[1]', 'nvarchar(85)') AS SID
      ,data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'sysname') AS ClientHost
      ,data.value('(/EVENT_INSTANCE/IsPooled)[1]', 'bit') AS IsPooled
FROM SQLTrace.loginData;
GO

-- Create logon trigger
USE [master];
GO
/****** Object: DdlTrigger [Deny_SQLLogin_SSMS_Trigger] ******/
IF EXISTS (
	SELECT * FROM master.sys.server_triggers
	WHERE parent_class_desc = 'SERVER'
		AND name = N'Deny_SQLLogin_SSMS_Trigger')
	DROP TRIGGER [Deny_SQLLogin_SSMS_Trigger] ON ALL SERVER;
GO
CREATE TRIGGER [Deny_SQLLogin_SSMS_Trigger]
ON ALL SERVER WITH EXECUTE AS 'sa' --needs rights to read sys.dm_exec_sessions
FOR LOGON
AS
BEGIN
	DECLARE @data XML = EVENTDATA();
	DECLARE @AppName sysname
	       ,@LoginName sysname
	       ,@LoginType sysname;

	SELECT @AppName = [program_name]
	FROM sys.dm_exec_sessions
	WHERE session_id = @data.value('(/EVENT_INSTANCE/SPID)[1]', 'int');

	SELECT @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
	      ,@LoginType = @data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname');

	IF @AppName LIKE ('Microsoft SQL Server Management Studio%') --If it's SSMS
	   AND @LoginName <> 'sa'
	   AND @LoginType = 'SQL Login' --('SQL Login' | 'Windows (NT) Login')
	BEGIN
		ROLLBACK; --Disconnect the session
		--Log the exception to our table
		INSERT INTO T1001.SQLTrace.loginData(data, program_name)
		VALUES(@data, @AppName);
	END
END;
GO

Don’t forget to create a job to purge the logged data at whatever interval works for you.

Locking While Dropping or Altering an Index

Yesterday I was trying to drop some hypothetical indexes in production on a fairly active table. I started causing blocking, so I had the bright idea of disabling the indexes first, then dropping them. Well, that didn't help, even with DEADLOCK_PRIORITY set to LOW. I ended up waiting until early morning to remove those indexes to avoid blocking other users.

Finding no info on the web about the locks taken during the process of dropping or disabling an index, I set about doing this small bit of research.

The queries I used:
In Window 1 (Execute first)
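The original Window 1 query held the lock open; a minimal stand-in, assuming a scratch table named dbo.myTable in a sandbox database:

```sql
BEGIN TRANSACTION;
--TABLOCKX takes an exclusive table lock and holds it until the transaction ends
SELECT TOP (1) * FROM dbo.myTable WITH (TABLOCKX);
--leave the transaction open (no COMMIT) while running Window 2
```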


* I use TABLOCKX to simulate many updates going to this table.

In Window 2

DROP INDEX [ix_myTable_testIndex] ON myTable

Using Adam Machanic’s (blog | twitter) sp_WhoIsActive I was able to get the lock details easily.

EXEC sp_WhoIsActive @get_locks = 1;

When you drop an index, you end up trying to acquire a Sch-M, or Schema Modification, lock.

<Database name="Sandbox">
    <Lock request_mode="S" request_status="GRANT" request_count="1" />
    <Object name="myTable" schema_name="dbo">
        <Lock resource_type="OBJECT" request_mode="Sch-M" request_status="WAIT" request_count="1" />
    </Object>
</Database>

Then the next select query comes along and tries to grab an IS, or Intent Shared, lock.

<Database name="Sandbox">
    <Lock request_mode="S" request_status="GRANT" request_count="1" />
    <Object name="myTable" schema_name="dbo">
        <Lock resource_type="OBJECT" request_mode="IS" request_status="WAIT" request_count="1" />
    </Object>
</Database>

So at this point yesterday, I was thinking, maybe I can disable the index and then drop it. Well, ALTER INDEX … DISABLE also waits on a Sch-M lock before it can proceed. I'm not any better off. Even after disabling the index, SQL Server still tries to grab a Sch-M lock to drop the disabled index. All these rules also seem to apply to hypothetical indexes. I'm sure there's some internals reason for this, but why worry about locks when dropping a disabled or hypothetical index, since neither is used by active queries?

Lesson learned, drop indexes when the table usage is low.


Disable SQL Agent Jobs with PowerShell

I had a need today to disable all of our admin jobs while I moved our admin database to another file location. We didn’t want to get a lot of alerts or job failures during the move, even though it took less than 5 minutes. This script runs best in sqlps or by adding the appropriate snap-ins to your regular console.

cd SQLSERVER:\SQL\[server]\[instance]\JobServer\Jobs
dir | ?{$_.Name.StartsWith('!Admin')} | %{$_.IsEnabled = $false; $_.Alter()} #rerun with $true to enable

Get all SERVERPROPERTY values for SQL Server

A quick script to grab all the SERVERPROPERTY() values from a SQL instance in a table. Useful when you forget what the propertyname options are. Properties obtained from

DECLARE @props TABLE (propertyname sysname PRIMARY KEY)

INSERT INTO @props(propertyname)
          SELECT 'BuildClrVersion'
UNION ALL SELECT 'Collation'
UNION ALL SELECT 'CollationID'
UNION ALL SELECT 'ComparisonStyle'
UNION ALL SELECT 'ComputerNamePhysicalNetBIOS'
UNION ALL SELECT 'Edition'
UNION ALL SELECT 'EditionID'
UNION ALL SELECT 'EngineEdition'
UNION ALL SELECT 'InstanceName'
UNION ALL SELECT 'IsClustered'
UNION ALL SELECT 'IsFullTextInstalled'
UNION ALL SELECT 'IsIntegratedSecurityOnly'
UNION ALL SELECT 'IsSingleUser'
UNION ALL SELECT 'LicenseType'
UNION ALL SELECT 'MachineName'
UNION ALL SELECT 'NumLicenses'
UNION ALL SELECT 'ProcessID'
UNION ALL SELECT 'ProductVersion'
UNION ALL SELECT 'ProductLevel'
UNION ALL SELECT 'ResourceLastUpdateDateTime'
UNION ALL SELECT 'ResourceVersion'
UNION ALL SELECT 'ServerName'
UNION ALL SELECT 'SqlCharSet'
UNION ALL SELECT 'SqlCharSetName'
UNION ALL SELECT 'SqlSortOrder'
UNION ALL SELECT 'SqlSortOrderName'
UNION ALL SELECT 'FilestreamShareName'
UNION ALL SELECT 'FilestreamConfiguredLevel'
UNION ALL SELECT 'FilestreamEffectiveLevel'

SELECT propertyname, SERVERPROPERTY(propertyname) AS propertyvalue
FROM @props