Category Archives: sql

Display All SQL DateTime Conversions

If you want a quick way to determine which format, or style, to specify when converting a datetime value in SQL to a character data type, you can use this simple PowerShell script. Or you can look at the MSDN page.

0..200 | %{
    # Not every number in 0..200 is a valid style; suppress the errors for those
    Invoke-Sqlcmd -ServerInstance . -Query ("SELECT num = {0}, converted = CONVERT(varchar(100), GETDATE(), {0})" -f $_) -ErrorAction SilentlyContinue
} | ft -AutoSize
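For reference, a few of the more common style numbers map to patterns you may already know from other languages. Here is a sketch in Python showing strftime equivalents for a small, hand-picked subset (the mapping below is illustrative only and covers just three of the styles the script above will print):

```python
from datetime import datetime

# A few well-known CONVERT style numbers and their rough strftime equivalents.
# Illustrative subset only -- the PowerShell script above prints the full range.
STYLES = {
    101: "%m/%d/%Y",           # US: mm/dd/yyyy
    112: "%Y%m%d",             # ISO: yyyymmdd
    120: "%Y-%m-%d %H:%M:%S",  # ODBC canonical
}

def sql_convert_preview(dt, style):
    """Format dt roughly the way CONVERT(varchar(100), dt, style) would."""
    return dt.strftime(STYLES[style])

d = datetime(2011, 5, 11, 13, 5, 9)
print(sql_convert_preview(d, 112))  # 20110511
print(sql_convert_preview(d, 120))  # 2011-05-11 13:05:09
```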

Get All SESSIONPROPERTY Values for Your Session

As a follow-up to my SERVERPROPERTY() post, this is a quick script to grab all the SESSIONPROPERTY() values from your current session in a table. Useful when you forget what the propertyname options are. The property names below come from the MSDN documentation for SESSIONPROPERTY().

DECLARE @props TABLE (propertyname sysname PRIMARY KEY)
INSERT INTO @props(propertyname)
SELECT 'ANSI_NULLS' UNION ALL
SELECT 'ANSI_PADDING' UNION ALL
SELECT 'ANSI_WARNINGS' UNION ALL
SELECT 'ARITHABORT' UNION ALL
SELECT 'CONCAT_NULL_YIELDS_NULL' UNION ALL
SELECT 'NUMERIC_ROUNDABORT' UNION ALL
SELECT 'QUOTED_IDENTIFIER'
SELECT propertyname, propertyvalue = SESSIONPROPERTY(propertyname) FROM @props

Prevent SQL Logins from using SSMS … or any other app

In a continuation of my previous LOGON TRIGGER post.

A downside to SQL logins is that anyone with the username / password combination can log in as that user. This prevents us from knowing who truly logged in and performed some action. It is possible to trace and log everything by IP address and correlate back to that, but it is cumbersome.

Another downside is that these accounts are usually service accounts, at least in our environment. These SQL logins may have more permissions than they should because they are part of a vendor’s product, and the vendor’s app requires them to have sysadmin on the server. We have an issue with some of our users logging in as these SQL logins and running queries or manipulating data that they shouldn’t be.

So how can we allow the login to still exist, but deny normal users from logging in under those credentials? There are a few different ways: one is isolating these logins to specific endpoints that allow only that user from a given IP range. Another option, and the one that made sense here, is a LOGON TRIGGER. This gives us a great amount of flexibility in deciding when to prevent a connection from happening.

In this case, we wanted any SQL login using Management Studio to be denied access. Any of the other apps that log in with this account will still work, including SqlCmd, Access, etc. Our main problem, though, was devs coming in under SSMS. We also wanted to log any login attempts using this method.

This script creates a logging table in our management database (T1001), creates a view to make querying a little nicer, then creates the logon trigger to reject any connections that match our criteria.

-- Create table to hold rejected login attempts
USE T1001;
GO
CREATE TABLE SQLTrace.loginData (
	data XML,
	program_name sysname
);
GO
-- Create view to make querying the table a little nicer
CREATE VIEW SQLTrace.loginDataView
AS
SELECT data
      ,data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname') AS EventType
      ,data.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime') AS PostTime
      ,data.value('(/EVENT_INSTANCE/SPID)[1]', 'int') AS SPID
      ,data.value('(/EVENT_INSTANCE/ServerName)[1]', 'nvarchar(257)') AS ServerName
      ,data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname') AS LoginName
      ,data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname') AS LoginType
      ,data.value('(/EVENT_INSTANCE/SID)[1]', 'nvarchar(85)') AS SID
      ,data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'sysname') AS ClientHost
      ,data.value('(/EVENT_INSTANCE/IsPooled)[1]', 'bit') AS IsPooled
FROM SQLTrace.loginData;
GO
-- Create logon trigger
USE [master];
GO
/****** Object: DdlTrigger [Deny_SQLLogin_SSMS_Trigger] ******/
IF EXISTS (
	SELECT * FROM master.sys.server_triggers
	WHERE parent_class_desc = 'SERVER'
		AND name = N'Deny_SQLLogin_SSMS_Trigger')
	DROP TRIGGER [Deny_SQLLogin_SSMS_Trigger] ON ALL SERVER;
GO
CREATE TRIGGER [Deny_SQLLogin_SSMS_Trigger]
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
DECLARE @data XML = EVENTDATA();
DECLARE @AppName sysname
       ,@LoginName sysname
       ,@LoginType sysname;

SELECT @AppName = [program_name]
FROM sys.dm_exec_sessions
WHERE session_id = @data.value('(/EVENT_INSTANCE/SPID)[1]', 'int');

SELECT @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
      ,@LoginType = @data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname');

IF @AppName LIKE ('Microsoft SQL Server Management Studio%') --If it's SSMS
   AND @LoginName <> 'sa'
   AND @LoginType = 'SQL Login' --('SQL Login' | 'Windows (NT) Login')
BEGIN
	ROLLBACK; --Disconnect the session
	--Log the exception to our table
	INSERT INTO T1001.SQLTrace.loginData(data, program_name)
	VALUES(@data, @AppName);
END
END;
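The same EVENT_INSTANCE fields the view pulls out with XQuery can be extracted from the logged XML in any client language, which is handy when analyzing the table offline. A minimal sketch in Python using a made-up sample payload (the values here are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Sample EVENTDATA() payload; element names match the view's XQuery paths,
# but the values are made up for illustration.
sample = """
<EVENT_INSTANCE>
  <EventType>LOGON</EventType>
  <PostTime>2011-05-11T08:00:00</PostTime>
  <SPID>52</SPID>
  <ServerName>MYSERVER</ServerName>
  <LoginName>svc_vendor</LoginName>
  <LoginType>SQL Login</LoginType>
  <ClientHost>10.0.0.42</ClientHost>
</EVENT_INSTANCE>
"""

root = ET.fromstring(sample)

def field(name):
    """Mirror data.value('(/EVENT_INSTANCE/<name>)[1]', ...) from the view."""
    node = root.find(name)
    return node.text if node is not None else None

print(field("LoginName"), field("LoginType"))  # svc_vendor SQL Login
```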

Don’t forget to create a job to purge historical log data at sufficient intervals for you.

Locking While Dropping or Altering an Index

Yesterday I was trying to drop some hypothetical indexes in production against a fairly active table. I started causing blocking, so I had the bright idea of disabling the indexes first, then dropping them. Well, that didn’t help, even when setting DEADLOCK_PRIORITY to LOW. I ended up waiting until early morning to remove those indexes to avoid blocking other users.

Finding no info on the web about the locks taken during the process of dropping or disabling an index, I set about doing this small bit of research.

The queries I used:
In Window 1 (Execute first)

BEGIN TRAN
SELECT TOP 1 * FROM myTable WITH (TABLOCKX)

* I use TABLOCKX to simulate many updates going to this table. The transaction is left open so the exclusive table lock is held.

In Window 2

DROP INDEX [ix_myTable_testIndex] ON myTable

Using Adam Machanic’s (blog | twitter) sp_WhoIsActive I was able to get the lock details easily.

sp_whoisactive @get_locks = 1

When you drop an index, you end up trying to acquire a Sch-M (Schema Modification) lock.

<Database name="Sandbox">
    <Lock request_mode="S" request_status="GRANT" request_count="1" />
    <Object name="myTable" schema_name="dbo">
        <Lock resource_type="OBJECT" request_mode="Sch-M" request_status="WAIT" request_count="1" />
    </Object>
</Database>

Then the next select query comes along and tries to grab an IS (Intent Shared) lock:

<Database name="Sandbox">
    <Lock request_mode="S" request_status="GRANT" request_count="1" />
    <Object name="myTable" schema_name="dbo">
        <Lock resource_type="OBJECT" request_mode="IS" request_status="WAIT" request_count="1" />
    </Object>
</Database>

So at this point yesterday, I was thinking, maybe I can disable the index and then drop it. Well, ALTER INDEX … DISABLE also waits on a Sch-M lock before it can proceed, so I’m no better off. Even after disabling the index, SQL Server still tries to grab a Sch-M lock to drop the disabled index. All these rules also seem to apply to hypothetical indexes. I’m sure there’s some internals reason why it does this, but why worry about locks when dropping a disabled or hypothetical index, since neither is used by active queries?
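The behavior above falls out of lock compatibility: Sch-M conflicts with everything, including the IS locks that ordinary SELECTs take. A simplified compatibility check in Python, covering only the modes discussed in this post (SQL Server's real matrix has many more modes):

```python
# Simplified lock-compatibility matrix for the modes in this post.
# True means the two modes can be held on the same object at the same time.
COMPATIBLE = {
    ("IS", "IS"): True,
    ("IS", "S"): True,
    ("S", "S"): True,
    ("IS", "Sch-M"): False,   # a SELECT's IS lock blocks, and is blocked by, Sch-M
    ("S", "Sch-M"): False,
    ("Sch-M", "Sch-M"): False,
}

def compatible(a, b):
    # The matrix is symmetric, so check both orderings.
    return COMPATIBLE.get((a, b), COMPATIBLE.get((b, a), False))

# DROP INDEX and ALTER INDEX ... DISABLE both need Sch-M, so they queue
# behind any granted S or IS locks -- and everyone else queues behind them.
print(compatible("IS", "Sch-M"))  # False
print(compatible("IS", "S"))      # True
```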

Lesson learned, drop indexes when the table usage is low.


Get all SERVERPROPERTY values for SQL Server

A quick script to grab all the SERVERPROPERTY() values from a SQL instance in a table. Useful when you forget what the propertyname options are. The property names below come from the MSDN documentation for SERVERPROPERTY().

DECLARE @props TABLE (propertyname sysname PRIMARY KEY)
INSERT INTO @props(propertyname)
SELECT 'BuildClrVersion' UNION ALL
SELECT 'Collation' UNION ALL
SELECT 'CollationID' UNION ALL
SELECT 'ComparisonStyle' UNION ALL
SELECT 'ComputerNamePhysicalNetBIOS' UNION ALL
SELECT 'Edition' UNION ALL
SELECT 'EditionID' UNION ALL
SELECT 'EngineEdition' UNION ALL
SELECT 'InstanceName' UNION ALL
SELECT 'IsClustered' UNION ALL
SELECT 'IsFullTextInstalled' UNION ALL
SELECT 'IsIntegratedSecurityOnly' UNION ALL
SELECT 'IsSingleUser' UNION ALL
SELECT 'LicenseType' UNION ALL
SELECT 'MachineName' UNION ALL
SELECT 'NumLicenses' UNION ALL
SELECT 'ProcessID' UNION ALL
SELECT 'ProductVersion' UNION ALL
SELECT 'ProductLevel' UNION ALL
SELECT 'ResourceLastUpdateDateTime' UNION ALL
SELECT 'ResourceVersion' UNION ALL
SELECT 'ServerName' UNION ALL
SELECT 'SqlCharSet' UNION ALL
SELECT 'SqlCharSetName' UNION ALL
SELECT 'SqlSortOrder' UNION ALL
SELECT 'SqlSortOrderName' UNION ALL
SELECT 'FilestreamShareName' UNION ALL
SELECT 'FilestreamConfiguredLevel' UNION ALL
SELECT 'FilestreamEffectiveLevel'
SELECT propertyname, SERVERPROPERTY(propertyname) FROM @props

NOLOCK for Developers

What is NOLOCK?

NOLOCK is a table hint for SQL Server that tells the engine not to take shared locks, and to ignore existing locks, when querying that table. It is equivalent to the READUNCOMMITTED table hint.

From MSDN:

Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. Allowing dirty reads can cause higher concurrency, but at the cost of reading data modifications that then are rolled back by other transactions. This may generate errors for your transaction, present users with data that was never committed, or cause users to see records twice (or not at all). For more information about dirty reads, nonrepeatable reads, and phantom reads, see Concurrency Effects.

So you might be thinking that this NOLOCK thing is great and will solve all your performance problems…

There’s a Trade-off…

What’s the cost of using the NOLOCK hint, or of setting the isolation level to READ UNCOMMITTED everywhere? You can no longer guarantee the accuracy of your queries. If a query can read data in flight, it might read data that never gets committed. It might read data twice. It might read data that’s in the process of being updated or deleted. There are a lot of mights in there. You need to ask yourself, and the person who depends on this query, whether they value accuracy or speed.

NOLOCK is a Last Resort

Many of the problems caused by locking and blocking, which NOLOCK gets around, can be solved by basic performance tuning. Limiting the data you are working with by using appropriate WHERE clauses is a good first step. SQL Server by default tries to lock data at the row level and only escalates to the table level if it determines it would be cheaper to do so. It might escalate quicker than you think, though. If two queries are operating on different rows in the same table, they will not block each other unless escalation has occurred. Note that I mention rows, not columns. If two queries are making changes to different columns in the same row, one will block the other until done. Adding appropriate indexes, or modifying existing ones, is a good second step. These can speed up readers, which lets writers in quicker.

Another alternative to using NOLOCK is to use some level of row versioning within SQL Server, such as READ_COMMITTED_SNAPSHOT and / or ALLOW_SNAPSHOT_ISOLATION. This allows more concurrency at the cost of increased tempdb usage. It does this by storing old version(s) of a row in tempdb until they are no longer needed. If you’re doing a quick read with no other connection trying to alter that data, then the lifespan in tempdb is very short. If more concurrent operations happen on that row, the lifespan can grow. Before implementing either of these levels, be sure to have a good understanding of the trade-offs of row versioning, as it might cause more problems than it solves.
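The row-versioning idea boils down to keeping prior versions of a row around so readers see a consistent snapshot without blocking writers. Here is a toy model in Python, purely to illustrate the concept (this is not how SQL Server implements its tempdb version store):

```python
# Toy version store: each row keeps a history of (version, value) pairs.
# Readers pin the version that was current when they started; writers append.
class VersionedRow:
    def __init__(self, value):
        self.versions = [(0, value)]  # (version number, value)

    def write(self, value):
        v = self.versions[-1][0] + 1
        self.versions.append((v, value))  # old versions stay for readers

    def read_as_of(self, version):
        # Return the newest value at or before the reader's snapshot version.
        for v, val in reversed(self.versions):
            if v <= version:
                return val

row = VersionedRow("original")
snapshot = row.versions[-1][0]   # a reader takes its snapshot here
row.write("updated")             # the writer is not blocked by the reader
print(row.read_as_of(snapshot))  # original
```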

The NOLOCK Resort

Earlier I said to use NOLOCK as a last resort… in transactional workloads. NOLOCK can be a good fit when you’re reporting out of your transactional database. Reporting is better done from a reporting copy of the data, but sometimes we don’t have that luxury and reporting must be done against the live data. In most cases, we want the transactional workload to take priority over the reports. Telling our reports not to take locks allows the transactions to continue unhindered by locks, though they may still be fighting for CPU and I/O resources. The same rules apply about speed vs. accuracy, but reports usually don’t have to be accurate to the second or to the penny. In this case, NOLOCK may make sense. Just make sure your report consumers are aware that there might be a margin of error.

Further Reading:

Kendra Little (blog | twitter) made up a poster that helps visualize the different isolation levels. She also put together a great list of links about NOLOCK and the other isolation levels.

Paul Randal (blog | twitter) has some great content on locking. He also does a good job of explaining locking & blocking in this video.

Kimberly Tripp (blog | twitter) has more info about indexing.

PowerShell Script for Paul’s Survey: how is your tempdb configured?

Paul Randal (b | t) posted a survey how is your tempdb configured? Since I just happen to be at a SQLskills event this week I went ahead and created a PowerShell script to get the info across all your servers.

You’ll need a list of servers in a text file ($serverfile), one per line. It will output the list to ‘sqlskills – tempdb proc, file count results.txt’ in the directory from which it ran. Send that file, or the results, to Paul.

$serverfile = 'SQL_Servers_PROD.txt'
$query = "SELECT os.Cores, df.Files
FROM
   (SELECT COUNT(*) AS Cores FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE') AS os,
   (SELECT COUNT(*) AS Files FROM tempdb.sys.database_files WHERE type_desc = 'ROWS') AS df;"
$dt = $null
#Get list of servers from SQL_Servers_PROD.txt
#Exclude any lines that start with # or are blank
gc $serverfile | ?{ -not $_.StartsWith("#") -and $_ -ne "" } | %{
    $tempdbstats = Invoke-Sqlcmd -Query $query -ServerInstance $_ -SuppressProviderContextWarning
    if ($dt -eq $null) {
        $dt = , $tempdbstats
    } else {
        $dt += $tempdbstats
    }
}
$dt | ft -AutoSize | Out-File 'sqlskills - tempdb proc, file count results.txt' -Encoding UTF8

The output looks like this:

Cores Files
----- -----
   16     1
   16     8
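A widely cited rule of thumb is to start with one tempdb data file per core, capped at eight, then tune from there (the cap of eight here is an assumption of that rule of thumb; see Paul's post for his actual guidance). A quick Python check flagging servers from output shaped like the sample above:

```python
def recommended_tempdb_files(cores, cap=8):
    """Rule-of-thumb starting point: one data file per core, capped.
    The default cap of 8 is an assumption, not official guidance."""
    return min(cores, cap)

# (cores, files) pairs as produced by the survey script above
servers = [(16, 1), (16, 8)]
for cores, files in servers:
    rec = recommended_tempdb_files(cores)
    status = "OK" if files >= rec else "consider %d files" % rec
    print(cores, files, status)
```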

SQL Saturday #57 Slides, Code & Video

This past weekend I spoke at SQL Saturday #57 in Houston, TX.

I want to thank the local PASS chapter HASSUG, the organizers Nancy Wilson (blog | twitter), Jonathan Gardner (blog | twitter), Malik Al-Amin (twitter), the volunteers, the other speakers, and the attendees. It was a great event with a great venue.

I especially want to thank those attendees who chose my sessions out of the many other great ones.

Slides, Code & Video

So What Can I Do With PowerShell?

ORMs for the DBA

Download Video (mp4)
*Thanks to the MidnightDBAs for lending me their mic for this recording.

A Seat at Two Masters’ Table

Paul Randal (blog | twitter) blogged about winning a free seat at a SQLskills Master Immersion Event on Internals and Performance in Dallas.

Since this is Paul and Kimberly judging, I’ll be writing it explicitly for them as “It Depends.”

So, if their judging style is based on humor along with Paul / Kimberly / Brent / SQL memes, then here is that entry.

Come on, it’s Paul and Kimberly, one of the greatest sheep herders ever along with the greatest sheep herder herder. Wouldn’t you want to take the opportunity to learn how sheep can be trained to process that much data that fast? There might even be a green fisted Lady Gaga impersonator punching the bacon harder.

After the class, I’d love to go around helping others get the best performance out of their livestock. So many have problems with corrupt sheep or dead sheep and there’s just not enough of the snippy Scotsman to go around. I’d like to also use it as a starting point to becoming a Certified Master in Sheep Query Lawn Server.

Otherwise, if they are judging on technical merit:

I want to learn everything I can about the tool I chose to base my profession on. Paul & Kimberly are recognized as some of the most knowledgeable people about the internals of that tool. The pair are also very good at teaching and spreading the wisdom and experience they have gained. If I can glean a portion of that, then I can enhance my performance as a SQL Server professional. This would also be a large first step toward becoming an MCM for SQL Server.

One thing I really like doing is helping people. I like solving problems for people who cannot solve them themselves. It’s something I’ve done as long as I can remember. I want to take the knowledge gained and use it to help people. I know that companies are usually the ones that require the technical help, but those companies are made up of people trying to do what they do best, who maybe can’t learn this material at the depth we can. The people suffer if the company does.

Another way to help is to teach others. Professionals like to learn. Teaching a professional or an eager student is a wonderful experience. Knowing that you taught them something new, clarified a portion of thought, or brought a new perspective is very rewarding. This training would help me be a deeper tome from which others can learn.

So Paul, Kim… I’d like a seat at your table when you come to Dallas.