An Amusing New Chapter of My Career

March 29th, 2012 1 comment

My New Office

Starting next week, the first week of April, this will be my new office. Well, not really, but I’ll be working for the company that hosts this coaster: Six Flags. I’ll be overseeing all the SQL Server systems for the entire organization. That means the back office stuff, plus everything it takes to run all 19 parks. This will offer an interesting set of challenges that I haven’t had to face before. One example is uptime. While the park is open, availability must be 100%, because many times we won’t get a second chance to work with a customer; they’ve gone home, which could be in a different state. The upside is that the parks don’t usually stay open 24 hours, which helps with maintenance windows.

This will be an exciting change of pace for me: working for an organization whose purpose is to entertain and amuse. I look forward to what this change will bring. And if I ever have a stressful day, I can go to the nearby park and ride a coaster to chill out. I’m sure I’ll be taking advantage of that a lot.

Categories: job Tags:

T-SQL Tuesday #028 – Jack of All Trades, i.e. Integration Specialist

March 13th, 2012 1 comment

Argenis Fernandez (blog | twitter) is hosting the latest T-SQL Tuesday and asking about specialization.

For the earlier part of my career I was doing application development while really wanting to do database work. I also learned plenty about systems administration while in college. That variety of tasks made me a Jack of All Trades with a lean toward databases. I wanted to specialize as a DBA or database developer, but those jobs were never there until I moved to a bigger market.

When I was able to specialize, I was happy I could focus on one thing and get really deep with it. Now I am able to get deep with SQL Server, but I find myself constantly recalling information from my previous positions to help with current issues. CLR, PowerShell, and little custom-written utilities all draw from my .NET experience, which I am grateful for. I run into other DBAs who have no clue what goes on outside of their world. Having multiple skills lets me create better solutions because I consider not just what I’m responsible for, but all the connections to my piece as well.

Being a generalist early in my career gave me a better understanding of the whole stack of any given solution. It is easier for me to spot an integration problem, or a problem with an application’s implementation, than it would have been without that variety of experience. Jacks of All Trades tend to be better at seeing the big picture because they don’t focus on just one area; they can be the ones who help everything fit together. Specialists are an important part of our ecosystem, but without someone to put the pieces together, you would have perfect components in a flawed system. Solutions really are greater than the sum of their parts.

Categories: dba Tags:

Extract CLR Assemblies from SQL Server

March 2nd, 2012 4 comments

I’ve run into a few situations that required examining the existing CLR assemblies on a server, whether to compare two versions to make sure they are the same or to confirm something in the assembly itself. This script has come in handy for that. Point it at a database, give it an output path, and it will save all the assemblies in that database as DLLs in the given folder. You can then use a .NET disassembler to confirm suspicions, or a binary comparison to make sure the DLL matches what it should.

<#
    .SYNOPSIS
        Extracts CLR Assemblies from a SQL 2005+ database.
 
    .DESCRIPTION
        Extracts CLR Assemblies from a SQL 2005+ database.
 
    .PARAMETER  ServerInstance
        The Server\Instance to connect to
    .PARAMETER  Database
        The Database to extract assemblies from
    .PARAMETER  OutPath
        The path to output the assemblies
 
    .EXAMPLE
        PS C:\> .\Get-SqlAssemblies.ps1 -ServerInstance 'MyServer\MyInstance' -Database 'MyDatabase'
        This example shows how to call Get-SqlAssemblies with named parameters.
 
    .INPUTS
        System.String
 
    .NOTES
        Requires the SQL Server PowerShell provider (SQLPS) for the
        SQLSERVER:\ drive.
 
#>
param(
    [string]$ServerInstance = 'LOCALHOST',
 
    [Parameter(Mandatory=$true)]
    [string]$Database,
 
    [string]$OutPath = '.'
)
 
#Correct for variations of incoming ServerInstance names
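#Default instances need '\DEFAULT' appended to form a valid provider path, and a
#port in the name ('Server,1433') needs the comma escaped as a backtick-comma.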
if(-not $ServerInstance.Contains('\')) {$ServerInstance += '\DEFAULT'}
if($ServerInstance.Contains(',') -and -not $ServerInstance.Contains('`,')) {$ServerInstance = $ServerInstance.Replace(',', '`,')}
 
dir SQLSERVER:\SQL\$ServerInstance\Databases\$Database\Assemblies | %{
    $_.SqlAssemblyFiles | %{
        $str = $_.name
        $path = Join-Path $OutPath ($str.Substring($str.LastIndexOf('\')+1))
        Set-Content -Path $path -Value $_.GetFileBytes() -Encoding byte;
    }
}
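
As a quick usage sketch, you can pull everything down and then compare or disassemble; the paths and assembly name here are made up for illustration:

#Extract all assemblies, then binary-compare one against a known-good copy.
#All paths and the assembly name below are examples only.
.\Get-SqlAssemblies.ps1 -ServerInstance 'MyServer\MyInstance' -Database 'MyDatabase' -OutPath 'C:\Temp\asm'
fc.exe /B C:\Temp\asm\MyAssembly.dll C:\Builds\MyAssembly.dll
 
#Or dump the IL, if ildasm.exe from the .NET SDK is on your path.
ildasm /TEXT C:\Temp\asm\MyAssembly.dll
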
Categories: clr, dba, powershell Tags:

Split Typeperf Output in More Manageable Chunks

February 17th, 2012 4 comments

I really like using typeperf for perfmon counter collection. It lets me save a collection of counters to monitor and store those readings in a CSV file for later analysis. Sometimes I end up running the output through the PAL tool. Unfortunately, PAL generates graphs that are fairly narrow, so monitoring sessions of long duration come out really cramped. I wanted a way to split the typeperf output to get a reasonable number of data points per graph. A side benefit is that each file processes a lot quicker.
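
For context, a collection session along these lines is what produces the CSV files I’m talking about; the counter file, interval, and output name are placeholders rather than my actual setup:

#Sample the counters listed in counters.txt every 15 seconds into a CSV file.
typeperf -cf counters.txt -si 15 -f CSV -o SqlCounters.csv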

The script takes a filepath as its only argument. It splits by the hour and copies the header row to each new file.

param (
	[string]$filepath #incoming file
)
 
#Does the file exist
if (Test-Path $filepath) {
	$infile = Get-Item $filepath
	$data = Get-Content $infile
}
#if not, exit the script
else {
	Write-Warning "Failed to find $filepath"
	exit
}
 
#Get the header to be able to repeat it in each file
$header = $data | Select-Object -First 1
$lastHour = $null
$outFile = $null
 
#Loop through the data, skipping the header line.
$data | Select-Object -Skip 1 | ForEach-Object {
	$date = [DateTime]::Parse([string]$_.Substring(1, [string]$_.IndexOf('"',1)-1))
	if($lastHour -eq $null -or $date.Hour -ne $lastHour.Hour -or $outFile -eq $null) {
		$lastHour = $date.AddMinutes(-$date.Minute).AddSeconds(-$date.Second)
		$outFile = Join-Path $infile.Directory ("{0}_{1}{2}" -f $infile.BaseName, $lastHour.ToString('yyyyMMdd_HHmmss'), $infile.extension)
		$header | Out-File $outFile -Encoding UTF8
	}
	$_ | Out-File $outFile -Encoding UTF8 -Append
}

Update: I cleaned up the script a little based on Jeff’s comments below.

Categories: dba, powershell Tags:

Automate CPU-Z Capture to Check for Throttled Processors

February 16th, 2012 No comments

Several professionals have posted about checking whether your processors are running at full speed by using CPU-Z. Some recommendations are to check your servers every couple of months or, if virtual, every time your guest moves. Me being lazy efficient, I’d rather automate having these servers send me their info on a scheduled basis.

First things first: copy CPU-Z out to all your servers, preferably to a consistent folder. Then edit the cpuz.ini file to turn off the extra scanning (we just need the CPU info):

ACPI=0
PCI=0
DMI=0
Sensor=0
SMBus=0
Display=0
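
If you have more than a handful of servers, a bit of PowerShell can push the files out for you. This is just a sketch; the server list and destination path are assumptions to adapt to your environment.

#Copy cpuz64.exe and the trimmed cpuz.ini to a consistent folder on each server.
#servers.txt and the c$\Tools\cpuz destination are placeholders.
$servers = Get-Content .\servers.txt
foreach ($server in $servers) {
    $dest = "\\$server\c$\Tools\cpuz"
    if (-not (Test-Path $dest)) { New-Item -Path $dest -ItemType Directory | Out-Null }
    Copy-Item .\cpuz64.exe, .\cpuz.ini -Destination $dest -Force
}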

Save the following as a script and schedule it to run monthly or so using your favorite scheduler. The email portion was taken from this post.

& '.\cpuz64.exe' "-txt=$env:COMPUTERNAME"
Start-Sleep -Seconds 15 #Give CPUZ enough time to generate output
 
$smtpServer = "127.0.0.1"
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient($smtpServer)
$msg.From = "emailadmin@test.com"
$msg.To.Add("administrator1@test.com")
$msg.To.Add("administrator2@test.com")
 
$msg.Subject = "[CPU-Z] $env:COMPUTERNAME"
$msg.Body = (gc "$env:COMPUTERNAME.txt" | ?{$_ -match "(Specification|Core Speed|Stock frequency)"}) -join "`r`n" #join so each matched reading keeps its own line
 
$smtp.Send($msg)
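
For the scheduling piece, the built-in Task Scheduler is enough. A schtasks call along these lines would run the script monthly; the task name, timing, and script path are placeholders:

#Run the capture script on the first of every month at 6 AM.
schtasks /create /tn "CPU-Z Capture" /sc monthly /d 1 /st 06:00 /tr "powershell.exe -NoProfile -File C:\Tools\cpuz\Send-CpuzInfo.ps1"
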
Categories: dba, powershell Tags:

Script level upgrade for database ‘master’ failed

November 9th, 2011 No comments

A few weeks ago one of our clustered server nodes blue-screened, and when it came back up SQL refused to start. I started digging in, and this is what I found in the Event Log: “Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 200, state 7, severity 25.” That immediately made me suspect a corrupt master database. Then I found this in the SQL Error Log:

Creating procedure sp_sqlagent_get_perf_counters…
Error: 468, Severity: 16, State: 9.
Cannot resolve the collation conflict between “SQL_Latin1_General_CP1_CI_AS” and “Latin1_General_CI_AS” in the equal to operation.
Error: 912, Severity: 21, State: 2.
Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 200, state 7, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
Error: 3417, Severity: 21, State: 3.
Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.

It turns out that the collation of msdb did not match master. Master was using SQL_Latin1_General_CP1_CI_AS, while only msdb and one other database were using Latin1_General_CI_AS. The upgrade script that SQL Server ran at startup performed a join across those two databases, resulting in the error. I’m still not sure why this script ran, as no service packs or cumulative updates had been applied to this server recently, and it was originally installed as 2008, not upgraded from 2005. With the upgrade script interrupted, the master db was left unusable.
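
If you want to check for this kind of mismatch before it bites you, comparing database collations is quick. Here is a minimal sketch using Invoke-Sqlcmd (assuming the SQL Server PowerShell tools are loaded; the server name is a placeholder):

#List every database's collation so a mismatch with master stands out.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query 'SELECT name, collation_name FROM sys.databases ORDER BY name'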

I started going through the steps to restore master. I was able to start the server in single user mode, yet I could never get in as the single user. Something always seemed to take the connection before I could get in.

I had a long, long conversation with the #sqlhelp folks on Twitter about this. All of these guys and gals were tossing out help: @sqlinsaneo @kbriankelly @darrelllandrum @rusanu @DBArgenis @Kendra_Little @mrdenny @Bugboi @SQLSoldier. We stopped all services that were known to connect, stopped the browser service, changed the port, allowed only IP connections, and paused the cluster node I was working on, and still I was getting an error that only one admin was allowed at a time in single user mode.

Since I was getting nowhere fast, I decided on a different tack, namely to rebuild master and then recover from backup. Rebuilding master was a fairly quick process. An important note about rebuilding master is that it also rebuilds msdb, so now I had two databases to recover from backup. Once the rebuild was complete, I was able to start the service in single user mode and connect (yay!!!). Now I could restore a backup of master, restart SQL normally, and restore msdb. I finally had a running instance 2.5 hours later.

Important lessons learned from this experience:

  • If single user mode isn’t working and you have a good backup of msdb, rebuild then recover. Rebuilding might fix some issues and get you on the road to recovery quicker.
  • Collations of all the system databases really should match. This whole issue was because of a collation conflict between master and msdb.
Thanks again to all the #sqlhelp folks mentioned above.
Categories: clusters, dba Tags:

Sync SSMS Tools Pack Across Computers Using SyncToy

October 6th, 2011 2 comments

My favorite and most used feature of the SSMS Tools Pack is the query history. This feature has saved me a lot of time and effort. DBAs where I work are assigned both a desktop for normal day-to-day operations and a laptop for when we’re at home, in meetings, traveling, etc. Today I ran into a scenario that made me want to keep the query history of both machines in sync. I asked Mladen Prajdic, the creator, about syncing across computers, and he directed me to a blog post written by REAL SQL Guy that does just that. Unfortunately, his method uses Dropbox, which is blocked at work, so I needed an alternative. I regularly use SyncToy from Microsoft to keep other folders in sync, so I decided to do the same for this project. Using SyncToy manually means that both machines need to be powered on and able to see each other on the network, since there is no intermediary.

The first thing I did was share the C:\SSMSTools folder on each machine so I can trigger the sync from whichever computer I’m sitting at. I then created a folder pair on each machine, specifying the “Left” folder as the remote shared folder and the “Right” folder as the local one. I made sure to use the “Synchronize” setting so that changes can go both ways. Give it a name, click Preview, and Run. Now I have my entire history from both machines available from either one.
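
SyncToy also ships with a command-line runner, so once the folder pair exists you don’t even need the GUI. A sketch, with the install path and pair name as assumptions:

#Run the saved folder pair by name; adjust the path and pair name to your setup.
& 'C:\Program Files\SyncToy 2.1\SyncToyCmd.exe' -R 'SSMSTools'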


Categories: dba Tags:

Get All SESSIONPROPERTY Values for Your Session

August 23rd, 2011 No comments

As a follow-up to my SERVERPROPERTY() post, this is a quick script to grab all the SESSIONPROPERTY() values from your current session into a table. It is useful when you forget what the propertyname options are. Properties obtained from http://msdn.microsoft.com/en-us/library/ms175001.aspx.

DECLARE @props TABLE (propertyname sysname PRIMARY KEY)
INSERT INTO @props(propertyname)
SELECT 'ANSI_NULLS'
UNION ALL
SELECT 'ANSI_PADDING'
UNION ALL
SELECT 'ANSI_WARNINGS'
UNION ALL
SELECT 'ARITHABORT'
UNION ALL
SELECT 'CONCAT_NULL_YIELDS_NULL'
UNION ALL
SELECT 'NUMERIC_ROUNDABORT'
UNION ALL
SELECT 'QUOTED_IDENTIFIER'
 
SELECT propertyname, SESSIONPROPERTY(propertyname) AS propertyvalue FROM @props
Categories: sql Tags:

Dallas Tech Fest – Demo Files for SQL Server CLR: An Introduction

August 12th, 2011 No comments

Demo files from my presentation at Dallas Tech Fest 2011.

SQL Server CLR – An Introduction – Dallas Tech Fest 2011

*Updated 8/13: presentation updated with a link to the MCM Readiness Video on SQLCLR

Categories: .net, c#, clr, development, speaking, sql Tags:

Prevent SQL Logins from using SSMS … or any other app

June 17th, 2011 No comments

This is a continuation of my previous LOGON TRIGGER post.

A downside to SQL logins is that anyone with the username/password combination can log in as that user. This prevents us from knowing who truly logged in and performed some action. It is possible to trace and log everything by IP address and correlate back to that.

Another downside is that these accounts are usually service accounts, at least in our environment. These SQL logins may have more permissions than they should because they may be part of a vendor’s product, and the vendor’s app requires them to have sysadmin on the server. We have an issue with some of our users logging in as these SQL logins and running queries or manipulating data that they shouldn’t.

So how can we allow the login to still exist, but deny normal users from logging in under those credentials? There are a few different ways; one could be isolating these logins to specific endpoints that allow only that login from a given IP range. Another option, and the one that made sense here, was to use a LOGON TRIGGER. This gives us a great amount of flexibility in deciding when to prevent a connection from happening.

In this case, we wanted any SQL login using Management Studio to be denied access. Any of the other apps that log in with the account will still work; this includes SqlCmd, Access, etc. Our main problem, though, was devs coming in under SSMS. We also wanted to log any login attempts made this way.

This script creates a logging table in our management database (T1001), creates a view to make querying nicer, then creates the logon trigger to reject any connections that match our criteria.

--------------------------------------------------------------------------------
-- Create table to hold rejected login attempts
--------------------------------------------------------------------------------
 
USE T1001;
GO
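-- Note: this assumes the SQLTrace schema already exists in T1001.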
 
CREATE TABLE T1001.SQLTrace.loginData (
	id INT IDENTITY PRIMARY KEY,
	data XML,
	program_name sysname
)
GO
 
--------------------------------------------------------------------------------
-- Create view to make querying the table a little nicer
--------------------------------------------------------------------------------
CREATE VIEW SQLTrace.loginDataView
AS
SELECT id
      ,data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname') AS EventType
      ,data.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime') AS PostTime
      ,data.value('(/EVENT_INSTANCE/SPID)[1]', 'int') AS SPID
      ,data.value('(/EVENT_INSTANCE/ServerName)[1]', 'nvarchar(257)') AS ServerName
      ,data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname') AS LoginName
      ,data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname') AS LoginType
      ,data.value('(/EVENT_INSTANCE/SID)[1]', 'nvarchar(85)') AS SID
      ,data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'sysname') AS ClientHost
      ,data.value('(/EVENT_INSTANCE/IsPooled)[1]', 'bit') AS IsPooled
      ,program_name
FROM SQLTrace.loginData
GO
 
--------------------------------------------------------------------------------
-- Create logon trigger
--------------------------------------------------------------------------------
USE [master];
GO
 
/****** Object: DdlTrigger [Deny_SQLLogin_SSMS_Trigger] ******/
IF EXISTS(
	SELECT * FROM master.sys.server_triggers
	WHERE parent_class_desc = 'SERVER'
		AND name = N'Deny_SQLLogin_SSMS_Trigger')
DROP TRIGGER [Deny_SQLLogin_SSMS_Trigger] ON ALL SERVER
GO
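--EXECUTE AS 'sa' lets the trigger query sys.dm_exec_sessions and insert into
--the T1001 log table regardless of who is logging in.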
 
CREATE TRIGGER Deny_SQLLogin_SSMS_Trigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
DECLARE @data XML
SET @data = EVENTDATA()
 
DECLARE @AppName sysname
       ,@LoginName sysname
       ,@LoginType sysname
SELECT @AppName = [program_name]
FROM sys.dm_exec_sessions
WHERE session_id = @data.value('(/EVENT_INSTANCE/SPID)[1]', 'int')
 
SELECT @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')
      ,@LoginType = @data.value('(/EVENT_INSTANCE/LoginType)[1]', 'sysname')
 
IF @AppName LIKE ('Microsoft SQL Server Management Studio%') --If it's SSMS
   AND @LoginName <> 'sa'
   AND @LoginType = 'SQL Login' --('SQL Login' | 'Windows (NT) Login')
BEGIN
    ROLLBACK; --Disconnect the session
 
    --Log the exception to our table
    INSERT INTO T1001.SQLTrace.loginData(data, program_name)
	VALUES(@data, @AppName)
END
 
END;
GO
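
Once the trigger is in place, you can watch rejected attempts land in the log table. One way to peek from PowerShell (Invoke-Sqlcmd assumed available; the server name is a placeholder):

#Show the ten most recent rejected login attempts.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query 'SELECT TOP (10) PostTime, LoginName, ClientHost, program_name FROM T1001.SQLTrace.loginDataView ORDER BY id DESC'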

Don’t forget to create a job to purge historical log data at intervals that make sense for you.
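
The purge itself can be a single statement inside that job; this sketch uses a 90-day retention purely as an example:

#Delete logged attempts older than 90 days; the retention period is an example.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query 'DELETE ld FROM T1001.SQLTrace.loginData AS ld JOIN T1001.SQLTrace.loginDataView AS v ON v.id = ld.id WHERE v.PostTime < DATEADD(DAY, -90, GETDATE())'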

Categories: dba, sql, sql triggers Tags: