# Category Archives: sql

I was playing around with BITS transferring backups between servers and discovered that BITS has PowerShell cmdlets. I was previously using the BITSAdmin Tool which was easy, but I wanted to find out more about the PoSh cmdlets. After reading the docs, I decided to use these cmdlets in my MCM video download script.

BITS stands for Background Intelligent Transfer Service. It is a service built into Windows since XP, designed to transfer files in the background without interrupting normal user network usage. Windows Update uses BITS to download all the patches that need to be applied to your computer. You can also use BITS to transfer your own files around. If you regularly transfer large files over the network and get annoyed by how it affects your computer experience, look into replacing the normal copy, xcopy, or robocopy commands with it.
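As a minimal sketch of that replacement (the paths here are examples, not from the original script):

```powershell
# Load the BITS cmdlets, then do a synchronous background transfer
# in place of copy/xcopy/robocopy
Import-Module BitsTransfer
Start-BitsTransfer -Source "\\server\share\backup.bak" `
                   -Destination "D:\backups\backup.bak"
```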

My first attempt queued all the files into one BITS transfer. The upside is that there is only one BITS job to manage; the downside is that the filenames aren’t written to disk until the whole job completes. Long download jobs often get cancelled, and the user would want to be able to listen to or watch the files that had already finished, so I went with one job per file.
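A one-job-per-file loop along those lines might look like this (a sketch with made-up paths, not the original download script):

```powershell
Import-Module BitsTransfer

# One asynchronous BITS job per file, so each file is usable as soon as it finishes
$files = Get-ChildItem "\\server\share\videos" -Filter *.wmv
foreach ($file in $files) {
    Start-BitsTransfer -Source $file.FullName `
                       -Destination "D:\videos\$($file.Name)" `
                       -Asynchronous -DisplayName $file.Name
}

# Later: finalize completed jobs so the downloaded files are written to disk
Get-BitsTransfer | Where-Object { $_.JobState -eq "Transferred" } | Complete-BitsTransfer
```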

The BITS cmdlets require PowerShell v2.0.

# Splitting SQL Server Logs by ProcessInfo

I wanted a way to parse through my SQL Server logs and look at one particular spid or ProcessInfo class, such as Server, Backup, Logon, etc. I came up with the following PowerShell script. *You must have trace flag 1204 and/or 1222 turned on. (More info)

```powershell
param($path)

if ($path -eq $null) { exit }

if (Test-Path $path) {
    "Parsing $path"
    $spid = $null
    gc $path | % {
        #if ($_.Length -gt 35 -and $_.Substring(24, 12).StartsWith("spid")) {
        if ($_ -match "^\d{4}") {
            $spid = $_.Substring(24, 12).Trim().PadRight(12, '_')
        }
        else {
            # A blank line belongs to the previous entry, so keep the current spid
            if ($_.Trim().Length -lt 1 -and $spid -ne $null) {}
            else { $spid = "Default".PadRight(12, '_') }
        }
        $outfile = $path.Substring(0, $path.LastIndexOf(".")) + [string]::Format(".{0}.txt", $spid)
        $_ | Out-File -FilePath $outfile -Encoding "ASCII" -Append
    }
}
else { "Bad path given" }
```

If you dump your log file via the following command, this works really well:

```
sqlcmd -S MyServer\Inst1 -Q "xp_ReadErrorLog" -o myserver.log
```

Once you have it, just call the script and specify the -Path parameter:

```powershell
& '.\Split SQL Log by ProcessInfo.ps1' -Path C:\path\to\myserver.log
```

The script looks at each line, figures out either the spid or the ProcessInfo class, and puts the line into a file with that string in its name. If it sees a blank line, it will put it into the previous file. WARNING: This could result in many files.

Yes, you could dump this to SQL or parse it some other way, but I wanted to split it into files so I can run it through another process to get all the deadlock information, which I’ll be posting about later.

Download script

# Download the MCM Readiness Videos with PowerShell

Microsoft has changed their SQL MCM program to allow more people access to the course and certification. In doing so, they are recording much of the classroom training that was previously required and putting it up on TechNet. We’re talking close to 60 videos already. Sure, you could stream them one at a time, but I wanted to be able to download them all and watch them offline without a bunch of clicking. So I built the scripts below. They’re really simple to use and have only one external requirement that makes dealing with web pages much easier: the Html Agility Pack library, which you’ll need to download from here.
Get the script

Usage:

```powershell
# Get urls of all WMV files
& '.\Download MCM Videos.ps1'
# Download all WMV files
& '.\Download MCM Videos.ps1' -download
# Download all iPod files
& '.\Download MCM Videos.ps1' -download -type ipod
```

# Getting VLF stats for multiple databases

After watching Paul Randal’s (blog | twitter) MCM video on log files, I wanted to find a quick way to get VLF (Virtual Log File) statistics across multiple databases and servers. I went two routes: PowerShell and T-SQL. Our internal monitoring uses T-SQL already, so I wanted to be able to work easily within the existing framework. I also wanted to create a PowerShell version, as some others might find it useful and I just like working with PowerShell.

## T-SQL Version

I wound up making my table structure the same as David Levy’s in this blog post. Michelle Ufford has a very similar script at her blog post.

## PowerShell Version

Thanks to Aaron Nelson (blog | twitter) for giving me an idea to tweak this.

More on VLFs:

# Active / Active Cluster Server Memory Allocation

We have several active / active clusters with multiple instances per node in our environment. We want to ensure the instances are spread as evenly as possible across the nodes. I wrote a quick PowerShell script to go through a text file and, for each instance, get the physical node where it is running, how much physical memory the node has, and how much memory the instance is configured to use. Just create a text file “servers.txt” with the list of instances on a cluster and run the following:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null

$servers = Get-Content "servers.txt"

foreach ($server in $servers) {
    $sqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server $server
    [String]::Format("{0} uses {1}MB RAM. Resides on {2}.",
        $server, $sqlServer.Configuration.MaxServerMemory.ConfigValue, $sqlServer.ComputerNamePhysicalNetBIOS)
    $objCS = Get-WmiObject Win32_ComputerSystem -ComputerName $server.Substring(0, $server.IndexOf("\"))
    [String]::Format("{0}\{1}: {2}MB",
        $objCS.Domain, $objCS.Name, [Math]::Round($objCS.TotalPhysicalMemory / 1mb))
}
```
The formatting leaves much to be desired, but it’s a quick check and that was what I was interested in. Excel can do all the formatting and calculating later.
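If tidier output is wanted before it ever reaches Excel, one option (a sketch, not the original script) is to emit objects and let Export-Csv produce clean columns:

```powershell
# Sketch: collect the same per-instance facts as objects, then export for Excel
$results = foreach ($server in Get-Content "servers.txt") {
    $sqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server $server
    New-Object PSObject -Property @{
        Instance    = $server
        MaxMemoryMB = $sqlServer.Configuration.MaxServerMemory.ConfigValue
        Node        = $sqlServer.ComputerNamePhysicalNetBIOS
    }
}
$results | Export-Csv -Path "cluster-memory.csv" -NoTypeInformation
```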

# Upcoming Series on Source Controlling your Databases

I’m working on a series of posts to walk through how to setup an environment from scratch using free tools to get your databases under source control.

If you already have an environment set up with PowerShell, SMO, and Subversion, you can go ahead and download the PowerShell script from here and get started.

# Using .NET’s TryParse() in SQL CLR

In my current position, I deal with a lot of data quality issues. A column should be a date, but we get integers, strings, NULLs, etc. As a rule of our processing, if we cannot convert the value to the appropriate data type, we use a default value, or a choice of values, depending on the situation.

SQL Server already has the CAST and CONVERT functions built in. These functions have a shortcoming, though: if a value cannot be converted, you get an error and your processing stops. To convert the values safely, you have to pass through the table twice: once to manually fix any known unconvertible values (or set them to NULL), and again to run CAST or CONVERT. We wanted to be able to do this in one step, and this led us to .NET’s TryParse() method.
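As a sketch of that two-pass pattern (the table and column names here are invented):

```sql
-- Pass 1: neutralize values that will not convert (here using ISDATE for dates)
UPDATE Staging
SET RawDate = NULL
WHERE RawDate IS NOT NULL AND ISDATE(RawDate) = 0;

-- Pass 2: now CAST/CONVERT is safe
SELECT CONVERT(datetime, RawDate) AS CleanDate
FROM Staging;
```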

Each basic data type in .NET has a TryParse() method that attempts to parse the value and returns a boolean indicating success. The parsed value is passed back through an output variable. In my case, I don’t really care about the boolean return value. I want a value if parsing was successful, otherwise give me a NULL. This is the syntax for creating the SQL CLR version.
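A minimal sketch of such a CLR function for the datetime case (the class and method names here are illustrative, not necessarily the original’s):

```csharp
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Returns the parsed datetime, or NULL if the value cannot be parsed
    [SqlFunction]
    public static SqlDateTime TryParseDate(SqlString value)
    {
        DateTime result;
        if (!value.IsNull && DateTime.TryParse(value.Value, out result))
            return new SqlDateTime(result);
        return SqlDateTime.Null;
    }
}
```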

And the syntax for using it within SQL.
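A hedged sketch of what the registration and a call might look like (assembly, path, class, and function names are made up for illustration):

```sql
-- Register the compiled assembly, then expose the CLR method as a T-SQL function
CREATE ASSEMBLY TryParseFunctions FROM 'C:\clr\TryParseFunctions.dll';
GO
CREATE FUNCTION dbo.TryParseDate (@value nvarchar(4000))
RETURNS datetime
AS EXTERNAL NAME TryParseFunctions.UserDefinedFunctions.TryParseDate;
GO

-- Second call returns NULL instead of raising an error
SELECT dbo.TryParseDate('2010-01-15'), dbo.TryParseDate('');
```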

Why would I want NULL on failure, you might ask. Well, if I get a NULL back, I can easily check against NULL for success and use functions such as COALESCE to substitute a default value when the parsing failed. A good case for us is when we have blank strings that need to convert into a default date: CONVERT cannot cast the value '' to a datetime and thus throws an error.

Now I can do things like the following to allow processing to complete if CONVERT hits a bad value.
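A sketch of that pattern, assuming the CLR function is registered as dbo.TryParseDate and using an invented table and default date:

```sql
-- Bad or blank values parse to NULL, and COALESCE swaps in the default
SELECT COALESCE(dbo.TryParseDate(RawDate), '1900-01-01') AS LoadDate
FROM Staging;
```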

Now for the drawbacks of this method: invalid values fail silently, so if you need to be sure all values are valid beforehand, this can hide problems. An alternative is to turn it around and use the function to check for invalid values. You can do something like this to find problematic values:
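A sketch of that check, again assuming the hypothetical dbo.TryParseDate name and an invented table:

```sql
-- Rows that have a value but cannot be parsed as a date
SELECT RawDate
FROM Staging
WHERE RawDate IS NOT NULL
  AND dbo.TryParseDate(RawDate) IS NULL;
```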

This allows you to find all unconvertible values without SQL throwing an error as it would if you had tested with CONVERT(). We mainly use this function in the second situation. We want to know all values that would cause problems.

I have also made versions for Int, BigInt and Money following the same basic pattern. As with any CLR function, make sure you know the security and performance implications of using it. Hopefully, if you do a lot of data staging and converting, this could make your life easier.

# Recreate Foreign Keys in PostgreSQL

Here is a function I devised to recreate all foreign keys and toggle whether they cascade:

```sql
CREATE OR REPLACE FUNCTION util_recreate_fks (allow_cascade bool) RETURNS bool AS $$
DECLARE
    constraint_cursor CURSOR FOR
        SELECT t.constraint_name,
               t.table_schema || '.' || t.table_name AS table_name,
               --t.constraint_type,
               c1.table_schema || '.' || c1.table_name AS table_name_pk,
               c1.column_name AS column_name_pk,
               c2.table_name AS table_name_fk,
               c2.column_name AS column_name_fk
        FROM information_schema.table_constraints t,
             information_schema.constraint_column_usage c1,
             information_schema.key_column_usage c2
        WHERE t.constraint_name = c1.constraint_name
          AND t.constraint_name = c2.constraint_name
          AND t.constraint_type = 'FOREIGN KEY';
    _constraint RECORD;
    cascade_setting VARCHAR;
BEGIN
    IF (allow_cascade) THEN
        cascade_setting := 'CASCADE';
    ELSE
        cascade_setting := 'NO ACTION';
    END IF;

    OPEN constraint_cursor;
    LOOP
        FETCH constraint_cursor INTO _constraint;
        EXIT WHEN NOT FOUND;
        EXECUTE 'ALTER TABLE ' || _constraint.table_name
             || ' DROP CONSTRAINT ' || _constraint.constraint_name;
        EXECUTE 'ALTER TABLE ' || _constraint.table_name
             || ' ADD CONSTRAINT ' || _constraint.constraint_name
             || ' FOREIGN KEY (' || _constraint.column_name_fk || ')'
             || ' REFERENCES ' || _constraint.table_name_pk
             || '(' || _constraint.column_name_pk || ')'
             || ' ON UPDATE ' || cascade_setting
             || ' ON DELETE ' || cascade_setting;
    END LOOP;
    CLOSE constraint_cursor;

    RETURN allow_cascade;
END;
$$ LANGUAGE 'plpgsql';
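Calling it is then just a matter of picking the cascade behavior:

```sql
SELECT util_recreate_fks(true);   -- recreate FKs with ON UPDATE/DELETE CASCADE
SELECT util_recreate_fks(false);  -- recreate FKs with NO ACTION
```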

# Return value or 0 if negative

If you ever run across a situation that requires you to subtract two numbers and return the value if positive or 0 if negative, here are a couple of solutions.

1) Use IF:

```sql
value := x - y;
IF value < 0 THEN
    RETURN 0;
ELSE
    RETURN value;
END IF;
```

2) Use UNION:

```sql
select max(value)
from (
    select x - y as value
    union
    select 0
) x
```

I like this way since IF requires plpgsql in Postgres, whereas UNION can be used with plain SQL.
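Worth noting as well: Postgres has a built-in GREATEST function that expresses the same thing in plain SQL, in a single expression:

```sql
select greatest(x - y, 0);
```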