Category Archives: learning

SQLskills Immersion Event on Internals and Performance – Day 5


Today was the fifth and final day of the Internals & Performance immersion event. I have really enjoyed learning not only from Paul and Kimberly, but from my classmates as well. There were many smart people in that room, and hearing each other's perspectives is a part of the class that I'm glad is there. I want to thank Paul and Kimberly for doing these events. I also want to thank my employer for sending me.

We spent the morning finishing up the Statistics module, with more interesting material on helping the query optimizer pick the best plan. Indexing Strategies covered even more ways to give the optimizer the information it needs to produce a good plan. It's amazing how much difference this can make.

The last part of the day was Paul rushing through Index Fragmentation. Sadly, the Indexing for Performance modules took up a lot of time, so this module was shortened. Fortunately, I had seen the videos, and what was skipped is in the practice virtual machine image that we got on the first day.

To finish up, Kimberly did a quick presentation on the effects GUIDs have on clustered indexes. If you haven't seen this presentation or read the blog post, do so. You'll thank yourself if you are able to make the changes or prevent the implementation in the first place.

Tonight was another night at the hotel bar, but it was my first time staying for it and socializing. It’s great when instructors will stick around and socialize. You learn more about their experiences that way and you get to tell your own stories. Now I regret not sticking around the previous nights.

There’s another event in Dallas in March which I probably won’t get to attend. I really want to go to their HA & DR class in Bellevue in August. Hopefully that will work out.

Indexing for Performance (Statistics)

  • As of 2005, stats updates are based on a column modification counter. This means a column has to change by roughly 20% before stats are auto-updated.
  • (col = @v1 OR @v1 IS NULL) is bad. WITH RECOMPILE won’t help
  • Stored-Procedure-Parameters-giving-you-grief-in-a-multi-purpose-procedure.aspx
  • sp_executesql = force statement caching
  • Inaccurate stats can cause problems, filtered stats over ranges can help
  • Have jobs to run scheduled maintenance on filtered stats
  • Steve Kass has some interesting math background info for stats
  • Beware uneven data distributions
  • Don’t trust the query estimate cost comparisons
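As a sketch of the filtered-stats tips above (the table, column, and date range are made up for illustration):

```sql
-- Hypothetical filtered statistics over a hot range of data
CREATE STATISTICS st_Orders_2011
ON dbo.Orders (OrderDate)
WHERE OrderDate >= '20110101';

-- Filtered stats aren't auto-updated as aggressively as table-wide stats,
-- so refresh them from a scheduled maintenance job:
UPDATE STATISTICS dbo.Orders st_Orders_2011 WITH FULLSCAN;
```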

Indexing for Performance (Indexing Strategies)

  • Don’t index to the query plan, index to the query
  • OR is similar to UNION… you may be able to get better performance with UNION; UNION ALL is even better if you can ensure no duplicates
  • Hash => tempdb
  • sp_updatestats will update stats if even 1 row has changed
  • Relational data warehouse, heavily consider indexed views
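The OR-versus-UNION note above might look like this (table and column names are made up):

```sql
-- An OR across two different columns often forces a scan:
SELECT OrderID FROM dbo.Orders
WHERE CustomerID = 42 OR SalesRepID = 7;

-- Rewritten with UNION, each branch can seek its own non-clustered index.
-- UNION ALL is cheaper still if you can guarantee the branches never
-- return the same row (UNION has to sort away duplicates):
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42
UNION
SELECT OrderID FROM dbo.Orders WHERE SalesRepID = 7;
```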

Index Fragmentation

  • Clustered / Non-clustered fragmentation is the same
  • Binary search used for singleton lookup on a page
  • Readaheads need logically ordered pages
  • 4MB largest readahead possible in 2008
  • Narrow index keys = shallow index tree = faster lookups
  • Intra query parallelism deadlock
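One way to see how badly logical ordering has decayed is sys.dm_db_index_physical_stats; a quick sketch (the page-count filter is an arbitrary choice to skip tiny indexes):

```sql
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000          -- ignore trivially small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```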

SQLskills Immersion Event on Internals and Performance – Day 4


We just finished the fourth day of SQLskills learning. Today was all on indexes and statistics. We learned about choosing a clustering key, what makes a good one, what makes a bad one. We also have to keep in mind the consequences to non-clustered indexes when choosing the clustered index. We learned about why SQL will choose to use a certain index and why it might use one we wouldn’t expect.

Tipping points are interesting. This is the point at which SQL decides it’s cheaper to do a scan with sequential IOs than lookups with random IOs. I was amazed at how small a percentage is required before it tips to a scan. The narrower the table, the quicker the tip. This isn’t necessarily a bad thing, but you have to be aware of it. Covering indexes aren’t affected by tipping points.
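The arithmetic behind that surprise can be sketched with Kimberly's published rule of thumb, that the plan tips somewhere around a quarter to a third of the table's page count, measured in rows (the numbers below are illustrative):

```python
# Rough tipping-point estimate. The rule of thumb: a nonclustered-index
# seek + bookmark lookups stop being cheaper than a scan somewhere
# between 1/4 and 1/3 of the table's PAGE count, expressed in ROWS.
def tipping_point_rows(page_count):
    """Return the (low, high) row-count range where the plan likely tips."""
    return page_count / 4, page_count / 3

pages = 10_000            # e.g. a 1,000,000-row table at ~100 rows/page
rows = 1_000_000
low, high = tipping_point_rows(pages)
print(f"tips between {low:.0f} and {high:.0f} rows "
      f"({100 * low / rows:.2f}%-{100 * high / rows:.2f}% of the table)")
```

So for this hypothetical table the plan tips at well under 1% of the rows, which matches how small the percentage felt in class.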

Which brings up covering indexes. Be careful of creating too many, or too narrow, covering indexes. You really need to consider the entire workload before creating any new indexes. You should also periodically review indexes for consolidation.

There seem to be many misconceptions hanging around, or old best practices that were superseded when 7.0 / 2000 came out. For example, you don’t necessarily have to list the most selective column first in a non-clustered index. List columns in order of use frequency.

Tonight Paul & Kimberly opened the floor to anyone wanting to present a 20 minute topic. Kendra Little (blog | twitter), Brian D. Knight (twitter), Trevor Barkhouse (blog | twitter), Martin Hill (blog | twitter) and myself presented. I know I flubbed my presentation by going down a rabbit hole. Twenty minutes isn’t enough to go down those holes. I need better presentation discipline and posture.

Today was a bit more difficult staying focused. I think it’s just being in the same environment for almost a week. Hopefully I’ll be more focused tomorrow.

Indexing for Performance (Index Internals)

Indexing for Performance (Internals & Data Access)

  • Have an idea of each table’s tipping points, they can be smaller than you think
  • Dense tables will have smaller tipping points, possibly < 1%
  • Forced parameterization could conflict with filtered indexes / stats
  • Always explicitly specify session options
  • Filtering indexes / stats can offer widely covering indexes when highly selective
  • Most selective column shouldn’t necessarily be first in index
  • If equality based, order columns in index by frequency used

Indexing for Performance (Statistics)

  • Query predicates can be affected by order, for large numbers of predicates
  • Histograms are usually key
  • Update stats before rebuilding indexes as it affects parallelization
  • sp_createstats 'indexonly' to create stats on non-statted columns of indexes
  • Out of date stats can affect query plan query cost relative to batch
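The sp_createstats note, as a runnable sketch (the table and index names in the DBCC line are made up):

```sql
-- Create single-column stats on index columns that don't already have them:
EXEC sp_createstats 'indexonly';

-- Then inspect a histogram to see what the optimizer sees:
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');
```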

SQLskills Immersion Event on Internals and Performance – Day 3


Today was mostly about Table Design and Partitioning, with a little Indexing. Partitioning is a big, complex topic with a lot of intricate details and interactions. If you are thinking about implementing partitioning using either partitioned tables or partitioned views, do your homework. Don’t just implement either without knowing the background of both. This is not a simple subject.

Another key takeaway: be smart when choosing between heaps and clustered indexes. Heaps have their place, but good CIs are the better choice in most cases. Also understand the effect that choosing certain CIs will have on your non-clustered indexes. Beware choosing a bad CI.

After hours, SQL Sentry did a demo of their Event Manager & Performance Advisor products. There’s practical experience behind the products. The disk info they give is amazing. Tracking down blocking and deadlocks looks really easy using their tool.

Table Design Strategies & Partitioning

  • Partitioned views more likely than partitioned tables
  • Neither solves all problems by itself
  • A single large table does NOT have to be one table
  • Partitioned tables – EE only
  • Partitioned views – Any edition
  • Staging filegroups, once clean, Rebuild Clustered Index on destination FG
  • PV: check constraints after load
  • PV: nothing protects you from creating different indexes on participating tables
  • PV: make sure indexes match
  • Re-enabling constraints should be done WITH CHECK
  • PT Function = Logical, Scheme = Physical
  • Do Partition Lab, then read Kim’s 2005 WP, then read Ron’s 2008 WP using DATE, then Online Ops lab
  • Using functions in defining the partition function is a one time calculation upon creation
  • Switching out tables is easier than switching in
  • Requirements are checked when switching in
    • Set next used file group
    • Constraint
    • SPLIT Function
    • SWITCH

Indexing for Performance

  • SQL Server has an error 666, if you exhaust a uniquifier (INT) on a poorly chosen clustered index

SQLskills Immersion Event on Internals and Performance – Day 2


Day two of the SQLskills Immersion Event is over and I have listed the interesting parts below. The food, again, was great, except for the lack of danishes during breakfast (I really like those). Today I got to see Kimberly’s teaching style. She’s a fast talker, so you better keep up. Sadly, I was a bit more sleepy in class today, though that’s not a reflection on Kimberly. I keep waiting for Kimberly to go into a Joan Cusack type rant; she just has those mannerisms. To tell you how much this couple cares about teaching and the value of whiteboards… they shipped their own two whiteboards down here from Washington state, even through what seemed like a pretty big hassle.

Tonight also saw SQL Sentry demo their free Plan Explorer tool. A really cool tool that’s worth checking out. I learned a few tips for using it that I didn’t know before.

Logging, Recovery, and Log File Maintenance

  • Updates request update locks on objects; when all locks are granted, the locks are converted to eXclusive as the rows are updated.
  • There is no such thing as a non-logged operation (in a user database)
  • fn_dblog()
  • Crash recovery is logged as well (compensation records)
  • Fixed width columns are grouped in a log record
  • Variable width columns get individual log record
  • *Committed* transaction log records are always written to disk
  • Uncommitted transaction log records *could* be buffered. Checkpoint causes these to flush to disk.
  • Nested COMMIT TRANSACTION does nothing other than decrement @@TRANCOUNT
  • Nested ROLLBACK rolls back everything
  • Don’t use nested transactions, use save points instead
  • BULK_LOGGED recovery model is kind of useless; you save log space, but not log backup space.
  • If you want to use it: BACKUP LOG, switch to BULK_LOGGED, do the operation, switch back to FULL, BACKUP LOG.
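That sequence, sketched in T-SQL (database name and backup paths are placeholders):

```sql
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_before.trn';

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ... perform the minimally logged operation (index rebuild, BULK INSERT, etc.) ...
ALTER DATABASE MyDB SET RECOVERY FULL;

BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_after.trn';
```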

Locking & Blocking

  • Fix your code
  • Optimize your indexes
  • Watch out for more than one incompatible lock on an object.
  • UPDATE … @var = col1 = col1-@debit (added in 6.0) Really neat way of updating a row and getting a value back in one query
      INSERT INTO testNum VALUES(5)
      DECLARE @id INT
      UPDATE testNum
      SET @id = id = id + 5
      WHERE id = 5
      PRINT @id
  • UPDATE … OUTPUT (to table variables)
  • No such thing as a nested transaction
  • KB 271509 – sp_blocker_pss08
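The UPDATE … OUTPUT bullet, sketched against a made-up table:

```sql
DECLARE @debited TABLE (id INT, new_balance DECIMAL(10, 2));

-- Update a row and capture its new value in one statement:
UPDATE dbo.Accounts
SET balance = balance - 50
OUTPUT inserted.id, inserted.balance INTO @debited
WHERE id = 5;

SELECT id, new_balance FROM @debited;
```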

Nested transactions (no such thing, though you’ll want to learn about save points)

        SAVE TRAN T3
    COMMIT TRAN T2 ==> @@TRANCOUNT = 1
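A fuller save-point sketch (the table name is illustrative): roll back part of a transaction without losing the rest:

```sql
BEGIN TRAN;
    INSERT INTO dbo.AuditLog VALUES ('step 1');
    SAVE TRAN BeforeStep2;
    INSERT INTO dbo.AuditLog VALUES ('step 2');
    ROLLBACK TRAN BeforeStep2;   -- undoes only step 2
COMMIT TRAN;                     -- step 1 still commits
```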

Snapshot Isolation

  • If you use row versioning, use a FILLFACTOR to prevent page splits
  • Versioning will double your writes (once to data file, once to tempdb) and likely increase tempdb space, although not necessarily.

Table Design Strategies & Partitioning

  • Vertical partitioning could be useful depending on the access patterns and page density
  • Get your data types right
  • If LOBs aren’t used often, consider pushing them off-row
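The off-row tip uses a real table option; a sketch with an illustrative table name:

```sql
-- Store large value types (varchar(max), nvarchar(max), varbinary(max), xml)
-- off-row so rarely touched LOBs don't drag down page density:
EXEC sp_tableoption N'dbo.Documents', 'large value types out of row', 1;
```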

SQLskills Immersion Event on Internals and Performance – Day 1

Sonny Bryans Smokehouse Bbq would make an interesting classroom

Today was the first day of my first SQLskills event. The event is five 8-hour days of intense, master-level SQL training. This particular event is on Internals & Performance. Lucky for me, it’s being held in Dallas so I don’t have to travel for it.

I plan on blogging my impressions for the day every evening. We’ll see if I can stick to that. But without further ado, my first day.

General Impressions

Paul (blog | twitter) and Kimberly (blog | twitter) are great facilitators. Paul taught all of today, so I cannot really speak to Kimberly’s teaching… yet. Kimberly handled logistics today. I was having admin issues with the hotel wireless that she helped solve. Paul & Kimberly both would field any question that was presented… as long as it involved sheep. I was also curious if Paul ever tires of the sheep jokes. He doesn’t, so keep ’em coming.

Session Impressions

Breakfast was great. So many conferences or events are catered mediocre at best. The selection for breakfast was a nice variety of food… including the all-important bacon. They also had more than just coffee and water, which is nice since I don’t drink coffee. Thanks to SQL Sentry for sponsoring the food.

The first half of the day was all about data structures. It had a good, quick pace, until we got into viewing hex dumps of data pages. I know it’s important to learn, just tedious. Data structures are all about how data rows are physically manifested on disk, how they are logically built, what the structure hierarchy is, and how data compression / sparse columns work.

Lunch was again good food. I hope the food stays this good for the rest of the week. The quality of the food makes a difference in how you feel during the next few hours.

The afternoon was a little more data structures and then data file internals. Again, good quick pace. More about why shrink is bad. Suggestions on where to place data / log files. How SQL Server allocates space. And tempdb. Interesting thing I learned this afternoon was that there are objects in SQL Server that you can only query if you are using the dedicated admin connection (DAC).


Having already viewed the related MCM videos, today was largely a refresher / reinforcement of what I had previously watched. The difference being the depth and time taken on each topic. I really like rapid-pace classes. It gives me less chance to get sleepy, which is usually a problem for me in a classroom setting. As Brian D Knight (twitter), not that Brian Knight, said: “The whiteboard instruction alone at the #sqlskills Immersion Event is worth the price of admission over just watching the MCM videos.”

A Half-Price Seat at the Masters’ Table

Well, it looks like I half-won the SQLskills competition for a free seat at a SQLskills Master Immersion Event on Internals and Performance in Dallas.

Congrats to the two winners who won a free seat: Sankar Reddy (blog | twitter) and Martin Catherall (blog | twitter)

And the three others, besides myself, that won a half-price seat: Samson Loo (blog | twitter), Jason Brimhall (blog | twitter) and Kendra Little (blog | twitter)

I look forward to seeing Paul, Kim, and the rest of the winners, along with the other students next month.

A Seat at Two Masters’ Table

Paul Randal (blog | twitter) blogged about winning a free seat at a SQLskills Master Immersion Event on Internals and Performance in Dallas.

Since this is Paul and Kimberly judging, I’ll be writing it explicitly for them as “It Depends.”

So, if their judging style is based on humor along with Paul / Kimberly / Brent / SQL memes, then here is that entry.

Come on, it’s Paul and Kimberly, one of the greatest sheep herders ever along with the greatest sheep herder herder. Wouldn’t you want to take the opportunity to learn how sheep can be trained to process that much data that fast? There might even be a green-fisted Lady Gaga impersonator punching the bacon harder.

After the class, I’d love to go around helping others get the best performance out of their livestock. So many have problems with corrupt sheep or dead sheep and there’s just not enough of the snippy Scotsman to go around. I’d like to also use it as a starting point to becoming a Certified Master in Sheep Query Lawn Server.

Otherwise, if they are judging on technical merit:

I want to learn everything I can about the tool I chose to base my profession on. Paul & Kimberly are recognized as some of the most knowledgeable people on the internals of that tool. The pair are also very good at teaching and spreading the wisdom and experience they have gained. If I can glean a portion of that, then I can enhance my performance as a SQL Server professional. This would also be a large first step toward becoming an MCM for SQL Server.

One thing I really like doing is helping people. I like solving troubles for people who cannot do so themselves. It’s something I’ve done as long as I can remember. I want to take the knowledge gained and use it to help people. I know that companies are usually the ones that require the technical help. Those companies are made up of people trying to do what they do best, but who maybe can’t learn this material at the depth we can. The people suffer if the company does.

Another way to help is to teach others. Professionals like to learn. Teaching a professional or an eager student is a wonderful experience. Knowing that you taught them something new, clarified a portion of thought, or brought a new perspective is very rewarding. This training would help me be a deeper tome from which others can learn.

So Paul, Kim… I’d like a seat at your table when you come to Dallas.

Download the MCM Readiness Videos with PowerShell: Now with BITS

I was playing around with BITS transferring backups between servers and discovered that BITS has PowerShell cmdlets. I was previously using the BITSAdmin Tool, which was easy, but I wanted to find out more about the PoSh cmdlets. After reading the docs, I decided to use these cmdlets in my MCM video download script.

BITS stands for Background Intelligent Transfer Service. It is a service built into Windows since XP, designed to transfer files in the background without interrupting normal user network usage. Windows Update uses BITS to download all the patches that need to be applied to your computer. As stated above, you too can use BITS to transfer files around. If you regularly transfer large files over the network and get annoyed by how it affects your computer experience, look into replacing the normal copy, xcopy, and robocopy commands with it.

My first attempt queued all the files into one BITS transfer. The upside is only one BITS job to manage. The downside is the filenames aren’t written until the whole job completes. Long download jobs often get cancelled, and the user would want to be able to listen to or watch the files that had already finished. I wanted the opposite, so I went for one job per file.
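A minimal sketch of the one-job-per-file approach (the URLs and destination folder are made up):

```powershell
# One BITS job per file, so each finished file is usable immediately
Import-Module BitsTransfer

$urls = @('http://example.com/video1.wmv',
          'http://example.com/video2.wmv')

foreach ($url in $urls) {
    $dest = Join-Path 'C:\MCMVideos' (Split-Path $url -Leaf)
    Start-BitsTransfer -Source $url -Destination $dest -DisplayName $dest
}
```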

The BITS cmdlets require PowerShell v2.0

Download the script

Download the MCM Readiness Videos with PowerShell

Microsoft has changed their SQL MCM program to allow more people access to the course and certification. In doing so, they are recording much of the classroom training that was previously required and putting it up on TechNet. We’re talking close to 60 videos already. Sure, you could stream them one at a time, but I wanted to be able to download them all and watch them offline without a bunch of clicking. So I built the below PowerShell scripts. They’re really simple to use, and only have one external requirement that makes dealing with web pages much easier.

You’ll need to download the Html Agility Pack library from here.

Get the script


Get URLs of all WMV files
    >& '.\Download MCM Videos.ps1'

Download all WMV files
    >& '.\Download MCM Videos.ps1' -download

Download all iPod files
    >& '.\Download MCM Videos.ps1' -download -type ipod