Adding caching to your PowerShell scripts

Most of the time, when some part of a script takes a long time to run and you want to re-use the result, you’ll store it in a variable:

$databases = Get-ListOfDatabases # assume this is an expensive/long-running query
Invoke-SomeOperation -Server serverOne -Databases $databases
Invoke-SomeOtherOperation -Server serverOne -Databases $databases

As long as the function you’re calling allows you to pass in the ‘expensive’ argument you’re fine - but what about cases when the computation takes place inside the function:

Invoke-SomeOperation -Server serverOne # internally calls Get-ListOfDatabases
Invoke-SomeOtherOperation -Server serverOne # internally calls Get-ListOfDatabases

Sometimes it makes sense to rework the script to accept the argument, though other times it can be cleaner to modify the call site to cache the result, rather than changing every function to accept and potentially pass through that argument.

The rest of this post will go through a specific example that motivated caching, share a generic function that implements scriptblock-based caching, and call out a few gotchas.
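To give a flavour of the scriptblock-based approach, here is a minimal sketch. The function name `Get-CachedValue`, its parameters, and the script-scoped hashtable are illustrative assumptions, not the exact implementation from the post:

```powershell
# Minimal sketch of scriptblock-based caching: results are stored in a
# script-scoped hashtable, keyed by a caller-supplied name.
$script:Cache = @{}

function Get-CachedValue {
    param(
        [Parameter(Mandatory)] [string] $Key,
        [Parameter(Mandatory)] [scriptblock] $ScriptBlock
    )

    if (-not $script:Cache.ContainsKey($Key)) {
        # First call for this key: run the expensive scriptblock and remember the result
        $script:Cache[$Key] = & $ScriptBlock
    }

    $script:Cache[$Key]
}

# Both call sites share a single execution of Get-ListOfDatabases
$databases = Get-CachedValue -Key 'Databases' -ScriptBlock { Get-ListOfDatabases }
$databases = Get-CachedValue -Key 'Databases' -ScriptBlock { Get-ListOfDatabases }
```

The key point is that only the call site changes - the functions doing the expensive work don’t need to grow an extra parameter.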

[Read More]
Tags: PowerShell

Resumable Online Index Rebuilds - SQL 2017's Silver Bullet

Every new version of SQL Server comes with a whole grab-bag of new features that open up exciting possibilities, or obviate the need for workarounds (or dirty hacks). In the runup to deploying SQL 2017 I thought that automatic plan correction was going to be the closest thing to a silver bullet I’d seen in any release so far (in terms of value added vs. effort to implement), but it has been eclipsed for us by the awesomeness of Resumable Online Index Rebuilds (ROIR).

In this post I’ll talk through a few of the scenarios where this feature really shines, and how it has transformed the way we think about index maintenance. If you’d like more details about how ROIR is implemented I’d encourage you to read through the excellent paper on the feature - it covers how the online index rebuild algorithm was updated, and also demonstrates that in most cases ROIR outperforms the non-resumable version.

[Read More]
Tags: SQL

SQL Managed Backups and Operating System Error 87

We use SQL Managed Backup for our on-premises SQL Servers, and have been very impressed with it (from a speed, management, and cost perspective). Shortly after deploying the solution, though, the SQL error logs started to report errors when attempting to read managed backups:

BackupIoRequest::ReportIoError: read failure on backup device
Operating system error 87(The parameter is incorrect.).

Nothing suggested there were any issues - backups were still being taken, our backup chain wasn’t broken (restores were fine) - but this error was being logged all the time.

Understanding where the error came from and how to fix it required a better understanding of exactly how managed backup works.

[Read More]
Tags: SQL Azure


Our production environment recently started generating alerts on huge blocking chains (100s of requests), which were accompanied by increased database response times for various procedures. The blocking chains all had the latch wait SERVICE_BROKER_WAITFOR_MANAGER in common, some of which were blocking for seconds (adding significant overhead to operations that would normally complete in a few milliseconds).

In this post I’ll walk through what an environment leveraging service broker might look like, show you how to reproduce the issue, and offer some mitigation strategies/general advice for service broker at scale.


[Read More]
Tags: SQL

Saving and reporting on Pester Infrastructure test results in Log Analytics

Pester is a fantastic tool to test and monitor the status of your infrastructure. There are several libraries which leverage this directly (e.g. SQLChecks, DBAChecks, Operation Validation Framework), and a growing set of resources discussing how to leverage Pester in this fashion (including a Pluralsight course and a chapter in the Pester book). If you don’t already have some tests you can run interactively (is my environment correct right now?) I’d suggest you start there first.

Once you’ve got your infrastructure tests running interactively you’ll probably want to start automating them (is my environment correct every day?). And then at some point you’ll probably find yourself asking questions like:

  • How long has this test been failing for?
  • How long does this test normally take to run?
  • When was the last time this test failed?
  • How is the health of my estate trending over time?

Once you’ve invested time into building out a library of Pester tests for your infrastructure, what you really want to do is analyse the Pester test results. There are various examples out there that discuss how to persist results to files, XML, SQL databases - but none of these options have the advantages that shipping to Log Analytics provides - which is what we’ll discuss today. A few reasons why I think sending your results to Log Analytics is the superior choice:

  • Extremely powerful query language (KQL)
  • Can be queried & consumed by a web browser, Power BI, REST API, Flow, Azure Monitor (for alerting)
  • Lives with the rest of your infrastructure logs (if you use the machine agents/Azure connectors/etc.)
  • Simplified security story (everywhere you run tests posts to an HTTP endpoint - no fileshares/SQL databases/etc.)
  • No impact on the systems under test (how many DBAs have spent time performance tuning their monitoring databases :)?)
  • Low operating cost (the free tier lets you log a lot of Pester results every month)

If you don’t already have a Log Analytics workspace I did a fairly detailed writeup last month: Getting started with Log Analytics and PowerShell logging.
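As a rough sketch of the shape of the pipeline, the snippet below flattens Pester results into records you could post to the Log Analytics HTTP Data Collector API. It assumes Pester v4, where `Invoke-Pester -PassThru` returns an object with a `TestResult` collection; the script path and property selection are illustrative:

```powershell
# Run the infrastructure tests and capture the result object (Pester v4)
$results = Invoke-Pester -Script .\Infrastructure.Tests.ps1 -PassThru

# Flatten each test outcome into a record suitable for a Log Analytics custom log
$records = $results.TestResult | ForEach-Object {
    [pscustomobject]@{
        Describe  = $_.Describe
        Name      = $_.Name
        Passed    = $_.Passed
        Seconds   = $_.Time.TotalSeconds
        TimeStamp = (Get-Date).ToUniversalTime().ToString('o')
    }
}

# This JSON body is what gets POSTed to the Data Collector API endpoint
$body = $records | ConvertTo-Json
```

Building and signing the actual HTTP request is covered in the Log Analytics/PowerShell logging writeup linked above.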

[Read More]