Mitigating SERVICE_BROKER_WAITFOR_MANAGER latch waits

Our production environment recently started generating alerts on huge blocking chains (hundreds of requests), accompanied by increased database response times for various procedures. The blocking chains all had the SERVICE_BROKER_WAITFOR_MANAGER latch wait in common, and some sessions were blocked for whole seconds (adding significant overhead to operations that would normally complete in a few milliseconds).
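
If you want to confirm you’re seeing the same thing, here’s a minimal sketch (assuming the SqlServer PowerShell module; the instance name is a placeholder) that lists sessions currently waiting on that latch along with the sessions blocking them:

# Assumes the SqlServer module is installed; "YourSqlServer" is a placeholder instance name
$query = @"
SELECT wt.session_id,
       wt.blocking_session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type = N'LATCH_EX'
  AND wt.resource_description LIKE N'%SERVICE_BROKER_WAITFOR_MANAGER%';
"@

Invoke-Sqlcmd -ServerInstance "YourSqlServer" -Query $query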

In this post I’ll walk through what an environment leveraging Service Broker might look like, show you how to reproduce the issue, and offer some mitigation strategies and general advice for Service Broker at scale.

Blocking

[Read More]
Tags: SQL

Saving and reporting on Pester Infrastructure test results in Log Analytics

Pester is a fantastic tool to test and monitor the status of your infrastructure. There are several libraries which leverage this directly (e.g. SQLChecks, dbachecks, Operation Validation Framework), and a growing set of resources discussing how to leverage Pester in this fashion (including a Pluralsight course and a chapter in the Pester book). If you don’t already have some tests you can run interactively (is my environment correct right now?), I’d suggest you start there first.
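
If you’re starting from scratch, an interactive infrastructure test can be as small as the hypothetical check below (the service name and the check itself are my own illustration, not taken from any of the libraries above):

# Hypothetical interactive check - "is my environment correct right now?"
Describe "SQL Server service" -Tag "SqlServiceRunning" {
  It "is running on the local machine" {
    (Get-Service -Name 'MSSQLSERVER').Status | Should -Be 'Running'
  }
}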

Once you’ve got your infrastructure tests running interactively you’ll probably want to start automating them (is my environment correct every day?). And then at some point you’ll probably find yourself asking questions like:

  • How long has this test been failing for?
  • How long does this test normally take to run?
  • When was the last time this test failed?
  • How is the health of my estate trending over time?

Once you’ve invested time into building out a library of Pester tests for your infrastructure, what you really want to do is analyse the Pester test results. There are various examples out there that discuss how to persist results to files, XML, or SQL databases - but none of those options offer the advantages that shipping to Log Analytics provides, which is what we’ll discuss today. A few reasons why I think sending your results to Log Analytics is the superior choice:

  • Extremely powerful query language (KQL)
  • Can be queried & consumed by a web browser, Power BI, REST API, Flow, Azure Monitor (for alerting)
  • Lives with the rest of your infrastructure logs (if you use the machine agents/Azure connectors/etc.)
  • Simplified security story (every machine that runs tests posts to an HTTP endpoint - no file shares/SQL databases/etc.)
  • No impact on the systems under test (how many DBAs have spent time performance tuning their monitoring databases :)?)
  • Low operating cost (the free tier lets you log a lot of Pester results every month)
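
To give a feel for the mechanics, here’s a minimal sketch (the property names are illustrative rather than the schema used in the post, and it assumes Pester v4): run the tests with -PassThru, flatten each result, and convert to JSON ready to post to the Log Analytics HTTP Data Collector API.

# Assumes Pester v4; the folder path and property names are placeholders
$results = Invoke-Pester -Path .\InfrastructureTests -PassThru

$payload = $results.TestResult | ForEach-Object {
  [pscustomobject]@{
    Describe = $_.Describe
    Name     = $_.Name
    Passed   = $_.Passed
    Seconds  = $_.Time.TotalSeconds
    Computer = $env:COMPUTERNAME
  }
} | ConvertTo-Json

# $payload would then be posted to the workspace's HTTPS data collector endpoint
# (e.g. via a signed Invoke-RestMethod call)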

If you don’t already have a Log Analytics workspace I did a fairly detailed writeup last month: Getting started with Log Analytics and PowerShell logging.

[Read More]

Getting started with Log Analytics and PowerShell logging

Log Analytics is a fantastic place to ship, store, and analyse your logs. Whether they’re coming from a linked Azure resource, machine agents, or you’re posting them from your own applications and services, Log Analytics is a key part of Azure Management & Monitoring. Whether you’re an IT Pro, working in DevOps, or an application developer - this platform and its capabilities are worth exploring and understanding.

Log Analytics was previously offered as part of the Operations Management Suite (OMS) bundling, though that labelling is in the process of being retired.

To get started you can create a free workspace which lets you ingest up to 5GB of data per month. Once the data is loaded there is no cost to query it, and it’ll be retained for 31 days (you can increase the retention and ingestion limits as part of a paid plan later).

This post will walk through creating a Log Analytics workspace, uploading some logs with PowerShell, and then querying them via the portal. To follow along you’ll need an Azure subscription and the AzureRM PowerShell module - for installation instructions see the prerequisites section at the end of this post.
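
As a preview of the first step, here’s a minimal sketch of creating the workspace (the resource group name, workspace name, region, and SKU are all placeholders/assumptions - the full walkthrough covers the details):

# Assumes you've already signed in with Connect-AzureRmAccount
New-AzureRmResourceGroup -Name "loganalytics-demo" -Location "North Europe"

New-AzureRmOperationalInsightsWorkspace `
  -ResourceGroupName "loganalytics-demo" `
  -Name "loganalytics-demo-workspace" `
  -Location "North Europe" `
  -Sku "PerGB2018"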

[Read More]

Troubleshooting a slow application with Application Insights

Application Insights (AppInsights) is a fantastic instrumentation framework that, with minimal or zero configuration, will start giving you rich data about your application’s performance.

We recently got some reports that one of our website solutions was ‘slow’ when developing locally, and as much as we’d like to turn to the DBA (you know what DBA stands for, right? I like Database Blamed Always…), with AppInsights we can be a little more rigorous.

From our starting point of ‘it runs slow locally, I think it is the database’ we’ll figure out precisely how slow it is, and whether it really is the database or not.

[Read More]

Ensuring your Describe Tags are unique in Pester tests

The name of each test in SQLChecks is used both as the setting name in the configuration files and as the tag on the Describe block. After seeing the benefit of fine-grained control over test execution (from Claudio Silva’s post dbachecks - a different approach…), this method of test invocation became the preferred way to leverage the SQLChecks library:

$config = Read-SqlChecksConfig -Path $sqlChecksConfigPath
foreach($check in (Get-SqlChecksFromConfig $config)) {
  # Note we invoke by -Tag $check - a test with no tag will never get invoked
  Invoke-SqlChecks -Config $config -Tag $check
}

There isn’t yet a convention for how to name a test, and we’ve already had some tests built with similar sounding names - it is only a matter of time before we get a duplicate. To prevent duplicate tests accidentally getting checked in (and causing unusual/broken behaviour for consumers), I recently added a test that parses the test files and ensures that each tag is not only unique within the file, but globally within SQLChecks.
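
The general shape of that test is sketched below (the path is a placeholder and the handling is simplified - the real version is linked further down): parse each test file with the PowerShell parser, collect the -Tag argument of every Describe block, and assert there are no duplicates.

# Simplified sketch - the test file path is a placeholder
$tags = Get-ChildItem -Path .\Tests -Filter *.Tests.ps1 | ForEach-Object {
  $ast = [System.Management.Automation.Language.Parser]::ParseFile(
    $_.FullName, [ref]$null, [ref]$null)

  # Find every Describe command in the file
  $describes = $ast.FindAll({
    param($node)
    $node -is [System.Management.Automation.Language.CommandAst] -and
    $node.GetCommandName() -eq 'Describe'
  }, $true)

  foreach ($describe in $describes) {
    $elements = $describe.CommandElements
    for ($i = 0; $i -lt $elements.Count - 1; $i++) {
      # The element after -Tag is the tag value
      if ($elements[$i] -is [System.Management.Automation.Language.CommandParameterAst] -and
          $elements[$i].ParameterName -eq 'Tag') {
        $elements[$i + 1].Extent.Text.Trim('"', "'")
      }
    }
  }
}

Describe "SQLChecks test files" {
  It "has globally unique Describe tags" {
    $tags | Group-Object | Where-Object Count -gt 1 | Should -BeNullOrEmpty
  }
}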

Describe tags test on AppVeyor

You can find the full test on GitHub, or read on for an explanation of how it is implemented. For a more thorough exploration of tests you can run on Describe blocks see SQLDBAWithABeard’s blog Using the AST in Pester for dbachecks (which inspired this test), or the TechNet post Learn how IT Pros can use the PowerShell AST.

[Read More]