Setting sp_configure values with SQLChecks

As of v1.0, SQLChecks contains the Set-SpConfig command, which takes a file documenting a server configuration (specifically sp_configure values) and applies that configuration to the server. The configuration file is the same one used by the Pester tests (perhaps in combination with something like dbachecks), which means you now have a mechanism to document, test, and set your server’s configuration.

To apply the configuration to a single server, run the following PowerShell (note that SQLChecks configuration files contain the instance name, which is why we don’t have to specify a server):

  Read-SqlChecksConfig "c:\configs\localhost.config.json" `
  | Set-SpConfig -Verbose

Running with -Verbose outputs progress as each value is changed, as well as a summary on completion (x/y config values updated).
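Because Set-SpConfig accepts pipeline input, extending this to a folder of configuration files is straightforward. A minimal sketch, assuming one config file per instance under a single folder:

  Get-ChildItem "c:\configs\*.config.json" `
  | ForEach-Object { Read-SqlChecksConfig $_.FullName | Set-SpConfig -Verbose }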

Note that the command compares the configured value against the expected value; if the configured value is correct but the runtime value is wrong, this will neither fail the Pester tests nor update the value when using Set-SpConfig.
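If you want to check for that kind of drift directly, sys.configurations exposes both numbers: value is the configured value, value_in_use is the runtime value. For dynamic options a RECONFIGURE brings them back in sync; static options need a service restart.

select name,
	value as configured_value,
	value_in_use as runtime_value
from sys.configurations
where value <> value_in_use;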

[Read More]

Cleaning up database mail profiles and accounts

You know that dev server you’ve got lying around with about half a dozen database mail profiles on it? Maybe one account for every provider you’ve tested? Or maybe it’s even a production server that has had its profile faithfully updated to a new account each time your SMTP server moved, but never had the old ones cleared out?

After recently moving all of our servers to use SendGrid SMTP for sending out database mail I decided to perform some long overdue spring cleaning. In our case that included a couple of dev servers, as well as a production server where you could divine the history of the company from the various database mail profiles and accounts it had.

Your servers might have a legitimate reason to contain multiple mail profiles/accounts, so only run the script below on a server if you are sure you want to remove every account/profile except the default.

use msdb; /* Database Mail tables live in msdb */

declare @defaultProfileId int;
select @defaultProfileId = pp.profile_id
from dbo.sysmail_principalprofile as pp
where pp.principal_sid = 0x0 /* Guest */
and pp.is_default = 1;

if @defaultProfileId is null
begin
	;throw 50000, 'No default profile set', 1;
	return;
end

/* Delete non-default profiles and their account mappings */
delete from dbo.sysmail_profileaccount where profile_id <> @defaultProfileId;
delete from dbo.sysmail_profile where profile_id <> @defaultProfileId;

/* Remove orphaned accounts */
with cte as (
	select a.account_id
	from dbo.sysmail_account as a
	where not exists (
		select *
		from dbo.sysmail_profileaccount as pa
		where pa.account_id = a.account_id
	)
)
delete from cte;
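
To confirm what’s left, the documented Database Mail helper procedures list the surviving profiles and accounts - after the cleanup you should see only the default profile, plus whichever accounts are still mapped to it:

exec msdb.dbo.sysmail_help_profile_sp;
exec msdb.dbo.sysmail_help_account_sp;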

For an example script that configures the default profile and account to use a SendGrid SMTP profile, see this gist. Plug in your SendGrid API key and you’re good to go.

[Read More]

Better expenses with Monzo and PowerShell

While making expense submission fun is probably impossible, we can leverage Monzo and PowerShell to automate away some of the manual steps involved. In my specific case I need to submit monthly expenses with receipt support and categorise each expense. Combine Monzo’s tagging, image support, and CSV export with PowerShell’s ability to…well, do pretty much anything, and we have a workable solution!

Assuming I’ve correctly tagged all my transactions with the #expense tag I can then take the monthly export (available from the spending tab) and get my expense submission mostly prepared with a single PowerShell command:

. c:\src\blog-monzo-expenses\Expenses.ps1

$monzoExport = "c:\expenses\monzoexports\March2018.csv"
$exportFolder = "c:\expenses\submissions\March2018"

Export-ExpenseFromMonzo -MonzoExport $monzoExport -ExportFolder $exportFolder

This creates a CSV that contains transactions tagged with #expense in the Monzo account currency, downloads the first image associated with each transaction (and names it to match the line in the CSV), and maps the Monzo category to the internal expense reporting category.
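
The core of the filtering and download steps is simple enough to sketch. The column names below (Notes, ID, ReceiptUrl) are assumptions about the export schema rather than Monzo’s actual headers - the example repo linked below contains the real mapping:

# Column names are assumed for illustration; see the repo for the real schema
$expenses = Import-Csv $monzoExport | Where-Object { $_.Notes -match '#expense' }
$expenses | Export-Csv (Join-Path $exportFolder "expenses.csv") -NoTypeInformation

foreach ($expense in $expenses) {
    # 'ReceiptUrl' is a hypothetical column holding the first attachment link
    if ($expense.ReceiptUrl) {
        Invoke-WebRequest -Uri $expense.ReceiptUrl `
            -OutFile (Join-Path $exportFolder "$($expense.ID).jpg")
    }
}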

[Image: Expense folder]

A little copy-paste later and I’m done. Read on for details on how it works, and where you can customise the data you return. If you want to use this on your own data (or try it out on some sample data) you can get everything you need in this example GitHub repo.

[Read More]

Improving database Pester tests - per-database checks

When we first started putting tests together for SQLChecks we naively/optimistically thought we’d mostly be seeing a sea of green, with failures being rare. This influenced the way we developed ‘database’ tests: when you test an instance for ‘databases with files too full’, the test gives you a single pass/fail for the entire instance.

This is fine when the test passes, but as soon as it fails it is spectacularly unhelpful in figuring out what broke.

[Image: Something is wrong]

Arriving in the morning to discover that one (or more) databases on an instance have a problem isn’t particularly actionable, so I’ve recently started moving all SQLChecks tests over to per-database checks, which are a lot more helpful.

[Image: Some specific database is wrong]
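
In Pester terms the change looks something like the sketch below: rather than one It block asserting over every database, we generate an It block per database, so a failure names the database at fault. The helper functions (and the $config variable, assumed to come from Read-SqlChecksConfig) are hypothetical stand-ins rather than SQLChecks’ actual API:

Describe "Databases - files not too full" {
    foreach ($database in (Get-DatabasesFromConfig $config)) {
        It "$database has no files over the max data usage threshold" {
            Get-FilesOverMaxDataUsage -Config $config -Database $database |
                Should -BeNullOrEmpty
        }
    }
}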

The rest of this post covers what the changes looked like, and talks a little more about the benefits of structuring tests this way.

[Read More]

Pattern - SQL Server as a shared cache for expensive stored procedures

The following scaling rules will take you a long way if you are supporting an environment with a SQL Server datastore:

  • Cache as much as you can in the application tier
  • Offload as much computation into the application as possible
  • Minimise the work high-volume queries have to do
  • Limit the number of transactions you do, and the work each transaction does

Knowing when to ignore (or even break) these rules is what keeps the job interesting.

The rest of the post will walk through a generic pattern to cache stored procedure results that vary by parameter, including the logic needed to expire, clean up, and evaluate the cache. This solution has been battle-tested in production with a fairly expensive procedure called concurrently from multiple application nodes (for dozens of different parameter combinations).

The environment that motivated this work already aggressively caches results in the application tier; the specific motivation to cache the results in SQL came from the number of application nodes increasing. Building a shared cache service (or introducing something like Redis/Memcached) is a non-trivial engineering project, and the SQL CPU pressure this proc caused was significant. Other options (e.g. incremental cache updates/aggregation in the application tier) were also judged to be significant projects (or at least, more significant than caching it in the database!).
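
To make the pattern concrete, here is a minimal sketch of the shape it takes. Object names, the JSON payload, and the five-minute expiry are illustrative choices rather than details from the linked source:

create table dbo.SpResultCache
(
	CacheKey varchar(200) not null primary key,
	ExpiresAtUtc datetime2(0) not null,
	Payload nvarchar(max) not null
);
go

create procedure dbo.GetExpensiveResultCached
	@CustomerId int
as
begin
	set nocount on;

	declare @key varchar(200) = concat('GetExpensiveResult/', @CustomerId);

	/* Fast path: return a fresh cached result if one exists */
	select Payload
	from dbo.SpResultCache
	where CacheKey = @key
	and ExpiresAtUtc > sysutcdatetime();

	if @@rowcount > 0
		return;

	/* Cache miss: clear any expired entry for this key */
	delete from dbo.SpResultCache
	where CacheKey = @key
	and ExpiresAtUtc <= sysutcdatetime();

	/* Stand-in for the expensive work */
	declare @payload nvarchar(max) = (select @CustomerId as CustomerId for json path);

	begin try
		insert into dbo.SpResultCache (CacheKey, ExpiresAtUtc, Payload)
		values (@key, dateadd(minute, 5, sysutcdatetime()), @payload);
	end try
	begin catch
		/* A concurrent caller cached this key first - their result will do */
	end catch

	select @payload as Payload;
end

Under concurrent misses this sketch still lets several callers do the expensive work at once; serialising them (an application lock is one option) and sweeping out expired rows are exactly the kinds of details the full post deals with.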

[Image: Cache control]

If you want to see the full example, you can check the complete source on GitHub. Before deploying into production I strongly suggest reading through the entire post for caveats and tradeoffs.

[Read More]
Tags: SQL