Allowing only specified users to access Cloud Firestore

I’ve been building a few apps recently that leverage Cloud Firestore for data storage. These are personal apps and don’t store anything particularly sensitive, though that is no reason to leave them in the default development configuration that lets anyone read and write everything.

Although in many projects I’m the only user, there are a handful of others where a few people are using the app. A fairly flexible configuration approach that I use as my default is to allow access only if the user is in an ‘allow list’.

I’ll show the steps needed to do this below. The prerequisites are:

  • Cloud Firestore enabled for the project
  • Authentication configured for the project with at least one user authenticated
  • Every user you want to grant access will need to authenticate with the project, as we’re using their Firebase user UID, which is unique to each project
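To give a flavour of where this ends up, an allow-list rule set might look something like the sketch below (the UIDs are placeholders — substitute the real values from the Authentication console; this is one way to express it, not necessarily the post’s exact rules):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // Only authenticated users whose UID is in the allow list
      // may read or write anything. UIDs below are placeholders.
      allow read, write: if request.auth != null
        && request.auth.uid in ['UID_OF_USER_1', 'UID_OF_USER_2'];
    }
  }
}
```

Because the rules are evaluated server-side, adding a user is just a matter of appending their UID and republishing the rules.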
[Read More]

Keeping Application Insights Costs Under Control

Application Insights (now part of Azure Monitor) uses a pay-per-GB-ingested model, charging $2.30 per GB once you exceed the monthly free allowance of 5GB. It may surprise you (it certainly surprised me!) that by default an Application Insights resource doesn’t deploy with a daily cap of 0.161GB (5GB/month), but with a daily cap of 100GB!

Application Insights default cap

Left unchecked, each resource like this could end up costing you a cool $7,118.50 per month.
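The arithmetic behind that figure, using the rates above (a 31-day month, and deducting the 5GB monthly free allowance):

```python
# Worst-case monthly cost of the default 100GB/day cap,
# at $2.30/GB beyond the 5GB free monthly allowance.
daily_cap_gb = 100
ingested_gb = daily_cap_gb * 31      # 3100 GB over a 31-day month
billable_gb = ingested_gb - 5       # 3095 GB after the free allowance
cost = billable_gb * 2.30
print(f"${cost:,.2f}")              # → $7,118.50
```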

In order to vet your estate and bring it under control, the PowerShell script below will check every Application Insights resource you have deployed against a limit you set, and optionally reduce anything exceeding that limit to a more reasonable cap.

Lower that daily cap

In the above example I ran the script against a newly deployed resource, configured to reduce anything with a cap greater than 10GB down to 1GB.
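The core of such a script looks roughly like this — a sketch assuming the Az.ApplicationInsights module, where the cmdlet names and the `Cap` property are my assumptions rather than a copy of the script from the post:

```powershell
# Requires the Az.ApplicationInsights module (Install-Module Az)
$limitGB  = 10   # flag anything with a daily cap above this
$newCapGB = 1    # reduce offenders to this

foreach ($ai in Get-AzApplicationInsights) {
    # Read the current daily cap for this resource
    $cap = (Get-AzApplicationInsightsDailyCap `
        -ResourceGroupName $ai.ResourceGroupName -Name $ai.Name).Cap
    if ($cap -gt $limitGB) {
        Write-Host "$($ai.Name): cap ${cap}GB -> ${newCapGB}GB"
        # Lower the cap to the more reasonable value
        Set-AzApplicationInsightsDailyCap `
            -ResourceGroupName $ai.ResourceGroupName -Name $ai.Name `
            -DailyCapGB $newCapGB
    }
}
```

Run it once without the `Set-` call to see what would change before letting it loose on your estate.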

[Read More]

Should I Automate It?

Whenever you need to do something more than once, it’s often tempting to invest in the process - either by making it easier to repeat or by fully automating it.

This post isn’t about convincing you that it’s time to automate that thing (if you’re not already ‘automate by default’, go check out XKCD 1205; once you are fully on the automation train, come back here). This post is about giving you another tool to help you decide not to automate that thing.
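For context, the XKCD 1205 framing boils down to one comparison: time spent automating versus time the task will consume over some horizon. A minimal sketch of that arithmetic (the function names and five-year horizon are my illustration, not the calculator’s internals):

```python
def time_saved_minutes(minutes_per_run, runs_per_week, horizon_years=5):
    """Total minutes the manual task will consume over the horizon."""
    return minutes_per_run * runs_per_week * 52 * horizon_years

def worth_automating(minutes_per_run, runs_per_week, automation_hours):
    """True if automating costs less time than doing it by hand."""
    return automation_hours * 60 < time_saved_minutes(
        minutes_per_run, runs_per_week)

# A 5-minute task run daily adds up to ~151 hours over five years,
# so even a full 40-hour week spent automating it pays off.
print(worth_automating(5, 7, automation_hours=40))  # → True
```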

Automation Calculator

Download the Calculator to follow along!

[Read More]

Understanding space usage in Azure Monitor logs

Data ingested into Azure Monitor logs is billed per gigabyte. As a workspace will typically grow to have data coming from many different sources and solutions, it is helpful to have a set of queries that let you quickly drill into exactly where the GBs (or TBs!) of data you have stored come from.

I’ve found the queries below to be very helpful starting points for three main scenarios:

  • Regular monitoring (once/month) to see how data volumes are trending
  • Reacting to a monitoring alert based on overall ingestion volumes
  • Testing out a configuration change/new solution and observing the impact on data ingested

The latter is particularly important before rolling out a change to a workspace with long retention - you wouldn’t want (hypothetically :)) to accidentally ingest 100GB of IIS logs and then be forced to retain them for 2 years…
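To give a sense of the kind of query I mean, this one breaks billable volume down by data type using the workspace’s Usage meta-table (which records ingestion in MB):

```kusto
// Billable ingestion by data type over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024. by DataType
| sort by IngestedGB desc
```

Swapping the `by DataType` clause for another dimension (or narrowing the time window) adapts it to each of the three scenarios above.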

[Read More]