Nicholas Whittaker

A good start for October

2021-10-01 // #music

Somehow, we’re now into the final quarter of the year. Here are a few things that interest me, and may interest you!

Hacktoberfest is back for another year; hopefully the controls they introduced after the terrible spam-fest of 2020 will avoid creating too much work for maintainers. It’s nice to see that they’re making merge requests on GitLab eligible too, and highlighting financial contributions as a way to support open source software.

A friend of mine in the UK is running a Kickstarter campaign to sell enamel Pikachu pins, if pins are your thing.

Today is also Bandcamp Friday, where Bandcamp waives their service fee for sales. It’s an excellent opportunity to support smaller/local artists and independent labels.

Here are five albums I’ve been listening to recently.

I saw Nadje perform with Luke Howard when we weren’t in lockdown earlier this year. The world needs more gentle trumpet and flugel soloists; her sound is honey for my ears.

Brad’s a Melbourne-based funk keys player who livestreams a few times each week on Twitch. In fact, he’s live as I write this!

Hello Satellites was one of the performances I attended at Tempo Rubato in Brunswick before the most recent bout of lockdowns. First time I’ve seen a harp live too!

Jordan Rakei’s voice is magical, and his new album sounds incredible. I’ve been thoroughly enjoying it this last week.

I love him on Vulfpeck and I love him as a solo artist; Theo is a powerhouse with his voice and guitar. He’s got three great albums under his belt now, and this live album is a great introduction in my opinion.

Happy listening!

Backing up and restoring a self-hosted Plausible instance

2021-08-14 // #docker

I’ve been using Plausible Analytics on this website for a few months now, and I’m a fan.

Being able to self-host Plausible gives me ownership of the data it collects, but it also makes me responsible for storing this data and backing it up. I manage backups for my instance with several Ansible playbooks, but the same can be done with plain shell commands.

A self-hosted Plausible instance is run as a collection of Docker containers and volumes managed by Docker Compose. As a result, a full backup of each volume gives you a copy of all of the data you’ll need for a restore.
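
If you’re not sure what these volumes are called on your host, you can list them before taking a backup. Assuming your clone lives in a directory named hosting, Compose prefixes each volume name with hosting_:

# List the volumes created by the Compose project
docker volume ls --filter name=hosting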

So let’s dive in! To start, bring down your running Plausible instance.

cd hosting # your clone of github.com/plausible/hosting
docker-compose down

With Plausible stopped, you can take a copy of each Docker volume. I do this by mounting each volume into a plain Ubuntu container and running tar, writing the archive to a bind-mounted directory on the host.

mkdir --parents $HOME/backups/

docker run --rm \
    --mount "source=hosting_db-data,destination=/var/lib/postgresql/data,readonly" \
    --mount "type=bind,source=$HOME/backups,destination=/backups" \
    ubuntu \
    tar --gzip --create --file /backups/plausible-user-data.tar.gz --directory /var/lib/postgresql/data/ .

docker run --rm \
    --mount "source=hosting_event-data,destination=/var/lib/clickhouse,readonly" \
    --mount "type=bind,source=$HOME/backups,destination=/backups" \
    ubuntu \
    tar --gzip --create --file /backups/plausible-event-data.tar.gz --directory /var/lib/clickhouse/ .
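
Before moving on, it’s worth a quick sanity check that the archives actually contain your data. For example, you can list the first few entries of the Postgres archive:

# Peek inside the archive to confirm it isn't empty
tar --gzip --list --file $HOME/backups/plausible-user-data.tar.gz | head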

To restore a backup, you can remove the old volumes and extract your tarballs into new volumes.

docker volume rm hosting_db-data hosting_event-data

docker run --rm \
    --mount "source=hosting_db-data,destination=/var/lib/postgresql/data" \
    --mount "type=bind,source=$HOME/backups,destination=/backups,readonly" \
    ubuntu \
    tar --extract --file /backups/plausible-user-data.tar.gz --directory /var/lib/postgresql/data/

docker run --rm \
    --mount "source=hosting_event-data,destination=/var/lib/clickhouse" \
    --mount "type=bind,source=$HOME/backups,destination=/backups,readonly" \
    ubuntu \
    tar --extract --file /backups/plausible-event-data.tar.gz --directory /var/lib/clickhouse/

All that’s left after this is to restart the Plausible containers. You may also want to pull changes from Plausible’s hosting repo and the latest Docker images.

# Optionally, update your clone and pull latest images
git pull
docker-compose pull

docker-compose up --detach

From here, it’s up to you what you do with your backups. I’d suggest moving them to an external store, whether that’s your machine via rsync or a storage service with your provider of choice.
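
As a rough sketch, syncing the archives to another machine over SSH might look like this (backup.example.com is a placeholder for a host of your own):

# Copy the backup archives to a remote machine
rsync --archive --compress --progress $HOME/backups/ backup.example.com:plausible-backups/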

Happy coding!

Exporting iCloud reminders to Fastmail

2021-07-24 // #ios-shortcuts

If you’ve snooped the MX records for this site recently, you might have noticed that I’ve moved to Fastmail. In addition to email hosting, Fastmail also offers CalDAV accounts for users, so I’m trying it out for my calendar and reminders.
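
If you’d like to snoop them yourself, a quick DNS lookup shows where mail for the domain is routed:

# Look up the mail servers for the domain
dig +short MX nicholas.cloud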

While the Apple ecosystem supports CalDAV accounts, Apple doesn’t make it easy to export your reminders from iCloud into the account of your choice. A Reddit post points out that it’s possible to copy these reminders across with the Shortcuts app, though, so I decided to give that a try. Here’s a test run with a simple list of reminders.

Two lists of reminders, one labelled “Test” containing two items, and an empty one labelled “Test Fastmail”

There are a few catches to be aware of with this approach, since iCloud reminders have some exclusive (CalDAV-incompatible) features.

  • If the reminder has notes attached, they need to be included in the Notes of the “Create reminder” widget or they won’t be copied
  • Some details can’t be transferred, namely attached photos, due dates, and URLs

When selecting the reminders to migrate, you’ll want to filter out completed ones, since every reminder created in your new list will be marked as incomplete.

Interestingly, a confirmation prompt shows up when deleting reminders. If you’re deleting many at once (I had one list of about 70 reminders), you’ll have to OK it several times before the shortcut proceeds.

A confirmation prompt, reading “Remove 2 reminders? This is a permanent action. Are you sure you want to remove these items?”

After the shortcut finishes running, all of your reminders will have moved across and you’ll be good to go!

The same list of reminders as before, but the reminders in the “Test” list have moved to the “Test Fastmail” list

Here’s the shortcut I wrote, if you’d like to use it.

A close call with Nginx and the alias directive

2021-07-15 // #nginx

When Nginx is serving this website, it’s usually serving static files from the local machine. One method to accomplish this is with the alias directive, which replaces the matched location prefix in a request with a filesystem path. I use it to map requests to nicholas.cloud/files/ to a directory for public file-sharing, so a request for nicholas.cloud/files/photo.jpg is served from /home/nicholas/public-files/photo.jpg.

One catch with this setup is that you’ll get a 404 if you visit nicholas.cloud/files, as it lacks a trailing slash. Users often forget this slash, so many websites these days choose to handle it internally and show the right page anyway.

If this trailing slash is such an encumbrance, why not drop it from my own config? This way, if someone goes to nicholas.cloud/files, they’ll end up on the right path.

I figured it would be nice to have, so I made a small change to my Nginx config.

      # FILES
-     location /files/ {
+     location /files {
          alias /home/nicholas/public-files/;
          add_header "Cache-Control" "public, max-age=0, s-maxage=60";
      }

One reload later, and requests missing the trailing slash were successfully redirected! But while this change was convenient, I felt concerned at the time that it was a tad unsafe. I decided to experiment.

Poor path matching logic is a common (and lucrative) vulnerability for webservers like this. If an attacker can access parent directories above the target folder, all manner of sensitive files on the host can be exposed.

As it turns out, my change created exactly this opening.

A webpage reading “403 Forbidden”, the URL path shows a successful attempt to access parent directory contents

Lo and behold, Nginx was now trying to serve the TLS private key that sits in the root of my user directory. There are two things to note here.

At that moment, any path starting with /files would follow the alias directive. Nginx was accessing /home/nicholas/public-files/../nicholas.cloud.key, and finding my key.
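
For the curious, this is the classic alias traversal: any request path that begins with /files but keeps going escapes the aliased directory. A hypothetical reproduction:

# "/files../nicholas.cloud.key" matches the "/files" prefix, so Nginx appends
# "../nicholas.cloud.key" to the aliased path and reads outside public-files/
curl https://nicholas.cloud/files../nicholas.cloud.key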

Thankfully, the fix this time was only a quick revert away. If only it were always that easy. 😓

Next time, I think I’ll stick to writing a workaround rule rather than making such a reckless change. 😅
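
For reference, here’s one sketch of such a rule: leave the original /files/ block untouched, and add an exact-match location that redirects the slashless path.

# Redirect the bare path to its trailing-slash form, without touching the alias rule
location = /files {
    return 301 /files/;
}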


As a point of reflection, it’s worth noting that the Nginx documentation for alias specifically uses the term “replacement”.

Defines a replacement for the specified location.

If that isn’t a big caution sign, I don’t know what is!

Managing secrets in a Rush monorepo with Pass

2021-06-14

For a while now, I’ve used Rush to manage and deploy a number of Cloudflare Workers from a monorepo. It’s been great for me so far, offering incremental builds and support for alternative package managers like PNPM.

One thing Rush leaves to the discretion of maintainers is secrets management. Given that tooling and infrastructure can vary drastically between organisations and even individual projects, there’s nothing wrong with this decision. However, it has led to me implementing my own less-than-desirable setup.

  • Every project uses its own build.sh script to load secrets, build and deploy
  • Cloudflare-related credentials are read from a shared script
  • Workers that need third-party API tokens read them from their own .env file

This works, but it has a number of shortcomings. What if a worker needs to be deployed to a different Cloudflare zone (website) from every other worker? How do I manage/keep track of all these .env files?

I ended up looking to the pass password manager. I’ve found it convenient for my personal projects, as it leverages my existing GPG setup and makes it easy to store/retrieve secrets from the command line.
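
If you haven’t used pass before, storing and retrieving a secret is a one-liner each. A minimal sketch, using one of the entry names from the build script changes below:

# Store a secret under a named path (pass prompts for the value), then read it back
pass insert workers/cloudflare-api-token
pass show workers/cloudflare-api-token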

A few changes later, and now the build scripts for each project are explicit about what secrets they need! Here’s an abridged example.

- source ../../set-cloudflare-secrets.sh
+ export CF_ACCOUNT_ID=$(pass show workers/cloudflare-account-id)
+ export CF_API_TOKEN=$(pass show workers/cloudflare-api-token)
+ export CF_ZONE_ID=$(pass show workers/cloudflare-zone-id-nicholas.cloud)
- source .env
+ export MAILGUN_API_KEY=$(pass show workers/newsletter-subscription-form/mailgun-api-key)
+ export EMAIL_SIGNING_SECRET=$(pass show workers/newsletter-subscription-form/email-signing-secret)

I did find an interesting interaction between Rush and the GPG agent. Rush attempts to build projects in parallel where possible, and if too many processes are decrypting secrets at once the GPG agent will return a Cannot allocate memory error.

Thankfully this can be fixed by adding the --auto-expand-secmem option to the agent’s config. This allows gcrypt (used by GPG) to allocate secure memory as needed.

# ~/.gnupg/gpg-agent.conf
auto-expand-secmem

With the GPG agent restarted, I can now build many projects with secrets in parallel! It’s also good to have my secrets sitting safely outside source control, stored in a place where I can easily back them up.

Terminal output from a full monorepo rebuild, with ten projects rebuilt successfully in eleven seconds. The slowest project took eight seconds to build.

Using pass to fetch and decrypt secrets does admittedly add a few seconds to each build. Thankfully, Rush’s parallelism keeps the overall build comparatively fast. In my eyes, the tradeoff is worth it.
