Somehow, we’re now into the final quarter of the year. Here are a few things that interest me, and may interest you!
Hacktoberfest is back for another year; hopefully the controls they introduced after the terrible spam-fest of 2020 will avoid creating too much work for maintainers. It’s nice to see that they’re making merge requests on GitLab eligible too, and highlighting financial contributions as a way to support open source software.
Jordan Rakei’s voice is magical, and his new album sounds incredible. I’ve been thoroughly enjoying it this last week.
I love him on Vulfpeck and I love him as a solo artist: Theo is a powerhouse with his voice and guitar. He’s got three great albums under his belt now, and this live album is a great introduction in my opinion.
Being able to self-host Plausible gives me ownership of the data it collects, but it also makes me responsible for storing this data and backing it up. I manage backups for my instance with several Ansible playbooks, but the same can be done with plain shell commands.
A self-hosted Plausible instance is run as a collection of Docker containers and volumes managed by Docker Compose. As a result, a full backup of each volume gives you a copy of all of the data you’ll need for a restore.
So let’s dive in! To start, bring down your running Plausible instance.
cd hosting # your clone of github.com/plausible/hosting
docker-compose down
With Plausible stopped, you can take a copy of each Docker volume. I do this by mounting each volume to a plain Ubuntu container and running tar, writing to a mounted directory on the host.
docker run --rm \
  --volume hosting_db-data:/var/lib/postgresql/data \
  --volume "$(pwd)"/backups:/backups \
  ubuntu \
  tar --gzip --create --file /backups/plausible-user-data.tar.gz --directory /var/lib/postgresql/data/ .
docker run --rm \
  --volume hosting_event-data:/var/lib/clickhouse \
  --volume "$(pwd)"/backups:/backups \
  ubuntu \
  tar --gzip --create --file /backups/plausible-event-data.tar.gz --directory /var/lib/clickhouse/ .
To restore a backup, you can remove the old volumes and extract your tarballs into new volumes.
docker volume rm hosting_db-data hosting_event-data
docker run --rm \
  --volume hosting_db-data:/var/lib/postgresql/data \
  --volume "$(pwd)"/backups:/backups \
  ubuntu \
  tar --extract --file /backups/plausible-user-data.tar.gz --directory /var/lib/postgresql/data/
docker run --rm \
  --volume hosting_event-data:/var/lib/clickhouse \
  --volume "$(pwd)"/backups:/backups \
  ubuntu \
  tar --extract --file /backups/plausible-event-data.tar.gz --directory /var/lib/clickhouse/
All that’s left after this is to restart the Plausible containers. You may also want to pull changes from Plausible’s hosting repo and the latest Docker images.
# Optionally, update your clone and pull latest images
git pull
docker-compose pull
docker-compose up --detach
From here, it’s up to you what you do with your backups. I’d suggest moving them to an external store, whether that’s your machine via rsync or a storage service with your provider of choice.
If you’ve snooped the MX records for this site recently, you might have noticed that I’ve moved to Fastmail. In addition to email hosting, Fastmail also offers CalDAV accounts for users, so I’m trying it out for my calendar and reminders.
While the Apple ecosystem supports CalDAV accounts, they don’t make it easy for you to export your reminders from iCloud into your account of choice. A Reddit post points out that it is possible to copy these reminders across with the Shortcuts app though, so I decided to give that a try. Here’s a test run with a simple list of reminders.
There are a few catches with this approach to be aware of, since iCloud reminders have some exclusive (CalDAV-incompatible) features.
If the reminder has notes attached, they need to be included in the Notes of the “Create reminder” widget or they won’t be copied
Some details can’t be transferred, namely attached photos, due dates, and URLs
When selecting the reminders to migrate, you’ll want to filter out completed reminders. When the reminders are created in your new list, they’ll be marked as incomplete.
Interestingly, a confirmation prompt shows up when deleting reminders. If you’re deleting many at once (I had one list of ~70 reminders), you’ll have to OK it several times before the shortcut proceeds.
After the shortcut finishes running, all of your reminders will have moved across and you’ll be good to go!
Here’s the shortcut I wrote, if you’d like to use it.
When Nginx is serving this website, it’s usually serving static files from the local machine. One method to accomplish this is with the alias directive, which substitutes the request location for a filepath. I use it to map requests to nicholas.cloud/files/ to a directory for public file-sharing.
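As a sketch of how this looks (the filesystem path here is my assumption, not the real config):

```nginx
# Illustrative only: serve requests under /files/ from a directory on disk.
# alias substitutes the matched location prefix for the given path.
location /files/ {
    alias /home/nicholas/files/;
}
```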
One catch with this setup is that you’ll get a 404 if you visit nicholas.cloud/files, as it lacks the trailing slash. Users often forget this slash, so many websites these days choose to deal with it internally and show the right page.
If this trailing slash is such an encumbrance, why not drop it from my own config? This way, if someone goes to nicholas.cloud/files, they’ll end up on the right path.
I figured it would be nice to have, so I made a small change to my Nginx config.
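The change, sketched as a diff against a hypothetical location block (the directives are real, but the paths are assumed):

```diff
-location /files/ {
+location /files {
     alias /home/nicholas/files/;
 }
```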
One reload later, and requests missing the trailing slash were successfully redirected! But while this change was convenient, I felt concerned at the time that it was a tad unsafe. I decided to experiment.
Poor path matching logic is a common (and lucrative) vulnerability for webservers like this. If an attacker can access parent directories above the target folder, all manner of sensitive files on the host can be exposed.
As it turns out, my change created exactly this opening.
Lo and behold, Nginx was now trying to serve the TLS private key that sits in the root of my user directory. There are two things to note here.
It is not wise for these sensitive files to be sitting in such an exposed location.
For a while now, I’ve used Rush to manage and deploy a number of Cloudflare Workers from a monorepo. It’s been great for me so far, offering incremental builds and support for alternative package managers like PNPM.
One thing Rush leaves to the discretion of maintainers is secrets management. Given that tooling and infrastructure can vary drastically between organisations and even individual projects, there’s nothing wrong with this decision. However, it has led to me implementing my own less-than-desirable setup.
Every project has its own build.sh script to load secrets, build, and deploy
Cloudflare-related credentials are read from a shared script
Workers that need third-party API tokens read them from their own .env file
This works, but it has a number of shortcomings. What if a worker needs to be deployed to a different Cloudflare zone (website) from every other worker? How do I manage/keep track of all these .env files?
I ended up looking to the pass password manager. I’ve found it convenient for my personal projects, as it leverages my existing GPG setup and makes it easy to store/retrieve secrets from the command line.
A few changes later, and now the build scripts for each project are explicit about what secrets they need! Here’s an abridged example.
- source ../../set-cloudflare-secrets.sh
+ export CF_ACCOUNT_ID=$(pass show workers/cloudflare-account-id)
+ export CF_API_TOKEN=$(pass show workers/cloudflare-api-token)
+ export CF_ZONE_ID=$(pass show workers/cloudflare-zone-id-nicholas.cloud)
- source .env
+ export MAILGUN_API_KEY=$(pass show workers/newsletter-subscription-form/mailgun-api-key)
+ export EMAIL_SIGNING_SECRET=$(pass show workers/newsletter-subscription-form/email-signing-secret)
I did find an interesting interaction between Rush and the GPG agent. Rush attempts to build projects in parallel where possible, and if too many processes are decrypting secrets at once the GPG agent will return a “Cannot allocate memory” error.
Thankfully this can be fixed by adding the --auto-expand-secmem option to the agent’s config. This allows gcrypt (used by GPG) to allocate secure memory as needed.
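In the agent’s config file the leading dashes are dropped, so the fix is a single line (default GnuPG config path shown):

```
# ~/.gnupg/gpg-agent.conf
auto-expand-secmem
```

After editing, restart the agent (for example with gpgconf --kill gpg-agent) so the option takes effect.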
With the GPG agent restarted, I can now build many projects with secrets in parallel! It’s also good to have my secrets sitting safely outside source control, stored in a place where I can easily back them up.
Using pass to fetch and decrypt secrets does admittedly add a few seconds to each build. Thankfully, Rush’s parallelism keeps the overall build comparatively fast. In my eyes, the tradeoff is worth it.