reminder to nix-collect-garbage
🐰🐰🐰
Update on enbi (source) — it now monitors Pods with annotations describing which flake to build and what tag that build should produce. When it notices one failing to start because the image those annotations describe is missing, it creates a NixBuild matching the requirements, which in turn runs the build and loads the image into the cluster! Successful builds clean up after themselves, though I’m keeping the NixBuild objects around for now. Failing builds leave the Job/Pod in place for troubleshooting.
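To make that concrete: annotations along these lines on a Deployment’s pod template are all the controller needs, sketched here as a Nix attrset. The keys and values are made up for the example; check enbi’s source for the real names.

{
  spec.template.metadata.annotations = {
    # hypothetical annotation keys, for illustration only
    "enbi.example/flake" = "github:somebody/kv";       # flake URL to build
    "enbi.example/tag" = "registry.invalid/kv:0.4.2";  # image tag the build should produce
  };
}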
Updating the version of one of my apps that uses my standard pattern for building Docker images with Nix is now just a matter of changing the tag in one place (e.g.); the cluster takes care of building it and moving to the new release without downtime.
This has been a fun one-week sojourn into writing Kubernetes operators :) The API is pretty neat, controller-runtime feels clean, and it was enjoyable discovering how many assumptions I had to unlearn while negotiating where the controller was running, where its Jobs were to be scheduled, how to move data around, and the like.
Still pre-alpha, but tonight I got the first complete run of a little Kubernetes controller I’ve been wanting!

Wahoo, yippee, etc.! Right now we have a CRD which triggers a Nix build of a given flake URL, expected to produce a Docker or OCI image — it chooses a node which can build for the target system, spawns a Job which builds the target, and then imports it into the node’s container registry. We assume that something like Spegel is running, so any node that needs the image will pick it up.
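For the curious, a NixBuild object looks roughly like this, sketched as a Nix attrset; the group, version, and field names are illustrative rather than the controller’s actual schema.

{
  apiVersion = "enbi.example/v1alpha1";  # made-up group/version for the sketch
  kind = "NixBuild";
  metadata.name = "kv-0-4-2";
  spec = {
    flake = "github:somebody/kv";        # flake URL, expected to produce a Docker/OCI image
    tag = "registry.invalid/kv:0.4.2";   # image reference the build should yield
    system = "aarch64-linux";            # used to pick a node that can build for this target
  };
}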
The “hard” part (other than writing directly against the k8s API for the first time) was getting the Nix stuff to work well vis-à-vis building in a container while caching everything nicely — the flakes themselves, as well as whatever ends up in the store, since much of it will be reused between versions. Thankfully all the tooling is Cool As Fuck and it was actually really easy. We create a locally-provisioned PersistentVolume per node and stuff $HOME/.cache/nix and the Nix store in there. For now we use a chroot store, but I’d like to try an overlay store in future to avoid potentially duplicating whatever comes along in the nixos/nix image. Importing into the node’s container store is as simple as mounting the host /run and locating containerd’s socket — the socket location differs depending on your k8s distro, and I’m developing on kind while deploying to k3s.
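Here’s a sketch of the interesting parts of the pod spec for such a build Job, written as a Nix attrset. The flake reference, claim name, and cache paths are placeholders, and the containerd socket path shown is the k3s one, so adjust for your distro.

{
  volumes = [
    # Per-node, locally-provisioned PV: holds the chroot Nix store plus
    # $HOME/.cache/nix, so flakes and store paths are reused between builds.
    { name = "nix-cache"; persistentVolumeClaim.claimName = "nix-cache-node-a"; }
    # Host /run, so we can reach containerd's socket for the import step.
    { name = "host-run"; hostPath.path = "/run"; }
  ];
  containers = [{
    name = "build";
    image = "docker.io/nixos/nix";
    volumeMounts = [
      { name = "nix-cache"; mountPath = "/cache"; }
      { name = "host-run"; mountPath = "/host-run"; }
    ];
    command = [ "sh" "-c" ''
      set -eu
      export HOME=/cache   # so ~/.cache/nix lands on the persistent volume
      out=$(nix build --store /cache/store \
        --extra-experimental-features "nix-command flakes" \
        --print-out-paths "github:somebody/kv#dockerImage")
      # With a chroot store, the logical /nix/store path physically lives under
      # /cache/store; k3s keeps containerd's socket in /run/k3s/containerd/.
      ctr --address /host-run/k3s/containerd/containerd.sock \
        --namespace k8s.io images import "/cache/store$out"
    '' ];
  }];
}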
It still needs cleaning up in this state, and after that I plan to remove the CustomResourceDefinition and trigger builds automatically when needed, getting the source details from annotations on the Deployment. But I’m happy. I don’t particularly like manually executing builds, nor do I want to stand up a registry and pre-build everything. My cluster runs on two architectures, but whether any given revision of an application will actually ever run on either, both, or any(!) of those is a matter of the particular scheduling constraints for the application and the state of the cluster at any given moment. Rather than waste energy pre-building and storing, let’s build on-demand instead! 💛🤍💜🖤
John Goerzen’s Easily Using SSH with FIDO2/U2F Hardware Security Keys came up yesterday, and I thought it was a good time to fix my mess of private keys. I already own a YubiKey 5C Nano, which sits in my laptop at all times, as well as a 5C NFC, which I hoped I could use with both my phone (NFC) and tablet (USB-C) for SSH when needed.
The ideal was to drop all non-SK keys and move to using agent forwarding exclusively when authenticating between hosts — rarely needed, but nice for some remote-to-remote scps or git+ssh pushes. (Agent forwarding is traditionally frowned upon, since someone who has or gains access to your VPS can use your socket to get your agent to auth things, but that issue is greatly reduced when user presence is verified on each use, viz. requiring you to touch your key.)
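On the client side this is not much configuration. A minimal sketch, assuming home-manager’s programs.ssh module (a hand-written ~/.ssh/config works just as well) and a made-up host name:

{
  programs.ssh = {
    enable = true;
    matchBlocks = {
      # Forward the agent only to the box where remote-to-remote pushes happen;
      # every use of the forwarded key still requires a touch.
      "vps" = {
        forwardAgent = true;
        identityFile = "~/.ssh/id_ed25519_sk";  # non-resident key stub for the 5C Nano
      };
      "*".forwardAgent = false;
    };
  };
}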
Turns out it all pretty much was that easy! Just two minor hiccups:
Termius on iOS supports FIDO2 keys, no payment required (despite what some search results say; it looks like it may have been a paid-only feature during the beta, but isn’t any more). Non-resident Ed25519 keys work very well over NFC on iPhone, but not over USB-C on iPad. The only reference I can find is this from their “ideas” page:
Unfortunately, iPads and iPhones with USB-C cannot be compatible with OpenSSH-generate FIDO2-based keys. Please generate new FIDO2-based keys in the Termius app. These keys are supported in OpenSSH and all Termius apps.
Upon testing, Termius generates a non-resident ECDSA key, and that works just great. So, in the end, I have three private keys: an Ed25519 for the 5C Nano, and an Ed25519 and an ECDSA for the 5C NFC, for use over NFC and USB-C respectively.
The OpenSSH bundled in macOS (at time of writing, OpenSSH_9.9p2, LibreSSL 3.3.6) doesn’t support the use of these keys. I haven’t checked whether it’s non-resident SKs specifically or what, or whether it’s the version or just a matter of what support is compiled in.
NixOS/nix-darwin 25.05 carries OpenSSH_10.0p2 (OpenSSL 3.4.1, 11 Feb 2025), and it does!
Using agent forwarding without losing what’s left of one’s humanity implies getting your ssh-agent setup working nicely. How?
I looked into a few different ways, but opted for the simplest: patching OpenSSH (!?). The thought process is as follows:
- /System/Library/LaunchAgents/com.openssh.ssh-agent.plist will put an SSH_AUTH_SOCK in your environment, which launches the system-provided ssh-agent when first addressed.
- That launchd integration lives behind __APPLE_LAUNCHD__ in ssh-agent.c.
- We apply two patches:
  - port the __APPLE_LAUNCHD__ bits into the Nix-provided OpenSSH;
  - change SSH_AUTHSOCKET_ENV_NAME to "VYX_SSH_AUTH_SOCK".
- Finally, we install our own launchd user agent (modelled upon the system one, but with our binary), which puts the socket in the VYX_SSH_AUTH_SOCK env var instead. This means we don’t need to worry about the system launch agent; it’ll only get triggered/used when something calls the system ssh binary.
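Wired together with nix-darwin, the whole thing is only a handful of lines. A sketch: the patch file names and the agent label are my own inventions, and it assumes nix-darwin’s launchd module exposes the Sockets keys used by the system plist.

{ pkgs, ... }:
let
  # The first patch ports Apple's __APPLE_LAUNCHD__ support into the Nix
  # OpenSSH; the second renames SSH_AUTHSOCKET_ENV_NAME to "VYX_SSH_AUTH_SOCK".
  openssh-vyx = pkgs.openssh.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [
      ./patches/ssh-agent-apple-launchd.patch
      ./patches/vyx-auth-sock.patch
    ];
  });
in
{
  environment.systemPackages = [ openssh-vyx ];

  # Modelled on /System/Library/LaunchAgents/com.openssh.ssh-agent.plist, but
  # pointing at our binary and a differently named socket key. "-l" is the
  # launchd mode carried over by the ported patch.
  launchd.user.agents.vyx-ssh-agent.serviceConfig = {
    ProgramArguments = [ "${openssh-vyx}/bin/ssh-agent" "-l" ];
    Sockets.Listeners.SecureSocketWithKey = "VYX_SSH_AUTH_SOCK";
  };
}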
This is part 1 of x in a series.
I have spent most of my life avoiding DevOps-y type things. At GitHub I got familiar enough with kubectl to help debug the applications I had deployed on it, but that was almost a decade ago and I don’t remember a single bit of it.
Most of the things I run I deploy with a really simple systemd unit definition in the Nix module. Here’s an excerpt from the one for the Elixir app this blog ran on:
{
  systemd.services.kv = {
    description = "kv";
    enableStrictShellChecks = true;
    wantedBy = [ "multi-user.target" ];
    # Order after the migrations unit, and pull in Postgres + migrations.
    after = [ "kv-migrations.service" ];
    requires = [
      "postgresql.service"
      "kv-migrations.service"
    ];
    script = ''
      # StateDirectory below gives us /var/lib/kv as $STATE_DIRECTORY.
      export KV_STORAGE_ROOT="$STATE_DIRECTORY"
      ${envVarScript}
      ${cfg.package}/bin/kv-server
    '';
    serviceConfig = {
      User = cfg.user;
      # Basic systemd hardening.
      ProtectSystem = "strict";
      PrivateTmp = true;
      UMask = "0007";
      Restart = "on-failure";
      RestartSec = "10s";
      StateDirectory = "kv";
      StateDirectoryMode = "0750";
    };
    inherit environment;
  };
}
It’s very basic, and it worked beautifully! I love that, with NixOS, you can package a reproducible build (with all its dependencies), deployment strategy, and configuration schema all in one place. It’s so damn clean, and it works wonderfully for homelab- or personal services-scale systems. (For more, try Xe’s All Systems Go! talk, Writing your own NixOS modules for fun and (hopefully) profit.)
The downside is that this is not exactly a high-availability setup. When any of the dependencies of a service like this change — such as a new cfg.package, or a change in environment — the existing service is stopped, the unit is swapped out, and then the new one is started.
There can often be 10–30 seconds between the stop and start, depending on how much else the nixos-rebuild has to do. And while a failing build won’t leave you with a stopped service — you won’t even get that far — if the build succeeds, but the new service fails to come up for some reason, then you’ll be scrambling fast.
This being NixOS, getting your service back up is as easy as switching to the previous generation, and can be done very fast, but still, it’s not great. Realising this, and still very much wanting to use Nix as a build orchestrator in places where this isn’t an acceptable trade-off, it was time to learn a devops.
Structurally, Kubernetes seems relatively sound, giving us a language for defining the shape of a deployed system along many different axes. It is very YAML and it is very containers, neither of which I am the hugest fan of, but I felt pretty sure there would be tools to help with the former, and Nix my beloved has beautiful solutions for the latter.
If, like me before the start of this exercise, you don’t really know about the model Kubernetes gives you to work with, you might find David Ventura’s blog post A skeptic’s first contact with Kubernetes useful. If I had found it before, and not immediately after, coming this far, it would’ve been super helpful -_-
One thing worth mentioning is that, as a Very Nix Person (and Very Dissociated Person), I really need my infrastructure to be described in a version-controlled way. Ideally, I would be able to tie all of my infra back into the same place (which is vyx, a Nix flake).
So I decide to start up a cluster and begin experimenting. I hate Docker, Inc. with a passion — I will never forgive them for getting rid of Docker for Mac’s cute whale — plus I want to learn somewhere where I can actually deploy things, so I decide to start with k3s on my VPS. How I chose k3s to begin with, I’m not so sure — maybe because it has relatively few options exposed in its Nix module. Lightweight sounds good, and it’s a “certified Kubernetes distribution”. Whatever that means, it must be good!
NixOS has the option services.k3s.manifests, which is described as “auto-deploying manifests”. Perhaps this is the magic sauce I need to get my infrastructure as code!?
(The answer is, no, it isn’t — the entire cluster is restarted when you change its values, because NixOS. Teehee.)
Nonetheless, I struggled through writing some early manifests this way. Writing YAML in Nix is way better than writing YAML, and very easy to parameterise, extract functions, and so on. I had seen mention of Helm charts here and there, and while I felt like one day I would need to come to terms with them, I preferred to leave that until as late an opportunity as possible. As a bonus, using k3s auto-deploying manifests in this way meant I could write a NixOS module to deploy an application in Kubernetes, without a single line of raw YAML.
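For flavour, here’s roughly the shape this takes: a stripped-down sketch using the services.k3s.manifests option from above, where the names, image, and port are placeholders.

{
  services.k3s.manifests.kv.content = {
    apiVersion = "apps/v1";
    kind = "Deployment";
    metadata = {
      name = "kv";
      namespace = "default";
    };
    spec = {
      replicas = 2;
      selector.matchLabels.app = "kv";
      template = {
        metadata.labels.app = "kv";
        spec.containers = [{
          name = "kv";
          image = "registry.invalid/kv:0.4.2";
          ports = [{ containerPort = 4000; }];
        }];
      };
    };
  };
}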
So, terrible in many respects — now bringing down an entire cluster on each change instead of just the relevant services (!!!) — but an introduction nonetheless. We are now at the following point:
- Decided to turn that homelab server into a gaming PC instead, haha psych! The new plan: get better at cross-building things and at operating k3s without trying to shove everything through a NixOS module.
Part 2 will cover building our own software ready for orchestration (using Nix — we won’t write a single Dockerfile, promise, and as a little bit of a spoiler, we won’t write a single Go template either), and the unique fun presented by developing on aarch64-darwin while largely deploying to x86_64-linux. :)