My IRC client runs on Kubernetes
Trust me, there's a reason for this
A light-blue haired anime woman with a pixie cut in a white hoodie and short skirt drinking coffee in Seattle, space needle, smartphone, chat bubbles, blue halo - Black Forest Flux.1 [dev]
IRC has historically been one of the most important chat programs in my life. It's how I met my husband, how I found jobs in the early stages of my career, how I get help with weird Linux arcana that nobody else can really fix, and it's where I socialize with the Internet Illuminati. I use it every day, and the main reason I use tmux is to attach to that one session with my IRC client in it.
However, there's a problem with this setup: it's tied to one physical computer. If that physical computer dies, I lose easy access to all my IRC logs. My IRC logs folder is ridiculously large:
$ du -hs .weechat/logs
5.0G .weechat/logs
This is five gigabytes of text. This represents a huge fraction of my digital life and is some of the most important data to me. Not to mention my 188 kilobytes of configuration that I've built up over the years.
Point is, there's a lot of data here and I want to make sure that it's easy to access via a shell like I'm used to. I also want it to be a bit more redundant so that if one physical computer dies then it'll just be rescheduled to another machine and I'll be back up and chatting within minutes without human intervention.
Seeing as there aren't realistically many other options for this (and I already have a Kubernetes cluster), I decided to move my IRC client into a VM on top of Kubernetes.
What? Why?
After reading that last bit, I'm sure that some of you have questions like this:
You're using a container orchestrator for virtual machines? That seems a bit...unorthodox. Why not just put it into a container? What's wrong with that?
There are a couple of properties of Kubernetes that I'm going to be taking advantage of here:
- Kubernetes detects node failure and reschedules jobs to other machines as it happens.
- I already have Kubernetes set up (the best orchestrator is the one you already have).
- If I'm going to have an SSH daemon and tmux running in a container, I might as well just run a whole normal Linux distro so that I can use systemd for this instead of having to reinvent my own service management layer.
The big thing that makes all this work is my combination of Kubevirt and Longhorn. Kubevirt lets you schedule virtual machines onto Kubernetes and import cloud-friendly images into your cluster. Longhorn is what I use for replicated block/file storage in my Kubernetes cluster, which makes everything have at least three copies in my homelab. This combination allows me to have a virtual machine that automagically gets run somewhere in the homelab. I don't have to think about it. I don't have to care. It just works. It also backs up all of the data to Tigris, which makes me feel better about the data being intact if my house were to catch on fire.
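The nightly backup part is just Longhorn doing its thing. As a rough illustration (this is a sketch, not my exact config; the job name, schedule, and retention count are made up), a Longhorn RecurringJob that snapshots and backs up every volume in the default group looks something like this:

# Sketch of a nightly Longhorn backup job. Name, cron schedule, and
# retention are assumptions, not my real settings.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-backup
  namespace: longhorn-system
spec:
  task: backup        # take a snapshot and ship it to the backup target
  cron: "0 3 * * *"   # every night at 03:00
  groups:
    - default         # applies to volumes in the default group
  retain: 7           # keep a week of backups
  concurrency: 2

The backup target itself is a Longhorn setting rather than part of this object; for an S3-compatible store like Tigris it's an s3:// style URL plus a Secret holding the access keys and endpoint.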
The big move
For your convenience when reading, Kubernetes and Kubevirt terms are written in JavaClassNameCase. To be extra unambiguous the first time a term is used, the "owner" of the term will be written next to it, such as a "Kubevirt VirtualMachine" or a "Kubernetes Service".
To move my IRC client over to Kubernetes, I needed three objects:
- A Kubevirt DataVolume, which is a Kubernetes PersistentVolumeClaim that gets pre-seeded with the contents of a Linux distribution.
- A Kubevirt VirtualMachine, which is like a Kubernetes Deployment, but it uses a template that has a virtual machine hypervisor enabled.
- A Kubernetes Service to expose ports on that VirtualMachine with a stable IP address and DNS name so that I can SSH into it and another one of my bots can use my IRC client as a bouncer.
Each of those objects is defined in Xe/x/kube/alrest/vms/arona/arona.yaml. The most exciting one I want to highlight here is the Kubevirt DataVolume:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "arona"
  namespace: waifud
spec:
  storage:
    storageClassName: longhorn
    volumeMode: Block
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 64Gi
  source:
    http:
      url: "https://cloud-images.ubuntu.com/daily/server/noble/current/noble-server-cloudimg-amd64.img"
This is kinda exciting to me because I implemented this in waifud with the most fucked Rust code ever. My implementation also assumed that the contents of cloud image URLs didn't change (spoiler alert: they do, all the time), and overall I wasn't really happy with how it worked in practice. Kubevirt DataVolumes make this irrelevant and I am so happy that I can grab something off the shelf for this. It's worth having to use Kubernetes for this.
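For completeness, the other two objects are less interesting. They look roughly like this (a sketch with made-up sizing and a guessed relay port, not the exact contents of arona.yaml): the Kubevirt VirtualMachine boots off that DataVolume, and the Kubernetes Service selects the label that gets propagated to the launcher pod.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: arona
  namespace: waifud
spec:
  running: true
  template:
    metadata:
      labels:
        app: arona          # propagated to the launcher pod; the Service selects on it
    spec:
      domain:
        cpu:
          cores: 2          # made-up sizing
        resources:
          requests:
            memory: 4Gi     # made-up sizing
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
      volumes:
        - name: root
          dataVolume:
            name: arona     # the DataVolume from above
---
apiVersion: v1
kind: Service
metadata:
  name: arona
  namespace: waifud
spec:
  selector:
    app: arona
  ports:
    - name: ssh
      port: 22
      targetPort: 22
    - name: weechat-relay   # for the bot that uses this as a bouncer; port number is a guess
      port: 9001
      targetPort: 9001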
From there, all I needed was a hostname for the machine, to write some stuff in a shell script, to copy a systemd unit out of the Arch wiki, and then to copy over my giant .weechat folder.
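The systemd part is the standard tmux-as-a-user-service trick. Something in the spirit of this (the unit name, session name, and paths are my guesses at the shape, not a copy of what I actually deployed):

# ~/.config/systemd/user/weechat.service
[Unit]
Description=weechat inside a detached tmux session
After=network-online.target

[Service]
Type=forking
ExecStart=/usr/bin/tmux new-session -d -s irc /usr/bin/weechat
ExecStop=/usr/bin/tmux kill-session -t irc
Restart=on-failure

[Install]
WantedBy=default.target

Enable it with systemctl --user enable --now weechat.service and turn on lingering with loginctl enable-linger so it starts at boot without anyone logging in.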
I chose the hostname arona by opening a four-split of gacha game fandom wikis and clicking the "random page" button on each of them, then picked the one that sounded nicest. The other options were yanfei, hanabi, and changli, but arona just sounded better.
The results
I'm pretty happy with this so far! The VM automatically gets backed up every night, it's replicated between my machines, and to get into it, I just run ssh xe@arona.waifud.svc.alrest.xeserv.us and then get into weechat with the command chats.
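For the curious, chats isn't anything magical; it's a tiny helper in the spirit of this (a guess at the shape, not the actual script; the session name "irc" is my assumption):

#!/usr/bin/env bash
# chats: attach to the existing IRC tmux session, or create one
# running weechat if it doesn't exist yet.
exec tmux new-session -A -s irc weechat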