Announcing the glorious advent of XeDN
So I made a mistake with how the CDN for my website works. I use a CDN for all the static images on my blog, such as the conversation snippet images and the AI-generated "hero" images. This CDN is set up to be a caching layer on top of Backblaze B2, an object storage thing for the cloud.
There's only one major problem though: every time someone loads a page on the website, the CDN passes those asset requests straight through to B2. I thought the CDN was configured to cache things. Guess what it hasn't been doing.
This is roughly what I intended the flow to look like when I was designing this blog:
I wanted the flow to go from users to the CDN, and the CDN would reach into its cache to make things feel snappy. If an asset wasn't in the cache, the CDN would just reach out to B2, grab it, and store it in the cache. This lets normal user behavior automatically populate the cache, so every future visitor gets things more quickly.
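To make that concrete, here's a minimal sketch of that read-through flow using nothing but Go's standard library. This is not XeDN's actual code: the bucket URL is a placeholder, and it ignores eviction, content types, and cache headers.

```go
package main

import (
	"io"
	"net/http"
	"sync"
)

// Placeholder origin; a real setup would point at the actual B2 bucket.
const origin = "https://f001.backblazeb2.com/file/example-bucket"

var (
	mu    sync.RWMutex
	cache = map[string][]byte{} // request path -> asset bytes
)

func handler(w http.ResponseWriter, r *http.Request) {
	// First, try the cache.
	mu.RLock()
	data, ok := cache[r.URL.Path]
	mu.RUnlock()

	if !ok {
		// Cache miss: fetch the asset from the origin once...
		resp, err := http.Get(origin + r.URL.Path)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			http.Error(w, resp.Status, resp.StatusCode)
			return
		}

		data, err = io.ReadAll(resp.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}

		// ...then remember it so the next visitor skips the round trip to B2.
		mu.Lock()
		cache[r.URL.Path] = data
		mu.Unlock()
	}

	w.Write(data)
}

func main() {
	http.ListenAndServe(":8080", http.HandlerFunc(handler))
}
```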
However, because of things that I don't completely understand, when I moved from christine.website to xeiaso.net, something got messed up in one of the page rules and my CDN domain went from almost always caching everything to never caching anything. This is not good.
So yes, apparently I have my own CDN service now: XeDN. The overall architecture of how it fits into everything looks something like this:
XeDN is built on top of Go's standard library HTTP server and a few other libraries:
- groupcache for in-RAM Least Recently Used (LRU) caching
- ln (the natural log function) for the logging stack
- tsnet to allow me to access the debug routes more securely over Tailscale
- xff to parse X-Forwarded-For headers for me
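Here's a rough sketch of how those pieces could be wired together, assuming the mainline APIs of each library. This is not XeDN's actual source: the group name, cache size, bucket URL, hostname, and ports are all invented for illustration.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	_ "net/http/pprof" // registers debug handlers on the default mux

	"github.com/golang/groupcache"
	"github.com/sebest/xff"
	"tailscale.com/tsnet"
	"within.website/ln"
)

// Placeholder origin; a real setup would point at the actual B2 bucket.
const origin = "https://f001.backblazeb2.com/file/example-bucket"

func main() {
	ctx := context.Background()

	// groupcache keeps hot assets in RAM; the getter only runs on a miss.
	assets := groupcache.NewGroup("xedn-assets", 256<<20, groupcache.GetterFunc(
		func(ctx context.Context, key string, dest groupcache.Sink) error {
			resp, err := http.Get(origin + key)
			if err != nil {
				return err
			}
			defer resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				return fmt.Errorf("origin: unexpected status %s", resp.Status)
			}
			data, err := io.ReadAll(resp.Body)
			if err != nil {
				return err
			}
			return dest.SetBytes(data)
		}))

	// Public handler: serve from the cache, filling it via the getter above.
	public := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var data []byte
		if err := assets.Get(r.Context(), r.URL.Path, groupcache.AllocatingByteSliceSink(&data)); err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		w.Write(data)
	})

	// xff rewrites r.RemoteAddr from X-Forwarded-For so logs show real clients.
	xffmw, err := xff.Default()
	if err != nil {
		ln.FatalErr(ctx, err)
	}

	// Debug routes (pprof and friends) only reachable over the tailnet via tsnet.
	go func() {
		srv := &tsnet.Server{Hostname: "xedn-debug"}
		lis, err := srv.Listen("tcp", ":80")
		if err != nil {
			ln.FatalErr(ctx, err)
		}
		ln.FatalErr(ctx, http.Serve(lis, http.DefaultServeMux))
	}()

	ln.Log(ctx, ln.F{"msg": "xedn sketch listening", "port": 8080})
	ln.FatalErr(ctx, http.ListenAndServe(":8080", xffmw.Handler(public)))
}
```

The nice property of building around groupcache here is that the origin fetch only happens on a cache miss, so anything that's already hot in RAM never touches B2 at all.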
This allows me to have a caching CDN service in less than 250 lines of Go. I run XeDN on top of fly.io in multiple regions, so it's one of the first things I've made for this blog that is actually a redundant service geo-replicated across multiple datacentres. It's pretty nice.
I switched over my CDN to use XeDN yesterday and nobody noticed at first. The only reason people noticed at all is because I tweeted about it. Either way, things should be very fast now. This should scale to meet my CDN needs a lot better than the previous setup and everything should be a lot more streamlined in the future.
