“a victory that is not worth winning because the winner has lost so much in winning it”
It must be said that emotions on a self hosting journey follow a sine wave pattern: “it’s so over” for the dips, “we are so back” for the highs. I’ve had several problems that I was able to solve (the high), and then, just as everything felt done, encountered new ones that derailed things entirely (the low). The title of this blog post is, I think, the perfect way of describing how my weekend went trying to fix my local cluster once and for all.
Things just mostly started working again?
As I wrote about yesterday, the plan was to get home and start troubleshooting the issue with the one NUC. The drive home was absolutely horrible in terms of weather, though certainly better than ice and cold. I was expecting to encounter some kind of issue that would be new to me and would therefore need an evening of research. At a bare minimum I was expecting some form of operating system corruption, and hopefully no hardware failures beyond, say, storage or memory. Once I got home it was a case of unplugging things so that I could plug them into the NUC, a further reminder that I need to invest in a KVM.
Day Two Firefighting, Almost Literally
Christmas came and went and I spent a good chunk of mine working on my self hosting hobby. Of course I took time to recharge and enjoy the holiday period. Honestly I think it was one of my most relaxing Christmases ever; having the ability to drive off to wherever I wanted certainly helped, even though I didn’t seize that particular opportunity. I’m never one to just sit down and be idle; even with a TV on I’d find myself wanting to be doing something else. In my last blog post I had laid the foundations for my local Kubernetes cluster, and I was eager to do some building on top of those foundations.
Strap in because we’re going for it
Earlier in the summer I decided I needed to repave my local infrastructure to move away from Docker Compose and towards Kubernetes, as part of revitalising my self hosting hobby. I worked on creating a remote Kubernetes cluster, initially trying to use my local compute for Nodes on this cluster, which failed. I then elected to simply pay for a managed Kubernetes cluster for a few months to at least experiment with how I would operate such a cluster.
One of the simplest ways to onboard container workloads to Tailscale
Embracing Tailscale in my infrastructure has solved a myriad of problems for me. Having a way of remotely accessing my infrastructure and services is obviously the main one. I think the technical challenge of learning how to securely expose such things from home would be interesting, but for me it carries too much risk of something going very wrong if done poorly. Those connections all being secured over WireGuard under the hood is also a source of comfort, considering how tried and tested that solution is. There are some dealbreakers, naturally. Choosing a somewhat proprietary product (I believe the control plane is the proprietary bit) as the backbone of one’s networking layer, for accessing software that’s freely available to download, has a certain irony. I also believe that in the event of total internet loss at home, I wouldn’t be able to access any of the services on devices literal centimetres away from me. For now though, I’m happy to tolerate these things. Tailscale is of such a quality that I would throw my money at them with very little hesitation if I weren’t on the free tier anymore, and I always think making these commitments on foundational services helps to reduce anxiety and the often overwhelming, ice cream parlour feeling of choosing which tools to use.
We were on the verge of greatness, we were this close
Reinvigorated after initial failings with LXD, I looked to creating a Scaleway Kosmos Kubernetes cluster. The setup would allow me to have a managed control plane while providing my own compute for the workers. It seemed like the best of both worlds, but unfortunately I encountered an issue that, while it might be fine for some to push on past, would prove to be a dead end for me.
No plan survives first contact with the enemy
Previously I wrote about how I planned to uplift my self hosting setup to a new standard, implementing new tooling and strategies to enable me to enjoy more of my hobby. That effort is still ongoing; however, I’ve felt the need to change strategies on the fly. Indeed, I wrote about how I wanted to uplift my infrastructure layer to use LXD as a hypervisor along with LXC, to enable a transition away from Docker and Docker Compose. I am still working on that transition, but now the final destination is my first tech love, Kubernetes.
My new plans for my hobby for this year
I know that I have written about this previously on this blog, but I think everybody goes through their own cycles of highs and lows. Personally, there have been a lot more lows as of late. For most of this year, things have not felt amazing, primarily due to events at work. When you have a hobby that is closely related to your work, things can suffer when that compartmentalisation starts to fail. Other things going on in my life meant that, generally, things felt quite sad.
I have seen the light
Test Driven Development is a topic I have been petrified of for quite some time. It is something that I knew was praised as a development strategy, rightly so as many would argue. I struggled for a long time to even conceptually understand how TDD worked. The slightly more naive version of my brain simply could not cope with the idea of trying to test something before that something even existed. Put plainly, how could I assert 2 + 2 = 4 when I did not even have the code to add 2 and 2 together in the first place?
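To make the 2 + 2 question concrete, here is a minimal sketch of the red–green loop in Python (the `add` function and `test_add` names are my own illustration, not from any particular post): you write the failing assertion first, and its failure tells you exactly what to build next.

```python
# Step 1 (red): write the test before the implementation exists.
# If you ran test_add() at this point, it would raise a NameError,
# because add() has not been written yet. That failure is the point:
# it specifies the behaviour the code must provide.
def test_add():
    assert add(2, 2) == 4

# Step 2 (green): write the simplest code that makes the test pass.
def add(a, b):
    return a + b

# Step 3: run the test again; it now passes.
test_add()
```

The trick that unlocked it for me conceptually is that the test is not checking code that exists, it is describing code that should exist.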