All hail the Tailscale operator

November 8, 2024


One of the simplest ways to onboard container workloads to Tailscale

Embracing Tailscale in my infrastructure has solved myriad problems for me. Having a way of remotely accessing my infrastructure and services is obviously the main one. The technical challenge of learning how to securely expose such things from home would be interesting, but for me it carries too much risk of something going very wrong if done poorly. Those connections all being secured over WireGuard under the hood is also a source of comfort, considering how tried and tested that solution is. There are some drawbacks, naturally. Choosing a somewhat proprietary product (I believe the control plane is the proprietary bit) as the backbone of one's networking layer, for accessing software that's freely available to download, has a certain irony. I also believe that in the event of total internet loss at home, I wouldn't be able to access services on devices literal centimetres away from me. For now, though, I'm happy to tolerate these things. Tailscale is of such quality that I would throw my money at them with very little hesitation if I weren't on the free tier anymore, and I've always thought that committing to foundational services helps reduce anxiety and the oft-overwhelming, ice-cream-parlour feeling of picking which tools to use.

When I began moving my services to Kubernetes a few months ago, access of course became an issue again. Unlike running a bunch of containers with Docker Compose and Traefik as a reverse proxy, I couldn't just rely on the Tailscale install on the underlying machine. Initially I was thinking of running a Tailscale sidecar container of some description alongside every deployment, and I believe that is a supported pattern today. Research, however, led me to an incredibly powerful (albeit beta) solution: the Kubernetes operator that Tailscale have developed.

Deploying the operator was very straightforward. I chose the static manifest approach over Helm, as I don't see any need to rely on what is effectively a templating engine. If I'm trying to recreate the state of a cluster, I just want manifest files that I know will give me my original state, rather than wondering what the 'magical box of Helm' will spit out this time around. Once you provide the operator with a set of OAuth credentials, you're virtually good to go. The operator applies tags to any networking objects it creates, so if you're using an ACL, perhaps in a test-driven manner like me, it will also need updating to grant the right level of access to the objects created on your cluster.
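To make the credentials step concrete, here's a minimal sketch. The static manifest ships with a placeholder Secret that you fill in with an OAuth client's details before applying; the `operator-oauth` name and `tailscale` namespace below are the manifest's defaults at the time of writing, so verify them against your copy:

```yaml
# Sketch of the OAuth credentials Secret the static manifest expects.
# Fill these in before running `kubectl apply -f operator.yaml`.
apiVersion: v1
kind: Secret
metadata:
  name: operator-oauth
  namespace: tailscale
stringData:
  client_id: "<oauth-client-id>"         # placeholder values
  client_secret: "<oauth-client-secret>"
```

On the ACL side, the defaults (unless you customise the tags) are that the operator joins the tailnet as `tag:k8s-operator` and the devices it creates join as `tag:k8s`, so your `tagOwners` and access rules need to account for both.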

Once I was up and running with the operator, being able to connect Service and Ingress objects to my tailnet with just a few annotations was an amazing feeling. This also prompted a move towards using MagicDNS to give all my services resolvable DNS names. I appreciated too that the Ingress side of things doesn't create a bunch of load balancers at the cloud provider. I suppose it doesn't need to, but having one less cost to consider was very welcome.
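For a sense of how little is involved, here's a rough sketch of both approaches. The resource names are hypothetical, but the `tailscale.com/expose` and `tailscale.com/hostname` annotations and the `tailscale` ingress class are the hooks the operator documents:

```yaml
# Hypothetical Service exposed to the tailnet by annotation.
apiVersion: v1
kind: Service
metadata:
  name: whoami                         # example name, not from this post
  annotations:
    tailscale.com/expose: "true"       # operator creates a proxy for this Service
    tailscale.com/hostname: "whoami"   # MagicDNS name: whoami.<tailnet>.ts.net
spec:
  selector:
    app: whoami
  ports:
    - port: 80
---
# Hypothetical Ingress served over the tailnet, no cloud load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: tailscale          # hand the Ingress to the operator
  defaultBackend:
    service:
      name: whoami
      port:
        number: 80
  tls:
    - hosts:
        - whoami                       # hostname to serve on the tailnet
```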

There are some downsides. Initially I struggled with the Ingress side of things; I think I was configuring default backends wrong, which led to no connectivity. Since the operator is beta, I believe it uses the unstable Tailscale image, and I think this led to several issues with network uptime. Predominantly I would experience moments of connection timeouts to all Kubernetes-based networking objects. Downloading a fresh copy of the manifest and reapplying it seemed to help; at least it looked like major changes had occurred, judging by the size of the diff between the two files in git. Which leads me on to what I think is a pretty big issue: there doesn't currently seem to be any way to direct the operator to update itself or the Pods it creates. The only times I've managed to perform updates are when the underlying cluster itself gets upgraded and Pods get rescheduled, or by killing the Pods directly or running a rollout restart. Both, I'm presuming, rely on pulling a fresh image from the registry each time, and that's how the update happens. This might not be an easy problem to solve, but even if the documented approach to updating were simply 'restart the operator', that would be helpful to know.
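For anyone wanting the same workaround, the restart is just the usual kubectl routine. The `tailscale` namespace and `operator` Deployment name are what the static manifest uses by default, so adjust if yours differ:

```sh
# Restart the operator Deployment; rescheduled Pods re-pull the image
# (whether that yields a newer version depends on the tag and pull policy).
kubectl -n tailscale rollout restart deployment/operator

# Or delete an individual proxy Pod; its StatefulSet recreates it.
kubectl -n tailscale delete pod <proxy-pod-name>
```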

All in all, I'm extremely happy with the operator as a solution for Kubernetes-based networking. I'm looking forward to seeing it go GA, and hopefully to seeing fewer of the issues I mentioned above. It has some additional features, such as proxying access to the Kubernetes API server, plus others I've been meaning to read into. But even the bare minimum of enabling connectivity to my tailnet has been extremely easy to use, much like the Tailscale experience in general.

Thank you!

You could have consumed content on any website, but you went ahead and consumed mine, so I'm very grateful! If you liked this, then you might like this other piece I worked on.

Writing up the ACL for Tailscale with a TDD mindset

Photographer

I've no real claim to fame when it comes to good photos, which is why the header photo for this post was shot by Thomas Jensen. You can find more photos from them on Unsplash. Unsplash is a great place to source photos for your website, presentations and more! But it wouldn't be anything without the photographers who put in the work.


