Talos for my Homelab
As promised, here is another blog post from me. I like to experiment with my homelab, and not long ago I decided to switch from K3s to Talos on my Turing Pi 2.5 with 4 Raspberry Pis. In this post I want to share what Talos is, why I made the switch, and how you can get started with it yourself.
You can find my full homelab setup on GitHub: homelab-turing.
What is Talos?
Talos Linux is a minimal, immutable operating system purpose-built for running Kubernetes. There is no shell, no SSH, no package manager. The entire OS is managed through a declarative YAML API. If that sounds extreme, it kind of is, but in the best way possible. The idea is that your nodes are cattle, not pets. You don’t log into them, you don’t tweak things by hand. You define the desired state in YAML and Talos takes care of the rest.
This also means the attack surface is tiny. There is literally nothing on the OS that isn’t needed to run Kubernetes. No systemd, no bash, no extra services. Just the kernel, containerd, and the Kubernetes components.
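To give a feel for what this API-driven workflow looks like in practice, here is roughly how you inspect a node without SSH. The node IP is a placeholder for illustration:

```shell
# List the services Talos runs on a node (no SSH involved,
# everything goes over the authenticated Talos API)
talosctl --nodes 192.168.1.101 services

# Tail the kubelet logs on that node
talosctl --nodes 192.168.1.101 logs kubelet

# Live resource dashboard for one or more nodes
talosctl --nodes 192.168.1.101 dashboard
```

Everything you would normally SSH in for (logs, service status, resource usage) goes through talosctl instead.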
Why I switched from K3s
I had been running K3s on my Turing Pi 2.5 for a while and it worked fine. K3s is great for getting a lightweight cluster up and running fast. But over time I noticed a few things about how I was using it.
First, I never actually SSH’d into my Raspberry Pis. Not once for any real maintenance. Everything I did went through GitOps with ArgoCD. So having a full Linux OS with SSH access running on each node felt unnecessary.
Second, I really like the idea of managing my cluster configuration as YAML. With Talos, the entire node configuration, from networking to Kubernetes settings, is YAML. That fits perfectly with how I already work: infrastructure as code all the way down to the OS level.
Third, Talos supports both Raspberry Pis and Turing RK1 compute modules. I don’t have RK1s yet, but they are on my wishlist. Knowing that I can drop them into my Turing Pi board and have Talos just work on them is a nice bonus for the future.
Setting up a Talos cluster on a Turing Pi 2.5
Let me walk you through what a basic setup looks like. The Turing Pi 2.5 has 4 slots. In my case I have a Raspberry Pi in each slot: one control plane node and three workers.
First, you need to generate the Talos configuration. The talosctl CLI makes this straightforward:
# Generate cluster configuration
talosctl gen config my-homelab https://<control-plane-ip>:6443
# This creates:
# - controlplane.yaml
# - worker.yaml
# - talosconfig
This gives you the base configuration files. The controlplane.yaml and worker.yaml are the machine configs for your nodes. You will want to customize these for your specific setup: things like static IPs, the cluster name, and any extra configuration you need.
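As an example of that customization, static IPs can be set with a machine config patch. The hostname, interface name, and addresses below are hypothetical placeholders, not values from my setup:

```yaml
# patch.yaml - example network patch (hypothetical addresses)
machine:
  network:
    hostname: node1
    interfaces:
      - interface: eth0
        dhcp: false
        addresses:
          - 192.168.1.101/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
```

You can feed a patch like this in at generation time with `talosctl gen config --config-patch @patch.yaml`, or apply it to a live node later with `talosctl patch machineconfig`.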
To apply the config to a node:
# Apply the control plane config to your first node
talosctl apply-config --insecure --nodes <node-ip> --file controlplane.yaml
# Apply worker configs to the remaining nodes
talosctl apply-config --insecure --nodes <worker-ip> --file worker.yaml
Once the control plane node is up, you can bootstrap the cluster:
# Bootstrap etcd on the control plane
talosctl bootstrap --nodes <control-plane-ip> --endpoints <control-plane-ip>
# Grab the kubeconfig
talosctl kubeconfig --nodes <control-plane-ip> --endpoints <control-plane-ip>
That is it. You now have a running Kubernetes cluster on your Turing Pi. No SSH, no manual package installs, no fiddling with systemd services. Just YAML and a couple of commands.
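Before deploying anything, it is worth checking that everything came up cleanly. Something along these lines should do it (the IP is a placeholder):

```shell
# Run Talos' built-in health checks against the cluster
talosctl health --nodes 192.168.1.101 --endpoints 192.168.1.101

# Confirm all four nodes registered with Kubernetes
kubectl get nodes -o wide
```

The health command waits for etcd, the control plane components, and all nodes to report ready, so it is a nice one-shot sanity check.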
Storage with a host path
For general storage I use Longhorn across the cluster, but I also have an SSD connected to node4 via USB that I use specifically for my Jellyfin home media setup. Media files are large and I don’t need them replicated across nodes, so a simple host path on a dedicated SSD makes more sense for that use case.
In Talos you need to configure the machine config to mount the USB disk so it is available to your pods. Here is what that looks like:
machine:
  disks:
    - device: /dev/sda
      partitions:
        - mountpoint: /var/mnt/storage
And then in Kubernetes, you can create a PersistentVolume that points to it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/mnt/storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node4
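To actually use that PersistentVolume, a workload claims it through a PVC. A minimal sketch; the claim name and size are my assumptions, not values from my setup:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media
spec:
  accessModes:
    - ReadWriteOnce
  # Empty storageClassName so the claim binds to the
  # pre-created PV instead of triggering dynamic provisioning
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
```

Once the claim is bound, any pod mounting it gets scheduled onto node4 automatically, because the scheduler respects the PV's nodeAffinity.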
It is not the fanciest storage solution, but it works well for a homelab. Down the road I might look into a proper NAS, but with hardware prices going up I might have to hold off on that. For now, a simple SSD on a single node gets the job done.
Wrapping up
Switching to Talos has been a fun experience. It forced me to think about my cluster setup in a more declarative way, and it removed a bunch of stuff I wasn’t using anyway. If you have a homelab and find yourself wondering what to run on it, Talos might be worth a look.
Check out my homelab repository if you want to see the full setup. Feel free to reach out if you have any questions about running Talos on a Turing Pi.