With tools like kubeadm and access to cheap hardware like Raspberry Pis, it’s pretty simple to get a Kubernetes cluster up and running. Eventually, though, you may want something more powerful than a Raspberry Pi, or you may want to run closed-source software that is only offered for x86/AMD64. Either case is a good opportunity to add a second architecture to your cluster, which is more straightforward than you might expect.
The hardware I used was an Intel NUC7CJYH1 with 8GB of RAM and a 120GB SSD from some random brand I’d never heard of, since storage was not a large concern. The NUC, RAM, and SSD cost me around $240 in total and gave me a good starting point for running non-ARM containers; if I need more compute, I can simply scale out.
As far as the OS goes, I installed Ubuntu Server 18.04 LTS. The reason I went with Ubuntu Server over something like CoreOS, Rancher, or a similar Linux distro designed to specialize in containers is simply that I was having trouble with Secure Boot, which this hardware required: it didn’t allow using legacy boot to install an OS, and from my quick research, the distros I looked at (CoreOS and Rancher) don’t currently support booting via UEFI.
Finally, this post is based on having an existing cluster that is all ARM and adding a node that is AMD64. In theory most, if not all, of the steps will be the same no matter what architecture you add, but keep that in mind. This post also will not go over the initial creation of the cluster, as there are plenty of posts about that already.
Joining the Cluster
Once the OS is installed, updates have been applied, swap has been disabled (don’t forget to delete the swap entry from /etc/fstab), and the correct versions of kubeadm and kubelet are installed, it’s time to join the cluster. The node won’t work immediately, but we’ll fix that in a minute. Simply use the join token, which you can get from the master, to run kubeadm like this:
sudo kubeadm join kube-controller-01:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
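As a side note, the fstab cleanup mentioned above can be done with sed. A minimal sketch, shown here against a sample file rather than the real /etc/fstab (the sample entries are made up; on the actual node you’d run `sudo swapoff -a` and then the same sed against /etc/fstab):

```shell
# Sample fstab contents, standing in for /etc/fstab on the node
printf '%s\n' \
  'UUID=1234-abcd / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' > fstab.sample

# Comment out any swap entries so swap stays off after a reboot;
# on the real node: sudo swapoff -a && sudo sed -i '...' /etc/fstab
sed -i '/\sswap\s/ s/^/#/' fstab.sample
cat fstab.sample
```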
At this point you should be able to run
kubectl get nodes
and see your new Kubernetes node showing up. Now you just need to set up two daemon sets in the kube-system namespace and your cluster will be ready to go.
First is networking. In my case I’m using flannel, which supports both ARM and AMD64. If you look at the existing daemon set, you’ll notice that it has a node selector requiring the label beta.kubernetes.io/arch to be arm. We need to replicate this for AMD64, and since flannel’s provided yaml is how most people deploy it, we can just deploy that yaml (assuming that’s how you did it for ARM, swapping out anywhere that said “amd” with “arm”), removing everything except the daemon set, as the rest already exists in the cluster. Once that’s deployed you should have flannel networking on all pods. Below is a diff of my AMD64 and ARM daemon sets as an example of the changes that need to be made.
@@ -2,7 +2,7 @@
 apiVersion: extensions/v1beta1
 kind: DaemonSet
 metadata:
-  name: kube-flannel-ds
+  name: kube-flannel-ds-amd64
   namespace: kube-system
   labels:
     tier: node
@@ -16,7 +16,7 @@ spec:
       hostNetwork: true
       nodeSelector:
-        beta.kubernetes.io/arch: arm
+        beta.kubernetes.io/arch: amd64
       tolerations:
       - key: node-role.kubernetes.io/master
         operator: Exists
@@ -24,7 +24,7 @@
       serviceAccountName: flannel
       initContainers:
       - name: install-cni
-        image: quay.io/coreos/flannel:v0.10.0-arm
+        image: quay.io/coreos/flannel:v0.10.0
        command:
        - cp
        args:
@@ -38,7 +38,7 @@
           mountPath: /etc/kube-flannel/
       containers:
       - name: kube-flannel
-        image: quay.io/coreos/flannel:v0.10.0-arm
+        image: quay.io/coreos/flannel:v0.10.0
        command:
        - /opt/bin/flanneld
        args:
Next, the existing kube-proxy daemon set needs to be updated; by default there’s no node selector limiting it to a specific architecture. You can do this via
kubectl edit
by modifying the daemon set to include a node selector for the ARM architecture.
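The relevant fragment ends up looking something like this (only the pod template’s selector is shown; everything else in the daemon set stays untouched):

```yaml
# Fragment of the existing kube-proxy DaemonSet after editing:
# pin it to the ARM nodes it was already serving
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm
```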
Finally, a new daemon set needs to be set up for kube-proxy to allow network traffic to be forwarded from one node to another. Just like with flannel, simply take the existing ARM object, update it to use the AMD64 image instead of the ARM one, and lock it to the new node with the same kind of node selector, this time requiring beta.kubernetes.io/arch to be amd64. That’s it. Once you have those 4 daemon sets (2x flannel and 2x kube-proxy), your cluster should be ready to go.
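As a sketch, the copied kube-proxy daemon set differs from the ARM one in roughly these fields (the name and image tag below are assumptions; match the image version to your cluster’s Kubernetes version):

```yaml
# Hypothetical second kube-proxy DaemonSet for the AMD64 node;
# only the fields that differ from the existing ARM one are shown
metadata:
  name: kube-proxy-amd64
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      containers:
      - name: kube-proxy
        # swap the ARM image for the amd64 one (tag is an example)
        image: k8s.gcr.io/kube-proxy-amd64:v1.11.0
```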
While it’s easy to set this up, there are some things to keep in mind, the main one being that the Kubernetes cluster is now a bit more complex. Because images are architecture-specific, existing deployments should be updated to have a node selector, and new deployments will want one as well, so you don’t get ARM images trying to be scheduled on an AMD64 node. For most people this added complexity isn’t worth it, but if you want to try your hand at it, working with a multi-arch Kubernetes cluster isn’t too hard, and certainly isn’t as daunting as it might seem when you first think about it.
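For example, a deployment for an ARM-only image might carry a node selector like this (the names and image here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-arm-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-arm-app
  template:
    metadata:
      labels:
        app: example-arm-app
    spec:
      # Keep this ARM-built image off the AMD64 node
      nodeSelector:
        beta.kubernetes.io/arch: arm
      containers:
      - name: app
        image: example/arm-only-image:1.0
```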