Kubernetes is now the de facto standard for hosting cloud native applications, but hosting a cluster is expensive: it needs master nodes, worker nodes, networking, load balancers, and so on. Recently I found a cheaper solution on Oracle Cloud. 🙂
First of all, Kubernetes needs machines and memory. Oracle Cloud gives us two AMD machines with 0.25 vCPU and 1 GB of RAM each, which is not enough for Kubernetes. We need the second compute option: ARM machines. We get 4 vCPUs and 24 GB of memory, which can be split across up to 4 virtual machines. 2 vCPUs and 12 GB of RAM is enough, even for Kubernetes master nodes. I started with two ARM machines (2 vCPUs and 12 GB of memory each) and Kubespray for cluster configuration.
Kubespray is a powerful and easy-to-use tool for configuring Kubernetes clusters. It can create clusters on AWS as well as on standalone machines; I don't know Oracle Cloud internals and I don't need cloud integration, so standalone machines are fine for me.
Setting up a Kubernetes cluster with Kubespray is very easy: just create two machines, configure the inventory, and run
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
But on Oracle Cloud there is a small pitfall. I use Ubuntu machines, and they ship with a weird iptables config that blocks a lot of traffic.
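For reference, a minimal Kubespray inventory for the two machines might look roughly like this (the hostnames and IPs are placeholders, and group names vary slightly between Kubespray versions):

```yaml
# inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 10.0.0.11   # placeholder IP
      ip: 10.0.0.11
    node2:
      ansible_host: 10.0.0.12   # placeholder IP
      ip: 10.0.0.12
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```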
I used this script to disable the problematic rules:
# save the current iptables rules
sudo iptables-save > ~/iptables-rules
# remove all DROP and REJECT rules
grep -v "DROP" ~/iptables-rules > tmpfile && mv tmpfile ~/iptables-rules-mod
grep -v "REJECT" ~/iptables-rules-mod > tmpfile && mv tmpfile ~/iptables-rules-mod
# apply the modified configuration
sudo iptables-restore < ~/iptables-rules-mod
# persist the config (Ubuntu uses the netfilter-persistent service)
sudo netfilter-persistent save
sudo systemctl restart netfilter-persistent
Kubespray should now create a fully operational cluster on the arm64 machines. I could stop here, but I still have two more free machines! Let's create two more machines with amd64 processors and add them to the cluster for more compute power. 🧅
Multiarch cluster problem
I modified my inventory file, added the two new machines as nodes, and ran Kubespray again. Ansible finished without problems, but the new nodes were NotReady! I investigated and found a problem with Calico: the calico-node DaemonSet doesn't work on the new nodes because of an architecture mismatch. It looks like the container image in my Calico configuration is not a multiarch image. 🙁 It is weird, because the official Calico documentation uses a different image, from Docker Hub, with multiarch support.
I didn't know that a multiarch image existed when I configured my cluster, so I decided to create two Calico DaemonSets and use a nodeSelector to pick the correct architecture for each. I just modified the DaemonSet that I downloaded from the cluster with
kubectl get daemonsets calico-node -n kube-system -o yaml > calico-arm64.yaml
Here is an image line from the amd64 variant; the arm version uses the arm64 architecture and image tag:
- image: quay.io/calico/pod2daemon-flexvol:v3.19.2-amd64
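The key change in each copy is the nodeSelector on the pod template, plus the arch-suffixed image tags. A rough sketch of the relevant part of the amd64 variant (not the full manifest; kubernetes.io/arch is the standard node label, and the container name follows the upstream Calico manifest):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # the arm64 copy selects: arm64
      initContainers:
        - name: flexvol-driver
          image: quay.io/calico/pod2daemon-flexvol:v3.19.2-amd64
```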
Despite using two architectures, I'd like to be able to deploy my application on any node, regardless of architecture. Luckily, Docker can build multiarch images with buildx. I use GitHub Actions to build my images; an example is in my repository with a simple Ruby debug server: https://github.com/es1o/debug_server. With multiarch images I only need one deployment YAML, and Kubernetes automatically pulls the correct architecture.
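A multiarch build in GitHub Actions can be sketched roughly like this, using the official Docker actions; the image name and secret names are placeholders, and the real workflow lives in the repository linked above:

```yaml
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-qemu-action@v1    # emulation for non-native arches
      - uses: docker/setup-buildx-action@v1
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          platforms: linux/amd64,linux/arm64   # one manifest, two images
          push: true
          tags: example/debug_server:latest    # placeholder image name
```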
A Kubernetes cluster doesn't need all of its nodes to have the same architecture, as long as you take care of the container images, especially for DaemonSets (e.g. kube-proxy, networking).