Multi-Node Minikube K8s Deployment on M1 Mac with Colima and NFS PV

Seralahthan
Aug 5, 2023 · 9 min read

In this blog we are going to explore how we can run a Multi-Node Minikube K8s deployment on M1 Mac without Docker-Desktop and with NFS PersistentVolume Storage.

Motivation:
I had a requirement where I needed to run a Multi-Node K8s cluster on an M1 Mac without Docker Desktop, which could run a K8s Deployment with an NFS PersistentVolume.

What is the issue here?

Requirements for running a Multi-Node K8s cluster on a Mac host:

A K8s cluster runs containers within Pods.
So, in order to create a K8s cluster on a Mac host, first and foremost we need the ability to run containers on the Mac host.

Containers (any type of container, Docker or otherwise) are natively designed to run on Linux-based systems. They utilise Linux kernel features such as cgroups and namespaces to provide isolation and resource management for containers.
So we need to run a Linux VM on the Mac host.

Why without Docker Desktop?
Docker Inc. updated its terms of use for Docker Desktop, effective from the 31st of August 2021. So, in short, Docker Desktop for Mac/Windows is no longer free for “professional use in larger companies”.

Options to run a Multi-Node K8s cluster on Mac:
Let’s explore some of the technologies which support provisioning a Multi-Node K8s cluster on Mac:
- Minikube (supports multi-node clusters; supports both VM-based and Container-based approaches to provision the cluster)
- Docker Desktop for Mac with Docker Swarm (supports multi-node clusters, but we need a solution which doesn’t involve Docker Desktop)
- Rancher Desktop with K3d (supports multi-node clusters, but K3d runs K3s, a minimal K8s distribution, hence we are not using this option either)
- KinD (Kubernetes in Docker) (supports multi-node clusters, but only supports the Container-based approach to provision the cluster)

So, we will be using Minikube to provision the Multi-Node K8s cluster on Mac.

Minikube can be deployed as a VM, a container, or on bare metal.
Depending on the host OS, Minikube supports different drivers for the VM-based and Container-based approaches.

Issues Here

  • On Apple M1 (ARM64), the Minikube VM-based approach with the Hyperkit driver is not supported, as Hyperkit does not run on ARM-based CPUs.
  • On Apple M1 (ARM64), the Minikube VM-based approach with the QEMU2 driver does not work (even though it is listed as supported); it fails with an error when joining the worker node to the cluster in a 2-node setup.
  • Kubernetes is not yet fully compatible with ARM-based CPUs, so in a VM-based approach to run a K8s cluster on a Mac host, we need to create VMs with Intel CPUs using hypervisors like Hyperkit or QEMU2.
  • The Hyperkit hypervisor doesn’t have the ability to create VMs with a different CPU architecture (e.g. an Intel VM on an ARM host). Thus, Hyperkit is not supported on darwin/arm64 for running a Minikube cluster.
  • The QEMU2 hypervisor, on the other hand, can create VMs with both Intel and ARM architectures. It can create a VM with an Intel CPU on an M1 Mac (with an ARM CPU), which in principle allows running K8s on an ARM-based Mac.
  • Since the Minikube VM-based approaches resulted in failure, we are left with the Minikube Container-based approach using the Docker driver, without Docker Desktop for Mac.
  • We need a solution which can run dockerd (the Docker daemon), aka the Docker runtime, on a Mac host without using Docker Desktop for Mac.
  • Bingo!!! Colima comes as a saviour.

Install Colima to run Docker Containers on macOS

Colima is an open-source project that provides container runtimes on macOS (and Linux) with minimal setup.

The name Colima means “Containers in Lima” (Lima being Linux virtual machines on macOS).

Note:
Even though the name Colima (a bit of a misnomer) means “Containers in Lima”, Colima is not a containerized version of Lima.

Colima uses the QEMU2 hypervisor to create a full Linux VM on an M1 Mac (with an ARM CPU), and can even emulate an Intel CPU inside that VM, whereas Lima is the underlying project that provisions and manages these Linux VMs.

Colima was originally intended to be a containerized version of Lima, but the project evolved into a separate tool that uses QEMU2 instead of Docker.

Install Colima

The easiest way to install Colima is using Homebrew,
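brew install colima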

Check the Colima installation by executing the following command from the Mac terminal,
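colima version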

Minikube with the Docker driver will create Docker containers inside the Colima Linux VM; for this to succeed, the Docker CLI needs to be installed on the Mac host.

Install Docker and Docker-Compose

Install Docker and Docker-Compose via Homebrew,
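brew install docker docker-compose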

Verify the Docker and Docker-Compose installation by executing the following commands from the Mac terminal,
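docker --version
docker-compose --version
docker info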

Docker Info

At this point we have only installed the Docker CLI; dockerd (the Docker daemon) is not yet running, so docker info cannot connect to a daemon.

We need to create a Colima Linux VM with the Docker runtime installed to make dockerd available on the Mac host.

Create Colima Linux VM with a Mount from macOS Host

Let’s create a Colima Linux VM running dockerd, with a file mount from the Mac host.

In order to create the Colima Linux VM named linux-vm while editing its configuration, execute the following command from the Mac terminal,
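The CPU, memory and disk values below are illustrative; the important parts are the profile name, the --edit flag and the writable (:w) mount of the host directory,

colima start -p linux-vm --edit \
  --cpu 4 --memory 8 --disk 60 \
  --mount /Users/seralahthan/Minikube/nfs-mount:w

Once the VM is up, Colima also switches the local Docker context to the daemon running inside it, which is why the docker CLI installed earlier starts working.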

Colima Linux VM creation with Configurable Options

We have mounted the /Users/seralahthan/Minikube/nfs-mount directory from the Mac host into the Colima Linux VM.

The Colima VM is created successfully; let’s check its status by executing the following commands from the Mac terminal,
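colima list
colima status -p linux-vm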

Colima list and status of Linux VM

Now let’s SSH into the linux-vm and check whether our mount is intact,
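By default Colima exposes the shared host directory at the same path inside the VM,

colima ssh -p linux-vm
ls -la /Users/seralahthan/Minikube/nfs-mount
exit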

Check Mount from Linux VM to Mac Host

Now that we have provisioned dockerd (the Docker runtime) via the Colima Linux VM, we can create the Minikube cluster with the Docker driver.

Create Minikube 2-Node Cluster with Docker Driver and Docker Runtime

Let’s create a 2-node Minikube cluster under the profile name multi-node,
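A command along these lines does it; both nodes end up as Docker containers on the Colima-provided daemon,

minikube start -p multi-node --nodes 2 --driver=docker --container-runtime=docker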

Minikube Cluster Creation

Set the multi-node cluster profile as the active profile,
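minikube profile multi-node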

Note:
Each Minikube cluster node is created as a Docker container inside the Colima Linux VM.

The Minikube Docker driver talks to the Docker daemon running inside the Colima Linux VM, and the Minikube nodes are created as Docker containers using this daemon.

We can verify this by SSHing into the linux-vm and listing the Docker containers with docker ps -a,
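colima ssh -p linux-vm
docker ps -a   # one container per Minikube node, e.g. multi-node and multi-node-m02
exit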

Minikube Nodes Run As Docker Containers in Colima Linux VM

Note:
Kubernetes containers are created within the Kubernetes Pods using the container runtime (Docker) specified for the Minikube cluster.

Kubernetes Pods run inside the Docker containers (the Minikube nodes), which run in the Colima Linux VM, which is ultimately hosted on the Mac host.

So this can be thought of as Docker-in-Docker, or K8s-in-Docker.

Get Minikube IP Range

We are going to create an NFS share on the Colima Linux VM, which should be accessible from the Minikube nodes.

We need to retrieve the Minikube IP range so that we can export the NFS share to all the Minikube nodes.

Execute the following command from the Mac terminal to fetch the Minikube control-plane node IP,
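minikube ip -p multi-node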

In my case the Minikube control-plane node IP is 192.168.49.2.
The Minikube cluster network range is the first three octets of this IP with a /24 mask, i.e. 192.168.49.0/24.

Create NFS in Colima Linux VM with a Mount to Mac
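The NFS server runs inside the Colima Linux VM and exports the directory that is already shared from the Mac host. A minimal sketch, assuming an Ubuntu-based Colima VM image (the package and service commands differ on other images) and the 192.168.49.0/24 Minikube network identified above,

colima ssh -p linux-vm

# inside the VM: install the NFS server
sudo apt-get update && sudo apt-get install -y nfs-kernel-server

# export the shared directory to the Minikube network only
echo '/Users/seralahthan/Minikube/nfs-mount 192.168.49.0/24(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports

# apply the export and restart the NFS server
sudo exportfs -ra
sudo systemctl restart nfs-kernel-server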

Test Mount NFS to Minikube Node (Optional)

Now that we have created the NFS share, let’s try to mount it on a Minikube node over NFS (this is an optional step for testing the mount).
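A rough sketch of the test, assuming the NFS export above and the 192.168.49.1 gateway address as the server,

minikube ssh -p multi-node

# inside the node: mount the export, check it, then clean up
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 192.168.49.1:/Users/seralahthan/Minikube/nfs-mount /mnt/nfs-test
ls -la /mnt/nfs-test
sudo umount /mnt/nfs-test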

This verifies that the NFS share can be mounted on the Minikube nodes without any errors.

Create Kubernetes PV, PVC and Deployment

Create a PersistentVolume with NFS share

Now that we have a Minikube cluster, let’s create a PersistentVolume (PV) pointing to the NFS server using the following PV definition yaml file,
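A standard-host-path-nfs-manual-pv.yaml along these lines matches that description; the 1Gi capacity is illustrative, and the server and path come from the NFS export above,

apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard-host-path-nfs-manual-pv
spec:
  capacity:
    storage: 1Gi              # illustrative size
  accessModes:
    - ReadWriteMany           # both pods write to the shared file
  storageClassName: standard  # Minikube's default StorageClass
  nfs:
    server: 192.168.49.1      # Minikube gateway IP, i.e. the Colima VM
    path: /Users/seralahthan/Minikube/nfs-mount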

Execute the following command from the Mac terminal to create the PV,

kubectl apply -f standard-host-path-nfs-manual-pv.yaml

Note that we have used the Minikube gateway IP address as the NFS server IP, along with the NFS export path.

The above PersistentVolume is created using the standard (default) Minikube StorageClass, whose storage provisioner is k8s.io/minikube-hostpath.

Further details on the NFS volume type can be found in the Kubernetes documentation [15].

Let’s now create a PersistentVolumeClaim to request some storage from the NFS PersistentVolume.

Create a PersistentVolumeClaim

Create a PersistentVolumeClaim in the test namespace using the following yaml definition file,
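A matching standard-host-path-nfs-manual-pvc.yaml could look like this; the requested size is illustrative, and the access mode and StorageClass must match the PV for the claim to bind,

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standard-host-path-nfs-manual-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi   # illustrative request, must fit within the PV capacity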

Execute the following command to create the PersistentVolumeClaim,

kubectl create ns test
kubectl apply -f standard-host-path-nfs-manual-pvc.yaml

Note that once the PersistentVolumeClaim is created, K8s will automatically bind it to a PersistentVolume whose storage class, access modes and available capacity satisfy the claim.

We can view the created PersistentVolume and PersistentVolumeClaim using the kubectl get pvc,pv command,
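kubectl get pvc,pv -n test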

Finally, let us create a deployment with 2 pods where each pod is deployed to a separate cluster node (both pods shouldn’t be scheduled to the same node).

Create a Deployment with topologySpreadConstraints

Since our use-case is to check the functionality of using an NFS share in a 2-node K8s cluster, we need to make sure the application pods are spread across all the nodes in the cluster (and not scheduled to the same node).

In order to make sure the pods are spread across all the nodes, we use topologySpreadConstraints with topologyKey: kubernetes.io/hostname and a whenUnsatisfiable: DoNotSchedule condition based on the pod label.

The following deployment yaml file can be used to create a deployment with,

  • 2 busybox pods scheduled across both nodes,
  • a volume mount of /my-test,
  • and a shell process writing the “current time” and the “hostname” of the pod to the /my-test/shared.txt file.
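A standard-host-path-nfs-manual-pvc-deployment.yaml along these lines fits that description (the deployment name and labels are illustrative),

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-nfs-test        # illustrative name
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox-nfs-test
  template:
    metadata:
      labels:
        app: busybox-nfs-test
    spec:
      # spread the two pods across distinct nodes; DoNotSchedule forbids
      # placing both pods on the same node
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: busybox-nfs-test
      containers:
        - name: busybox
          image: busybox
          command: ["/bin/sh", "-c"]
          args:
            - >
              sleep 60; for i in $(seq 1 5); do host=$(hostname); now=$(date +"%T");
              echo "adding time: $now, from $host" >> /my-test/shared.txt; sleep 10;
              done; sleep infinity
          volumeMounts:
            - name: volv
              mountPath: /my-test
      volumes:
        - name: volv
          persistentVolumeClaim:
            claimName: standard-host-path-nfs-manual-pvc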

Execute the following command to create the Deployment,

kubectl create -f standard-host-path-nfs-manual-pvc-deployment.yaml

Note that we have referenced the persistentVolumeClaim standard-host-path-nfs-manual-pvc as the volume volv and mounted it to the pod under /my-test mount path.

Each pod will execute the following shell script, which writes the current time and the hostname of the pod to the /my-test/shared.txt file within the pod.

sleep 60; for i in  $(seq 1 5); do host=$(hostname); now=$(date +"%T"); echo "adding time: $now, from $host" >> /my-test/shared.txt; sleep 10; done; sleep infinity

We can verify that the pods are scheduled across both nodes by executing the kubectl get pods -o wide command,
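kubectl get pods -n test -o wide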

As you can see from the output, the pods are spread across both nodes.

Each pod writes its data to its own volume mount at /my-test/shared.txt.
But since the volume mount is backed by the NFS volume, both pods should effectively be writing to the same shared file in the Linux VM's NFS mount.

Let’s verify whether we can see the data written by both pods in the /Users/seralahthan/Minikube/nfs-mount directory on the Mac host machine,
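cat /Users/seralahthan/Minikube/nfs-mount/shared.txt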

As you can see, data from both pods has been written to the /Users/seralahthan/Minikube/nfs-mount directory on the Mac host.

This concludes our journey!!!

References:

[1] https://www.docker.com/
[2] https://www.docker.com/legal/docker-subscription-service-agreement
[3] https://www.docker.com/products/docker-desktop
[4] https://minikube.sigs.k8s.io/docs/tutorials/multi_node/
[5] https://docs.docker.com/engine/swarm/
[6] https://docs.docker.com/engine/swarm/swarm-tutorial/
[7] https://docs.rancherdesktop.io/how-to-guides/create-multi-node-cluster/
[8] https://mcvidanagama.medium.com/set-up-a-multi-node-kubernetes-cluster-locally-using-kind-eafd46dd63e5
[9] https://minikube.sigs.k8s.io/docs/drivers/
[10] https://github.com/moby/hyperkit/issues/310#issuecomment-1003707160
[11] https://github.com/abiosoft/colima
[12] https://github.com/lima-vm/lima
[13] https://formulae.brew.sh/formula/docker
[14] https://formulae.brew.sh/formula/docker-compose
[15] https://kubernetes.io/docs/concepts/storage/volumes/#nfs

Hope you enjoyed the blog and got something to take away ✌️

Thank you for Reading!
Cheers 🍻
