Minikube Multi-Node K8s Cluster with Shared NFS mount PV

Seralahthan
6 min readJan 22, 2023

In this blog, let us explore how we can create and export a shared NFS mount on Mac and use it as a persistent volume storage in a multi-node K8s cluster.

Before we dive into the steps, let us try to understand why we would need the shared NFS mount for a multi-node K8s cluster.

Recently, I came across a requirement where I had a multi-node Minikube K8s cluster and wanted to mount a local directory from the host machine so that it could be shared and accessed across all the nodes.

I tried to achieve this requirement using the following approaches:

  1. Creating the Minikube cluster with the --mount-string and --mount options
  2. Creating the Minikube cluster without the --mount options and mounting the host directory later

Both of the above approaches only mount the directory from the host file system to the K8s control plane node; the mount is not created on the other worker nodes in a multi-node cluster.

This happens because the default storage provisioner (k8s.io/minikube-hostpath) is broken in multi-node mode.

Refer to [1] for further details on this behaviour and the underlying issue.

So, as an alternative, we are going to create an NFS share on the host system (a Mac in my case) and use it as an NFS-backed PersistentVolume, sharing the mount across all the nodes of the multi-node cluster.

Let’s have a look at the steps.

Create a Minikube Multi-Node K8s Cluster

As the first step, let’s create a Minikube 2-node K8s cluster without any mounts.
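The cluster can be created with a single command (the exact driver depends on your setup; on my Mac the nodes land on the 192.168.64.0/24 network):

```shell
# Start a 2-node Minikube cluster without any host mounts
minikube start --nodes 2

# Confirm both nodes are Ready
kubectl get nodes
```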

Fetch the IP of the Minikube cluster using the following command,

minikube ip

This IP belongs to the Minikube cluster's K8s control plane node.
In my case, all the other nodes have IP addresses in the 192.168.64.0/24 network CIDR.

Enable NFS server and create a NFS mount on Mac

Now that the cluster is up and running, let us enable the NFS server and export a host file system directory as the NFS mount.

In order to export a file system directory from the host machine as an NFS mount to the Minikube cluster, the NFS daemon needs to be enabled and running.

The following commands can be used to check the status of the NFS daemon, and to enable and start it on a Mac,
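A minimal sketch of the macOS setup, using the export directory from my setup (/Users/seran/Minikube/nfs-mount); the -mapall UID:GID (501:20 is the default first macOS user) and the node network CIDR are assumptions you should adjust for your machine:

```shell
# Check whether the NFS daemon is running
sudo nfsd status

# Export the directory to the Minikube node network;
# -mapall maps all client access to my local user:group to avoid permission issues
echo "/Users/seran/Minikube/nfs-mount -alldirs -mapall=501:20 -network 192.168.64.0 -mask 255.255.255.0" | sudo tee -a /etc/exports

# Enable and (re)start the daemon so it picks up the new export
sudo nfsd enable
sudo nfsd restart

# Verify the export is visible
showmount -e localhost
```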

Now that we have configured the NFS server and created an NFS mount, let us create a K8s PersistentVolume backed by the NFS share, a PersistentVolumeClaim, and a Deployment with 2 pods that write data to the NFS share.

Create a PersistentVolume with NFS share

Before we create the PersistentVolume storage, we need to find the Minikube Gateway IP address. This IP address will be used to connect to the NFS server in the PersistentVolume configuration file.
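One way to find the gateway IP is to look at the default route from inside a node (on my setup the gateway, i.e. the Mac host as seen from the VMs, is typically 192.168.64.1):

```shell
# The default gateway of a Minikube node is the host side of the VM network
minikube ssh "ip route show default"
# typically prints something like: default via 192.168.64.1 dev eth0
```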

Let us now create a PersistentVolume pointing to the NFS server using the following PersistentVolume definition YAML file,
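A sketch of standard-host-path-nfs-pv.yaml; the PV name, capacity, and gateway IP are assumptions from my setup, the NFS path is the exported directory from earlier:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard-host-path-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  nfs:
    # Minikube gateway IP: the Mac host, as seen from the cluster nodes
    server: 192.168.64.1
    path: /Users/seran/Minikube/nfs-mount
```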

Execute the following command to create the PersistentVolume,

kubectl apply -f standard-host-path-nfs-pv.yaml

Note that we have used the Minikube gateway IP address as the NFS server IP, along with the NFS mount path.

The above PersistentVolume uses the standard (default) Minikube StorageClass, whose provisioner is k8s.io/minikube-hostpath; since we define the volume statically, the claim simply binds to it through this class.

Further details on the NFS volume type can be found in [2].

Let’s now create a PersistentVolumeClaim to request some storage from the NFS PersistentVolume.

Create a PersistentVolumeClaim

Create a PersistentVolumeClaim in the test namespace using the following YAML definition file,
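A sketch of standard-host-path-nfs-pvc.yaml; the claim name nfs-claim and the test namespace come from the setup above, the requested size is an assumption matching the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
```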

Execute the following command to create the PersistentVolumeClaim,

kubectl apply -f standard-host-path-nfs-pvc.yaml

Note that once the PersistentVolumeClaim is created, K8s will automatically bind it to a PersistentVolume based on the storage requested by the claim and the storage available in the volume.

We can view the created PersistentVolume and PersistentVolumeClaim using the kubectl get pv,pvc command.

Finally, let us create a deployment with 2 pods, where each pod is deployed on a different cluster node (both pods can't be scheduled to the same node).

Create a Deployment with topologySpreadConstraints

Since our use-case is to check the functionality of using NFS share in a multi-node K8s cluster, we need to make sure the application pods are spread across all the nodes in the cluster (and not scheduled to the same node).

In order to make sure the pods are spread across all the nodes, we need to use the topologySpreadConstraints with topologyKey: kubernetes.io/hostname and a DoNotSchedule condition based on the pod label.

The following deployment YAML file can be used to create a deployment with,

  • 2 busybox pods, one scheduled on each node
  • a volume mount at /my-test
  • a shell process writing the “current time” and the “hostname” of the pod to the /my-test/shared.txt file
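A sketch of standard-host-path-nfs-pvc-deployment.yaml; the deployment name and labels are assumptions, while the volume name volv, the claim nfs-claim, the /my-test mount path, and the shell script come from the setup described here:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-nfs-writer
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox-nfs-writer
  template:
    metadata:
      labels:
        app: busybox-nfs-writer
    spec:
      # Spread the 2 pods across nodes; DoNotSchedule forbids co-locating them
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: busybox-nfs-writer
      containers:
        - name: busybox
          image: busybox
          command: ["/bin/sh", "-c"]
          args:
            - sleep 60; for i in $(seq 1 5); do host=$(hostname); now=$(date +"%T"); echo "adding time: $now, from $host" >> /my-test/shared.txt; sleep 10; done; sleep infinity
          volumeMounts:
            - name: volv
              mountPath: /my-test
      volumes:
        - name: volv
          persistentVolumeClaim:
            claimName: nfs-claim
```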

Execute the following command to create the Deployment,

kubectl create -f standard-host-path-nfs-pvc-deployment.yaml

Note that we have referenced the PersistentVolumeClaim nfs-claim as the volume volv and mounted it in the pod under the /my-test mount path.

Each pod executes the following shell script, which writes the current time and the hostname of the pod to the /my-test/shared.txt file inside the pod.

sleep 60; for i in $(seq 1 5); do host=$(hostname); now=$(date +"%T"); echo "adding time: $now, from $host" >> /my-test/shared.txt; sleep 10; done; sleep infinity

We can verify that the pods are scheduled across both nodes by executing the kubectl get pods -o wide command.

As you can see the pods are spread across both nodes.

Each pod writes its data to its own volume mount at /my-test/shared.txt.
But since the volume mount is backed by the NFS volume, both pods should ideally be writing to a common shared file in the NFS mount.

Let’s verify whether we can see the data written by both pods in the NFS mount on the host machine; in my case, the /Users/seran/Minikube/nfs-mount directory on my Mac.
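A quick check from the host, using the export path from my setup:

```shell
# The pods append to shared.txt via NFS, so both hostnames should show up here
cat /Users/seran/Minikube/nfs-mount/shared.txt
```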

As you can see, the current time is appended to the /Users/seran/Minikube/nfs-mount/shared.txt file from both pods running on separate nodes.

With NFS Persistent Volumes, we don’t need to worry about which node a pod gets scheduled on: since the NFS volume is shared across all the nodes, the application has access to all of its data on whichever node the pod lands.

References:

[1] https://stackoverflow.com/questions/75037771/minikube-multi-node-cluster-mounting-host-machine-filesystem-to-all-the-nodes
[2] https://kubernetes.io/docs/concepts/storage/volumes/#nfs
[3] https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#topologyspreadconstraints-field

Hope you enjoyed the blog and got something to take away ✌️

Thank you for Reading!
Cheers 🍻

