OpenShift 4.8 (and above) with NFS Subdir External Provisioner

Oren Oichman
Jan 26, 2022 · 5 min read


Why this Tutorial

In the past I wrote a tutorial about how to connect OpenShift 4 with an NFS StorageClass, but a lot has changed since then.

Today (from Kubernetes 1.20, to be precise) the old tutorial no longer works due to changes in Kubernetes. Therefore I am writing a new tutorial that addresses the current version.

Why NFS ?

For many reasons…
In a test/dev environment we want to provide a storage class that is cheap on one hand and has all the capabilities we need on the other. NFS provides just that!
When we work with NFS we can provide ReadWriteMany storage, for cases where an application needs persistent storage but must write to the same volume from different Pods at once.

Steps

First we will set up RHEL 8 as our NFS server (with some minor modifications), then we will deploy the NFS provisioner on top of OpenShift (I would like to thank Yuri Pismerov for pointing me in the right direction), and finally we will test the storage.

NFS Server

For the NFS server we will need the NFS utils packages, to set up the required directories and files, and to make sure firewalld and SELinux are configured correctly.
This tutorial assumes you have already installed the OS, so I will not go into that here.

NFS Packages

First we will install the necessary packages:

# dnf install -y nfs-utils policycoreutils-python-utils policycoreutils

The next step in our NFS configuration is to create a parent directory for our NFS share :

# mkdir /var/nfsshare

Since we are dealing with storage that is separate from the operating system's own needs, I highly suggest creating a new storage device and mounting the directory separately.

Create an LVM device and mount it into the new directory :

# lvcreate -L50G -n nfs vg00
# mkfs.xfs /dev/vg00/nfs
# mount -t xfs /dev/vg00/nfs /var/nfsshare

and add it to your fstab (or autofs) file to make sure it is mounted consistently at boot time:

# cat >> /etc/fstab << EOF
/dev/vg00/nfs /var/nfsshare xfs defaults 0 0
EOF

Now we need to make sure everybody has write permissions on the directory (I know this is bad security-wise), so we will change it to 777:

# chmod 777 /var/nfsshare

Edit the /etc/exports file to publish the NFS share directory:

# cat > /etc/exports << EOF
/var/nfsshare *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=0)
EOF

Services

We need to run the NFS service in order to serve the NFS share directories, so let's go ahead and start it:

# systemctl start nfs-server.service
# systemctl enable nfs-server.service
# systemctl status nfs-server.service

SElinux

For the SELinux part we need to set two booleans and label the NFS directory with semanage:

# setsebool -P nfs_export_all_rw 1
# setsebool -P nfs_export_all_ro 1

And for the directory file context:

# semanage fcontext -a -t public_content_rw_t  "/var/nfsshare(/.*)?"
# restorecon -R /var/nfsshare

Firewall

For firewalld we need to make sure the following services are open:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --reload

Export

To export the above file system, run the exportfs command. The -a flag exports (or unexports) all directories, -r re-exports all directories, synchronizing /var/lib/nfs/etab with /etc/exports and the files under /etc/exports.d, and -v enables verbose output.

# exportfs -arv

Testing

On another machine (we can also run the test on the same machine), make sure you have “nfs-utils” installed and mount the directory from the server:

# mount -t nfs nfs-server:/var/nfsshare /mnt
# touch /mnt/1 && rm -f /mnt/1

If the commands were successful and you could create and delete a file, we are good to go!

NFS Subdir External Provisioner

The NFS subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.

Note: This repository is migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. As part of the migration:

  • The container image registry and name have changed to k8s.gcr.io/sig-storage and nfs-subdir-external-provisioner respectively.
  • To maintain backward compatibility with earlier deployment files, the naming of NFS Client Provisioner is retained as nfs-client-provisioner in the deployment YAMLs.
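To make the default naming concrete, here is a small sketch (the namespace, PVC, and PV names are hypothetical) of how the provisioner composes the subdirectory name it creates on the NFS share:

```shell
# Hypothetical values for illustration only
namespace="myproject"
pvcName="test-claim"
pvName="pvc-a1b2c3d4"

# The provisioner creates a subdirectory named ${namespace}-${pvcName}-${pvName}
subdir="${namespace}-${pvcName}-${pvName}"
echo "$subdir"
```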

namespace

We will begin by creating the namespace:

# oc create namespace openshift-nfs-storage

And since we want the cluster to monitor the operator, we will add the following label to the namespace:

# oc label namespace openshift-nfs-storage "openshift.io/cluster-monitoring=true"

NOTE !!

In case (which I do not recommend) your NFS server is running on one of the worker nodes, I would strongly advise labeling that node for management purposes:

# oc label node <NodeName> cluster.ocs.openshift.io/openshift-storage=''

Clone the repository

Let’s clone the Git repository into a new directory:

# git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git nfs-subdir

Configuration updates

Before we deploy the StorageClass we need to make a few configuration modifications.

Switch to the new directory:

# cd nfs-subdir
# oc project openshift-nfs-storage

and run the following :

# NAMESPACE=`oc project -q`
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
# oc create -f deploy/rbac.yaml
# oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner

In the “deploy/deployment.yaml” file, change the following from the original configuration:

env:
  - name: PROVISIONER_NAME
    value: k8s-sigs.io/nfs-subdir-external-provisioner
  - name: NFS_SERVER
    value: <YOUR NFS SERVER HOSTNAME>
  - name: NFS_PATH
    value: /var/nfs
volumes:
  - name: nfs-client-root
    nfs:
      server: <YOUR NFS SERVER HOSTNAME>
      path: /var/nfs

TO :

env:
  - name: PROVISIONER_NAME
    value: storage.io/nfs
  - name: NFS_SERVER
    value: <YOUR NFS SERVER HOSTNAME>
  - name: NFS_PATH
    value: /var/nfsshare
volumes:
  - name: nfs-client-root
    nfs:
      server: <YOUR NFS SERVER HOSTNAME>
      path: /var/nfsshare

NOTE! (for disconnected)

If we are running in a disconnected environment, make sure you update the image path as well:

# sed -i "s|k8s.gcr.io/sig-storage|<internal registry>|g" ./deploy/deployment.yaml
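To sanity-check the substitution before touching the real file, you can run the same sed expression against a sample line (registry.example.com:5000 stands in for your internal registry, and v4.0.2 is the image tag shipped in the upstream deployment file):

```shell
# A sample image line as it appears in deploy/deployment.yaml
printf 'image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2\n' > /tmp/deploy-sample.yaml

# Replace the public registry prefix with a (hypothetical) internal one
sed -i 's|k8s.gcr.io/sig-storage|registry.example.com:5000|g' /tmp/deploy-sample.yaml

cat /tmp/deploy-sample.yaml
```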

Let’s create the deploy/class.yaml StorageClass file:

# echo 'apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: storage.io/nfs
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete' > deploy/class.yaml
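Note that with pathPattern set, provisioned directories no longer use the default ${namespace}-${pvcName}-${pvName} naming; instead each PVC gets a subdirectory under the share. A quick sketch with hypothetical values:

```shell
# Hypothetical PVC values for illustration
ns="myproject"
pvc="test-claim"

# pathPattern "${.PVC.namespace}/${.PVC.name}" resolves to this directory on the share
dir="/var/nfsshare/${ns}/${pvc}"
echo "$dir"
```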

And deploy the rest of the files :

# oc create -f deploy/deployment.yaml -f deploy/class.yaml

At this point (after a minute or so) you should see the pod running:

# oc get pods -n openshift-nfs-storage
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-7894d87997-gqjcq 1/1 Running 0 21h

Testing

To test your new storage class, the repository comes with two simple manifests we can use. Just create the following resources:

# oc create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS Server for the file SUCCESS.

To delete the pods just run :

# oc delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

In case you want to use your newly created storage class, you can use the following PVC as an example:

# cat > deploy/pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

and create it

# oc create -f deploy/pvc.yaml
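Once the claim is bound, any pod can mount it. Here is a minimal sketch of consuming the claim with ReadWriteMany access (the pod name, image, and mount path are illustrative choices, not part of the repository):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-test-pod            # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data      # files land in the PVC's subdirectory on the NFS share
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim
```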

Multiple StorageClass

In some cases this StorageClass will not be the only StorageClass in the cluster.
In that scenario we may want to make it the default StorageClass by adding a value to the StorageClass annotations:

# oc patch storageclass managed-nfs-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
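The patch above simply sets the well-known default-class annotation on the StorageClass object; the resulting metadata looks like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```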

That is it !!

If you have any questions, feel free to respond / leave a comment.
You can find me on LinkedIn at: https://www.linkedin.com/in/orenoichman
Or twitter at : https://twitter.com/ooichman
