OpenShift 4.8 (and above) with NFS Subdir External Provisioner

Why this Tutorial

In the past I wrote a tutorial about how to connect OpenShift 4 with an NFS storageClass, but a lot has changed since then.

Today (from Kubernetes 1.20, to be precise) the old tutorial no longer works due to changes in Kubernetes. Therefore, I am writing a new tutorial that addresses the current version.

Why NFS?

For many reasons…
In a Test/Dev environment we want to provide a storage class that is cheap on one hand and has all the capabilities we need on the other. NFS provides just that!
When we work with NFS we can provide ReadWriteMany storage, for cases where an application needs persistent storage but must write to the same volume from different Pods at once.


First we will set up a RHEL 8 server as our NFS server (with some minor modifications), then we will deploy the NFS provisioner (I would like to thank Yuri Pismerov for pointing me in the right direction) on top of OpenShift, and finally test the storage.

NFS Server

For the NFS server we will need the NFS utils packages, set up the required directories and files, and make sure that firewalld and SELinux are configured correctly.
This tutorial assumes that you have already installed the OS, so I will not go into that here.

First we will install the necessary packages:

# dnf install -y nfs-utils policycoreutils-python-utils policycoreutils

The next step in our NFS configuration is to create a parent directory for our NFS share:

# mkdir /var/nfsshare

Since we are dealing with storage that is outside the operating system's needs, I highly suggest creating a new storage device and mounting the directory separately.

Create an LVM device and mount it on the new directory:

# lvcreate -L50G -n nfs vg00
# mkfs.xfs /dev/vg00/nfs
# mount -t xfs /dev/vg00/nfs /var/nfsshare

and add it to your fstab (or autofs) file to make sure it is mounted consistently at boot time:

# cat >> /etc/fstab << EOF
/dev/vg00/nfs /var/nfsshare xfs defaults 0 0
EOF

Now we need to make sure everybody has write permissions on the directory (I know this is bad security-wise), so we will change it to 777:

# chmod 777 /var/nfsshare

Edit the /etc/exports file to publish the NFS share directory:

# cat > /etc/exports << EOF
/var/nfsshare *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=0)
EOF

We need a running NFS service in order to serve the share directories, so let's go ahead and start and enable it:

# systemctl start nfs-server.service
# systemctl enable nfs-server.service
# systemctl status nfs-server.service

For the SELinux part we need to set two booleans and label the NFS directory with semanage:

# setsebool -P nfs_export_all_rw 1
# setsebool -P nfs_export_all_ro 1

And set the SELinux file context on the directory:

# semanage fcontext -a -t public_content_rw_t  "/var/nfsshare(/.*)?"
# restorecon -R /var/nfsshare

For firewalld we need to make sure the following services are open:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --reload

To export the above file system, run the exportfs command: the -a flag exports (or unexports) all directories, -r re-exports all directories, synchronizing /var/lib/nfs/etab with /etc/exports and the files under /etc/exports.d, and -v enables verbose output.

# exportfs -arv

On another machine (we can also run the test on the same machine), make sure you have nfs-utils installed and mount the directory from the server:

$ mount -t nfs nfs-server:/var/nfsshare /mnt
$ touch /mnt/1 && rm -f /mnt/1

If the commands were successful and you can create and delete a file, we are good to go!

NFS Subdir External Provisioner

The NFS subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.
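
To illustrate that naming scheme, here is a small sketch with hypothetical names (the PVC, namespace, and PV names are my own examples, not values from this cluster):

```shell
# Hypothetical names, just to show how ${namespace}-${pvcName}-${pvName}
# maps to a subdirectory under the exported share
namespace="default"
pvcName="test-claim"
pvName="pvc-1234"
echo "/var/nfsshare/${namespace}-${pvcName}-${pvName}"
# prints /var/nfsshare/default-test-claim-pvc-1234
```

So each claim gets its own subdirectory on the NFS server, which makes it easy to find a given workload's data.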

Note: this repository was migrated from the old kubernetes-incubator/external-storage repository. As part of the migration:

  • The container image name and repository have changed; the image is now called nfs-subdir-external-provisioner.
  • To maintain backward compatibility with earlier deployment files, the naming of the NFS Client Provisioner is retained as nfs-client-provisioner in the deployment YAMLs.

We will begin by creating the namespace:

# oc create namespace openshift-nfs-storage

And since we want the cluster to monitor the Operator, we will add the following label to the namespace:

# oc label namespace openshift-nfs-storage "openshift.io/cluster-monitoring=true"


In case (which I do not recommend) your NFS server is running on one of the worker nodes, I would strongly advise you to label that node for management purposes:

# oc label node <NodeName> '<label-key>=<value>'

Let’s clone the Git repository to a new directory:

# git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git nfs-subdir

Before we deploy the storageClass we will need to make a few configuration modifications.

Switch to the new directory and select the project:

# cd nfs-subdir
# oc project openshift-nfs-storage

and run the following:

# NAMESPACE=`oc project -q`
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
# oc create -f deploy/rbac.yaml
# oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner
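
To see what the sed above actually does, here it is applied to a sample line instead of the real YAML files (the namespace value matches the project we created earlier):

```shell
# The sed rewrites any "namespace: ..." line to point at our project;
# demonstrated here on a single sample line
NAMESPACE=openshift-nfs-storage
echo "      namespace: default" | sed "s/namespace:.*/namespace: $NAMESPACE/g"
# prints "      namespace: openshift-nfs-storage"
```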

In the “deploy/deployment.yaml” file, let’s make the following changes from the original configuration (nfs-server is the hostname of the NFS server we set up earlier):

- name: NFS_SERVER
  value: <YOUR NFS SERVER HOSTNAME>
- name: NFS_PATH
  value: /var/nfs
...
- name: nfs-client-root
  nfs:
    server: <YOUR NFS SERVER HOSTNAME>
    path: /var/nfs

TO:

- name: NFS_SERVER
  value: nfs-server
- name: NFS_PATH
  value: /var/nfsshare
...
- name: nfs-client-root
  nfs:
    server: nfs-server
    path: /var/nfsshare

NOTE! (for disconnected environments)

If we are running in a disconnected environment, make sure you update the image path as well:

# sed -i "s/k8s.gcr.io\/sig-storage/<internal registry>/g" ./deploy/deployment.yaml

Let’s create a storageClass definition in deploy/class.yaml:

# echo 'apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete' > deploy/class.yaml

And deploy the rest of the files:

# oc create -f deploy/deployment.yaml -f deploy/class.yaml

At this point (after a minute or so) you should see the pod running:

# oc get pods -n openshift-nfs-storage
nfs-client-provisioner-7894d87997-gqjcq 1/1 Running 0 21h

To test your new storage class, the repository comes with two simple manifests we can use. Just run the following:

# oc create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS server for a file named SUCCESS.

To delete the test resources, just run:

# oc delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

In case you want to use your newly created storage class, you can use the following PVC as an example:

# cat > deploy/pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
and create it

# oc create -f deploy/pvc.yaml
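
A minimal pod that consumes this claim might look like the following sketch (the pod name pvc-test, the mount path /data, and the UBI image choice are my own assumptions; any image that can write to the volume will do):

```shell
# A hypothetical pod that mounts the test-claim PVC at /data
cat > deploy/pvc-test-pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim
EOF
# oc create -f deploy/pvc-test-pod.yaml
```

Once the pod is running, anything written under /data lands in the corresponding subdirectory on the NFS server.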

Multiple StorageClass

In some cases this StorageClass will not be the only StorageClass in the cluster.
For this scenario we may want to make it our default StorageClass by adding a value to the StorageClass annotations:

# oc patch storageclass managed-nfs-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

That is it!

If you have any questions, feel free to respond or leave a comment.




Oren Oichman, Open Source contributor for the past 15 years
