Working with NFS as a StorageClass in OpenShift 4

Oren Oichman
7 min read · Mar 18, 2020


Why This Tutorial

Lately I have found myself dealing with a lot of test/PoC lab environments that all shared a common issue: no centralized storage to provide a sufficient StorageClass for the OpenShift environment.

For that reason I started searching the internet for a good way to provide centralized storage, and I found two options:

  1. An iSCSI server
  2. NFS with the kubernetes-incubator external-storage project

In this tutorial I will focus on the second option, the kubernetes-incubator project that provides NFS as a StorageClass, for two reasons.
The first reason is that it is an “out of the box” approach, which I tend to prefer; the second is that my application needs ReadWriteMany storage, which NFS can provide.

At the end I will add a nice use case showing how to get the internal registry working with a PVC from our NFS StorageClass.

First we will build the NFS server, and then we will continue to the StorageClass configuration.

NFS Server

For the NFS server we will need the NFS utils packages, to set up the required directories and files, and to make sure firewalld and SELinux are configured correctly.
This tutorial assumes you have already installed the OS, so I will not cover that part here.

NFS Packages

First we will install the necessary packages:

$ yum install -y nfs-utils policycoreutils-python-utils policycoreutils-python

Now, for the NFS configuration, we will create a new directory:

$ mkdir /var/nfsshare


Create an LVM device and mount it into the new directory :

$ lvcreate -L50G -n nfs vg00
$ mkfs.xfs /dev/vg00/nfs
$ mount -t xfs /dev/vg00/nfs /var/nfsshare

and add it to your fstab (or autofs) file to make sure it is mounted at boot time:

$ cat >> /etc/fstab << EOF
/dev/vg00/nfs /var/nfsshare xfs defaults 0 0
EOF
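As a quick sanity check, an fstab entry has six whitespace-separated fields. This little sketch parses a literal copy of our line (so it runs anywhere) and names each field:

```shell
# Parse the six fstab fields from a literal copy of our entry
entry='/dev/vg00/nfs /var/nfsshare xfs defaults 0 0'
set -- $entry   # word-split the entry into $1..$6
device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 passno=$6
echo "device=$device mountpoint=$mountpoint fstype=$fstype options=$options dump=$dump fsck=$passno"
```

On the real server, running `mount -a` after editing fstab will surface any syntax error immediately rather than at the next boot.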


Now we need to make sure everybody has write permissions on the directory (I know this is bad security-wise), so we will change it to 777:

$ chmod 777 /var/nfsshare

Edit the /etc/exports file to publish the NFS share directory:

$ cat > /etc/exports << EOF
/var/nfsshare *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=0)
EOF
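As a small sanity-check sketch, the loop below confirms the export line carries the options we depend on. It is fed a literal copy of the line so it runs anywhere; on the real server you would read the line from /etc/exports instead (e.g. `line=$(grep '^/var/nfsshare' /etc/exports)`):

```shell
# Check that the export line contains each option we rely on
line='/var/nfsshare *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=0)'
missing=0
for opt in rw sync no_root_squash fsid=0; do
  case "$line" in
    *"$opt"*) echo "$opt: present" ;;
    *)        echo "$opt: MISSING"; missing=1 ;;
  esac
done
[ "$missing" -eq 0 ] && echo "export options OK"
```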

Start the services:

$ systemctl start rpcbind
$ systemctl start nfs-server
$ systemctl start nfs-lock
$ systemctl start nfs-idmap

Make sure they run at boot time:

$ systemctl enable rpcbind
$ systemctl enable nfs-server
$ systemctl enable nfs-lock
$ systemctl enable nfs-idmap


For firewalld we need to make sure the following services are open:

$ export FIREWALLD_DEFAULT_ZONE=`firewall-cmd --get-default-zone`
$ firewall-cmd --permanent --zone=${FIREWALLD_DEFAULT_ZONE} --add-service=rpc-bind
$ firewall-cmd --permanent --zone=${FIREWALLD_DEFAULT_ZONE} --add-service=nfs
$ firewall-cmd --permanent --zone=${FIREWALLD_DEFAULT_ZONE} --add-service=mountd

Now we will reload the firewall configuration:

$ firewall-cmd --reload
$ firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens192
services: dhcpv6-client dns ftp http https kerberos ldap ldaps mountd nfs ntp ssh tftp
ports: 80/tcp 443/tcp 389/tcp 636/tcp 53/tcp 88/udp 464/udp 53/udp 123/udp 88/tcp 464/tcp 123/tcp 9000/tcp 8000/tcp 8080/tcp 111/tcp 54302/tcp 20048/tcp 2049/tcp 46666/tcp 42955/tcp 875/tcp
masquerade: no
rich rules:


For the SELinux part we need to set two booleans and label the NFS directory with semanage:

$ setsebool -P nfs_export_all_rw 1
$ setsebool -P nfs_export_all_ro 1

And switch the directory’s file context:

$ semanage fcontext -a -t public_content_rw_t  "/var/nfsshare(/.*)?"
$ restorecon -R /var/nfsshare


On another machine, make sure you have “nfs-utils” installed and mount the directory from the server:

$ mount -t nfs nfs-server:/var/nfsshare /mnt
$ touch /mnt/1 && rm -f /mnt/1

If the mount succeeded and you can create and delete a file, we are good to go.


The kubernetes-incubator Project

The “kubernetes-incubator” organization is a very good open source home which hosts several very interesting Kubernetes projects. One of those projects is “external-storage”.

What is an ‘external provisioner’?

An external provisioner is a dynamic PV provisioner whose code lives out-of-tree/external to Kubernetes. Unlike in-tree dynamic provisioners that run as part of the Kubernetes controller manager, external ones can be deployed & updated independently.
External provisioners work just like in-tree dynamic PV provisioners: a StorageClass object can specify an external provisioner instance as its provisioner, just as it can an in-tree one. The instance then watches for PersistentVolumeClaims that request the StorageClass and automatically creates PersistentVolumes for them. For more information on how dynamic provisioning works, see the Kubernetes docs.
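The wiring described above can be sketched in two short manifests. The names below are illustrative placeholders, not the objects we will actually deploy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs               # illustrative name
provisioner: example.com/nfs      # must match the external provisioner's PROVISIONER_NAME
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: example-nfs   # ties this claim to the class above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

When such a claim is created, the provisioner instance watching for the class creates a matching PersistentVolume and the claim binds to it.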


Now let’s clone the repository to our working server:

# git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator

Now we need to make a few adjustments to our files and to our openshift environment.

First let’s create a project for our provisioner:

# oc create namespace openshift-nfs-storage

And we want the cluster to monitor the provisioner, so we will add the cluster-monitoring label to the namespace:

# oc label namespace openshift-nfs-storage "openshift.io/cluster-monitoring=true"


In case your NFS server runs on one of the worker nodes (which I do not recommend), I would strongly advise labeling that node for management purposes:

# oc label node <NodeName>''

Now we will update some of the files we just cloned, but first let’s switch to our newly created project:

# oc project openshift-nfs-storage

Next we will update the deployment and the RBAC accordingly:

# cd kubernetes-incubator/nfs-client/
# NAMESPACE=`oc project -q`
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/deployment.yaml
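To see exactly what those sed commands do, here is the same substitution run against a throwaway stand-in for rbac.yaml (the fragment below is illustrative, not the full upstream file):

```shell
# Demonstrate the namespace rewrite on a minimal stand-in file
NAMESPACE=openshift-nfs-storage
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
EOF
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" "$tmp"
grep 'namespace:' "$tmp"   # every namespace: field now points at our project
```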

And apply the RBAC configuration:

# oc create -f deploy/rbac.yaml

And we need to grant the service account the required permissions:

# oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner

Now edit the “deployment.yaml” file with your NFS configuration. You may also want to change the PROVISIONER_NAME from fuseim.pri/ifs to something more descriptive, but if you do, remember to also change the PROVISIONER_NAME in the StorageClass definition.

Change the following section:

            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /var/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /var/nfs

to:

            - name: NFS_SERVER
              value: <your NFS server address>
            - name: NFS_PATH
              value: /var/nfsshare
      volumes:
        - name: nfs-client-root
          nfs:
            server: <your NFS server address>
            path: /var/nfsshare

Next just update the deploy/class.yaml with the new PROVISIONER_NAME :

# cat > deploy/class.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
EOF

Let’s deploy everything we just modified:

# oc create -f deploy/class.yaml 
# oc create -f deploy/deployment.yaml

At this point (after a minute or so) you should see the pod running:

# oc get pods -n openshift-nfs-storage
nfs-client-provisioner-7894d87997-gqjcq 1/1 Running 0 21h

NOTE !!!

If you do not see anything running, make sure you can pull the image:

# oc import-image

If that does not work, make sure the image is available (if you are working in an air-gapped environment, change the image URL to point at your internal registry).


To test your new StorageClass, the repository ships with two simple test manifests. Just run the following:

# oc create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS server for a file named SUCCESS.

To delete the pods just run :

# oc delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

In case you want to use your newly created StorageClass, you can use the following PVC as an example:

# cat > deploy/pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

And create it:

# oc create -f deploy/pvc.yaml

Multiple StorageClass

In some cases this StorageClass will not be the only StorageClass in the cluster.
In that scenario we may want to make it the default StorageClass by adding an annotation:

# oc patch storageclass managed-nfs-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
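The annotation in question is the standard Kubernetes default-class marker, storageclass.kubernetes.io/is-default-class. Once patched, the StorageClass object carries it like this (a fragment for reference):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # marks this class as the cluster-wide default for PVCs
    # that do not set a storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
```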

Setting the Registry with NFS

If you tested the configuration and everything is working fine so far, all we need to do now is create a new PVC and point the registry at it.


The following PVC is for the OpenShift image registry, which needs ReadWriteMany:

# cat > deploy/pvc-registry.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: image-registry-nfs
  namespace: openshift-image-registry
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
EOF

Notice that we are claiming 100Gi, per the official documentation’s requirements, so we need to make sure our NFS storage has at the very least around 200Gi available to be sufficient.

Now create the PVC :

# oc create -f deploy/pvc-registry.yaml

Image Registry Operator

For the final step we will let the Image Registry Operator know about the claim it can use, so let’s edit the Operator configuration and modify the storage section.

First let’s patch the registry :

# oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'

And attach the storage :

# oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":"image-registry-nfs"}}}}'

We can view the changes with the following command :

# oc get configs.imageregistry.operator.openshift.io cluster -o yaml

and make sure the storage section looks as follows:

storage:
  pvc:
    claim: image-registry-nfs

That is it

Have FUN !!!



Oren Oichman

Open Source contributor for the past 15 years