Working with NFS as a StorageClass in OpenShift 4

Oren Oichman
7 min read · Mar 18, 2020


Why This Tutorial

Lately I have found myself dealing with a lot of test/P.O.C lab environments that all had a common issue … no centralized storage to provide a sufficient StorageClass for the OpenShift environment.

For that reason I started searching the internet for a good way to provide centralized storage, and I found two (2) options:

  1. iSCSI Server
  2. NFS with kubernetes-incubator

In this tutorial I will focus on the second option, the kubernetes-incubator project, which provides NFS as a storage class, for two (2) reasons.
The first reason is that it is an “out of the box” approach, which I tend to prefer, and the second reason is that my application needs ReadWriteMany storage, which NFS can provide.

At the end I will add a nice use case showing how to get the internal registry to work with a PVC from our NFS StorageClass.

First we will build the NFS server, and then we will continue to the StorageClass configuration:

NFS Server

For the NFS server we will need the NFS utils packages, to set up the required directories and files, and to make sure that firewalld and SELinux are configured correctly.
This tutorial assumes that you have already installed the OS, so I will not go into that here.

NFS Packages

First we will install the necessary packages:

$ yum install -y nfs-utils policycoreutils-python-utils policycoreutils-python

Now, for the NFS configuration, we will create a new directory:

$ mkdir /var/nfsshare

Optional

Create an LVM logical volume and mount it on the new directory (this assumes a volume group named vg00 with at least 50G free):

$ lvcreate -L50G -n nfs vg00
$ mkfs.xfs /dev/vg00/nfs
$ mount -t xfs /dev/vg00/nfs /var/nfsshare

and add it to your fstab (or autofs) file to make sure it is mounted at boot time:

$ cat >> /etc/fstab << EOF
/dev/vg00/nfs /var/nfsshare xfs defaults 0 0
EOF
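
A quick way to verify the fstab entry without rebooting (assuming the share is not in use yet):

$ umount /var/nfsshare
$ mount -a
$ df -h /var/nfsshare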

Continue…

Now we need to make sure everybody has write permissions on the directory (I know this is bad security-wise), so we will change it to 777:

$ chmod 777 /var/nfsshare

Edit the /etc/exports file to publish the NFS share directory:

$ cat > /etc/exports << EOF
/var/nfsshare *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=0)
EOF

Start the services:

$ systemctl start rpcbind
$ systemctl start nfs-server
$ systemctl start nfs-lock
$ systemctl start nfs-idmap

Make sure they run at boot time:

$ systemctl enable rpcbind
$ systemctl enable nfs-server
$ systemctl enable nfs-lock
$ systemctl enable nfs-idmap
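
At this point it is worth a quick sanity check that the server is up and the share is exported (exact output will vary):

$ systemctl status nfs-server
$ exportfs -v
$ showmount -e localhost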

Firewall

For firewalld we need to make sure the following services are open:

$ export FIREWALLD_DEFAULT_ZONE=`firewall-cmd --get-default-zone`
$ echo ${FIREWALLD_DEFAULT_ZONE}
public
$ firewall-cmd --permanent --zone=${FIREWALLD_DEFAULT_ZONE} --add-service=rpc-bind
$ firewall-cmd --permanent --zone=${FIREWALLD_DEFAULT_ZONE} --add-service=nfs
$ firewall-cmd --permanent --zone=${FIREWALLD_DEFAULT_ZONE} --add-service=mountd

Now we will reload the firewall configuration:

$ firewall-cmd --reload
$ firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: dhcpv6-client dns ftp http https kerberos ldap ldaps mountd nfs ntp ssh tftp
  ports: 80/tcp 443/tcp 389/tcp 636/tcp 53/tcp 88/udp 464/udp 53/udp 123/udp 88/tcp 464/tcp 123/tcp 9000/tcp 8000/tcp 8080/tcp 111/tcp 54302/tcp 20048/tcp 2049/tcp 46666/tcp 42955/tcp 875/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

SElinux

For the SELinux part we need to set two (2) booleans and label the NFS directory with semanage:

$ setsebool -P nfs_export_all_rw 1
$ setsebool -P nfs_export_all_ro 1

And switch the directory content context:

$ semanage fcontext -a -t public_content_rw_t  "/var/nfsshare(/.*)?"
$ restorecon -R /var/nfsshare
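
You can verify the new context took effect (the exact label may differ slightly between versions):

$ ls -Zd /var/nfsshare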

Testing

On another machine, make sure you have “nfs-utils” installed and mount the directory from the server:

$ mount -t nfs nfs-server:/var/nfsshare /mnt
$ touch /mnt/1 && rm -f /mnt/1

If the commands were successful and you can create and delete a file, we are good to go.
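
When you are done testing, unmount the share:

$ umount /mnt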

kubernetes-incubator

The “kubernetes-incubator” is a very good open source organization which hosts several very interesting Kubernetes projects. One of these projects is “external-storage”.

What is an ‘external provisioner’?

An external provisioner is a dynamic PV provisioner whose code lives out-of-tree/external to Kubernetes. Unlike in-tree dynamic provisioners that run as part of the Kubernetes controller manager, external ones can be deployed & updated independently.
External provisioners work just like in-tree dynamic PV provisioners. A StorageClass object can specify an external provisioner instance to be its provisioner, just as it can an in-tree provisioner. The instance will then watch for PersistentVolumeClaims that ask for that StorageClass and automatically create PersistentVolumes for them. For more information on how dynamic provisioning works, see the docs or this blog post.
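
In our setup the flow will look like this: a PVC requests the managed-nfs-storage StorageClass, the StorageClass names our provisioner (storage.io/nfs), and the nfs-client-provisioner pod reacts to the claim by creating a PV backed by a subdirectory of the NFS export. We will wire all of this up below.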

Clone

Now let’s clone the repository to our working server:

# git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator

Now we need to make a few adjustments to our files and to our openshift environment.

First let’s create a namespace for our provisioner:

# oc create namespace openshift-nfs-storage

And since we want the cluster monitoring stack to watch this namespace, we will add the label:

# oc label namespace openshift-nfs-storage "openshift.io/cluster-monitoring=true"
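
A quick check that the label is in place:

# oc get namespace openshift-nfs-storage --show-labels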

NOTE !!

In case (which I do not recommend) your NFS server is running on one of the worker nodes, then I would strongly advise you to label that node for management purposes:

# oc label node <NodeName> cluster.ocs.openshift.io/openshift-storage=''

Now we will update some of the files we just cloned, but first let’s switch to our newly created project:

# oc project openshift-nfs-storage

Next we will update the deployment and the RBAC accordingly:

# cd kubernetes-incubator/nfs-client/
# NAMESPACE=`oc project -q`
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/deployment.yaml
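
A quick grep should confirm the namespace was substituted in both files:

# grep namespace: deploy/rbac.yaml deploy/deployment.yaml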

And apply the RBAC configuration:

# oc create -f deploy/rbac.yaml

and we need to grant the provisioner’s service account the hostmount-anyuid SCC so it is allowed to mount the NFS volume:

# oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner

Now edit the “deployment.yaml” file with your NFS configuration. You may also want to change the PROVISIONER_NAME from fuseim.pri/ifs to something more descriptive, like storage.io/nfs; if you do, remember to also change the PROVISIONER_NAME in the StorageClass definition.

Change the following section:

env:
  - name: PROVISIONER_NAME
    value: fuseim.pri/ifs
  - name: NFS_SERVER
    value: <YOUR NFS SERVER HOSTNAME>
  - name: NFS_PATH
    value: /var/nfs
volumes:
  - name: nfs-client-root
    nfs:
      server: <YOUR NFS SERVER HOSTNAME>
      path: /var/nfs

to:

env:
  - name: PROVISIONER_NAME
    value: storage.io/nfs
  - name: NFS_SERVER
    value: nfs-server.example.com
  - name: NFS_PATH
    value: /var/nfsshare
volumes:
  - name: nfs-client-root
    nfs:
      server: nfs-server.example.com
      path: /var/nfsshare
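
If you prefer not to edit the file by hand, a few sed one-liners should produce the same result (a sketch that assumes the stock deployment.yaml and our example hostname and path; run it only once):

# sed -i'' "s|fuseim.pri/ifs|storage.io/nfs|g" deploy/deployment.yaml
# sed -i'' "s|<YOUR NFS SERVER HOSTNAME>|nfs-server.example.com|g" deploy/deployment.yaml
# sed -i'' "s|/var/nfs|/var/nfsshare|g" deploy/deployment.yaml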

Next, update deploy/class.yaml with the new PROVISIONER_NAME:

# cat > deploy/class.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: storage.io/nfs # or choose another name; must match the deployment's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
EOF

Let’s deploy everything we just modified:

# oc create -f deploy/class.yaml 
# oc create -f deploy/deployment.yaml

At this point (after a minute or so) you should see the pod running:

# oc get pods -n openshift-nfs-storage
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7894d87997-gqjcq   1/1     Running   0          21h
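
You can also check the provisioner logs to confirm it started and registered successfully (exact messages vary by image version):

# oc logs deployment/nfs-client-provisioner -n openshift-nfs-storage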

NOTE !!!

If you do not see anything running, make sure you can pull the image:

# oc import-image quay.io/external_storage/nfs-client-provisioner:latest

If that does not work, make sure you have the image available (if you are working in an air-gapped environment, change the image URL to point at your internal registry).

Testing

To test your new storage class, the repository comes with two simple manifests (a test claim and a test pod). Just create them:

# oc create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS server for a file named SUCCESS.
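
On the NFS server, something like this should find it (the provisioner creates a subdirectory per PV, so the exact path will differ in your environment):

$ find /var/nfsshare -name SUCCESS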

To clean up the test resources, just run:

# oc delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

If you want to use your newly created storage class, you can use the following PVC as an example:

# cat > deploy/pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

and create it:

# oc create -f deploy/pvc.yaml
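
The claim should reach Bound status within a few seconds:

# oc get pvc test-claim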

Multiple StorageClass

In some cases this StorageClass will not be the only StorageClass in the cluster.
In that scenario we may want to make it the default StorageClass by adding a value to its annotations:

# oc patch storageclass managed-nfs-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
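
Running oc get storageclass should now show managed-nfs-storage marked as (default):

# oc get storageclass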

Setting the Registry with NFS

If you tested the configuration and everything is working fine so far, then all we need to do now is create a new PVC and point the registry at it.

The PVC

The following PVC is for the OpenShift image registry, which needs ReadWriteMany:

# cat > deploy/pvc-registry.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: image-registry-nfs
  namespace: openshift-image-registry
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
EOF

Notice that we are claiming 100Gi according to the official documentation requirements, so we need to make sure our NFS storage has at the very least around 200Gi available to be sufficient.

Now create the PVC :

# oc create -f deploy/pvc-registry.yaml
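
Make sure the claim is Bound before moving on:

# oc get pvc image-registry-nfs -n openshift-image-registry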

Image Registry Operator

For the final step we will let the Image Registry Operator know that we have created a claim it can use, so let’s edit the operator configuration and modify the storage section.

First let’s set the registry’s management state to Managed:

#  oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'

And attach the storage :

# oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":"image-registry-nfs"}}}}'

We can view the changes with the following command:

# oc get configs.imageregistry.operator.openshift.io -o yaml

and make sure the storage section looks as follows:

storage:
  pvc:
    claim: image-registry-nfs
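
After a short while the registry pod should be redeployed with the NFS-backed volume attached; you can watch the rollout with:

# oc get pods -n openshift-image-registry -w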

That is it

Have FUN !!!
