OpenShift 4 — In-cluster CLI tools

Oren Oichman
5 min read · Aug 21, 2022


Day 2 operations

During my work with OpenShift I have noticed that in more than one case we need command line tools to be available to us from within the cluster (to save resources on external servers).

One good example is the ssh client: if one of our workers becomes unavailable, we can log in to the container and from there ssh into the misbehaving server to see what’s going on.
Another good example is copying the oc binary into the container and working with oc from there, which lets us use the service account’s permissions rather than the user’s permissions.
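To illustrate the service account point: once oc is available inside a pod, it falls back to the credentials Kubernetes mounts into every container, without any login. This is a sketch meant to be run inside the pod; the mount path is the standard Kubernetes service account mount.

```shell
# Kubernetes automatically mounts the service account token and CA
# certificate into every container under this well-known path:
ls /var/run/secrets/kubernetes.io/serviceaccount/

# With those credentials present, oc reports the service account
# identity (system:serviceaccount:<namespace>:<name>), not a user:
oc whoami
```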

Where to begin?

The first thing we should consider is which binaries we are going to use and whether any of them need special directories.
In this example we are using the ssh client, which needs the $HOME/.ssh/ directory, and kubectl/oc, which uses the $HOME/.kube/ directory. We could create a separate PVC for each application or one PVC for the whole home directory. We will go with the home directory PVC.

Creating the Image

In my opinion we do NOT need an “all in one” image which holds every command line binary we might want. Based on this assumption I am going to create as minimal an image as possible, with only the oc/kubectl and ssh binaries.

First create your working directory:

# mkdir ~/ubi-clitools


When Pods are running on OpenShift we want them to be always available and never “Completed” (unlike a Job/CronJob), so we need a process that never stops running.
For that we can create a very simple bash script that follows /dev/null, which never ends. Later on in this tutorial I will show how to use the container without the daemon script.

The bash script should look as follows:

(with your favorite EDITOR create the script file — the name was left out of the original, so this article will assume it is called daemon.sh)

#!/bin/bash
tail -f /dev/null

That is it.
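Why does tail -f /dev/null work as a keep-alive? /dev/null never produces new data, so tail blocks forever. You can convince yourself locally by bounding it with timeout — exit code 124 means timeout had to kill a process that was still running:

```shell
# tail follows /dev/null, which never grows, so tail never exits on its own.
# timeout kills it after 1 second and reports exit code 124.
timeout 1 tail -f /dev/null
echo "tail exit code: $?"
```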


Other than the packages/binaries, most of our command line tools containers will look the same.
In the following example I will install openssh-clients and download the oc binary into /usr/bin.

(with your favorite EDITOR create the file named Dockerfile)

FROM ubi8/ubi-minimal
RUN microdnf install -y openssh-clients tar && microdnf clean all
WORKDIR /opt/app-root/
# daemon.sh is the keep-alive script created above (the file name is assumed)
COPY daemon.sh /usr/sbin/
RUN chmod +x /usr/sbin/daemon.sh
# the client URL was left out of the original; the official mirror is one option
RUN curl -s -o - https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz | tar zxvf - -C /usr/bin/
USER 1001
ENTRYPOINT ["/usr/sbin/daemon.sh"]

Let’s build the image. First set the destination registry:

# export REGISTRY=""

And now build the image:

# buildah bud -f Dockerfile -t ${REGISTRY}/ubi-clitools && buildah push ${REGISTRY}/ubi-clitools

The image should be around 250 MB, so we are good to move on and build the deployment.

On OpenShift

Because this is an administrative namespace, it is good practice to give it a name starting with kube or openshift, so we will use the following command:

# oc create ns openshift-clitools

Then we would want to add the namespace to our monitoring stack :

# oc label namespace openshift-clitools ""

For the final step, let’s switch to this namespace:

# oc project openshift-clitools

Service Account

In some cases the tools will need permissions to manage the cluster. In our case we want to grant cluster-admin to our service account so the oc command line has administrative privileges.

# oc create sa clitools -n openshift-clitools

And for the privileges:

# oc adm policy add-cluster-role-to-user cluster-admin -z clitools -n openshift-clitools
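To verify the binding took effect, you can check what the service account is allowed to do by impersonating it with oc auth can-i — a quick sanity check, assuming you yourself are logged in with cluster-admin rights:

```shell
# "yes" means the service account effectively has cluster-admin
# (can perform any verb on any resource).
oc auth can-i '*' '*' --as=system:serviceaccount:openshift-clitools:clitools
```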

On the Pod, our home directory is a persistent volume claim (PVC), so all of our settings (.bashrc, for example) are kept for the next time we connect to the Pod console; and if the Pod dies and a new Pod springs up, everything stays intact.

Let’s go ahead and create the PVC :

(with your favorite EDITOR create the file homdir-pvc.yaml )

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: homedir-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 300Mi

I don’t think we need more than 300 MB for our home directory, but if you want more for yourself, just set it as high as you want.

# oc apply -f homdir-pvc.yaml
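Depending on your storage class, the claim may stay Pending until the first consumer pod is scheduled (the WaitForFirstConsumer binding mode); either way, you can check its state with:

```shell
# STATUS should end up "Bound" once a pod that uses the claim is scheduled.
oc get pvc homedir-claim -n openshift-clitools
```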

Because this is an admin tools image and not a service that needs to be consistently available, we don’t need more than 1 replica, and if needed we can pin the pod to a specific node with nodeName.

For example, if our worker is named worker-01, we can create the following deployment:

(with your favorite EDITOR create the file clitools-deployment.yaml )

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubi-clitools
  labels:
    app: ubi-clitools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubi-clitools
  template:
    metadata:
      labels:
        app: ubi-clitools
    spec:
      serviceAccount: clitools
      serviceAccountName: clitools
      nodeName: worker-01
      containers:
        - name: ubi-clitools
          image: ${REGISTRY}/ubi-clitools
          env:
            - name: HOME
              value: "/opt/app-root"
          volumeMounts:
            - mountPath: "/opt/app-root/"
              name: homedir
      volumes:
        - name: homedir
          persistentVolumeClaim:
            claimName: homedir-claim

(Don’t forget to change the REGISTRY to your internal registry)

Now let’s apply the deployment file :

# oc apply -f clitools-deployment.yaml
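It’s worth confirming that the deployment rolled out and that the pod landed on worker-01 — standard oc commands, using only the names defined above:

```shell
# waits until the deployment reaches its desired replica count
oc rollout status deployment/ubi-clitools -n openshift-clitools

# -o wide also shows which node the pod was scheduled on
oc get pods -n openshift-clitools -o wide
```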

Now that we have everything deployed, we can start playing with it a little bit.

First, for a simple and quick login, we can create the following alias:

# alias oc-rsh='oc rsh $(oc get pods -o name -n openshift-clitools | grep clitools) /bin/bash'

(if you want it permanently, you can add the alias to your .bashrc file)

Once you are logged in, create a .bashrc file with the following values:

# .bashrc 
export TERM="linux";
export PS1="\[\033[01;33m\]clitools $\[\033[00m\] "

Additionally, create a new .bash_profile with the following content:

# .bash_profile
# Get the aliases and functions
if [ -f /opt/app-root/.bashrc ]; then
    source /opt/app-root/.bashrc
fi

# User specific environment and startup programs
export PATH=$PATH:$HOME/.local/bin:$HOME/bin

Once we’ve created the file we can go ahead and source it:

# source .bashrc

The next time you log in, you will automatically get your new prompt.

Next Step

It’s all up to you: customize your profile however you want, just make sure you keep it as text files, which will serve you better than binaries in the home directory.

If you have any questions, feel free to respond / leave a comment.