OpenShift 4 automation with Ansible from a Container

Oren Oichman
Published in DevOps.dev · 12 min read · Oct 29, 2022


Past and Future

When you deal with technology long enough, you start to see things repeat themselves, in many variations and in interesting scenarios. I think this is a classic case where tools from the past help us get to a better future: using automation tools to make sure what we run is what we want, and applying Infrastructure as Code along the way.

Automation

If you are new to OpenShift/Kubernetes or to Ansible, I suggest learning about those tools before continuing with this tutorial.

In this tutorial we are going to run Ansible in a Pod and have the playbook help us with some Kubernetes maintenance, so Ansible will actually run in OpenShift, for OpenShift. As an Open Source lover, this is one of the main reasons I like working with these kinds of tools.

Step by Step

The first step in our journey will be to create the container image we are going to use to run Ansible with the OpenShift module, followed by creating an Ansible role and working with templates and tasks within the role to define what we want to achieve.

The steps are:

  1. Build the image which will run Ansible with the Kubernetes module.
  2. Create a service account with the clusterrole/role and bindings.
  3. Run the image within OpenShift.

In our scenario we want to run a very simple health check playbook that creates a Pod, a Service, and a Route, then runs a simple URL check, reports whether everything is available, and finally deletes the resources within the namespace. For this to work we will need to create an Ansible container with the OpenShift module, build the playbook, test it from outside the cluster with the service account the Pod will be using, and create a Role and a RoleBinding to make sure the playbook can create the given resources. In case you want to test namespace creation as well, you will need to create a ClusterRole and a ClusterRoleBinding accordingly.

Basic Steps

Let’s first create a base working directory to work from:

$ mkdir ose-ansible
$ cd ose-ansible

Building the Image

If you have been using Ansible for some time you know that Ansible is based on Python, so we are going to use a Python 3.8 image and install Ansible with the required modules.

With your favorite editor (VIM obviously) copy/paste the following text into your Dockerfile:

FROM python:3.8-slim
ENV HOME=/opt/app-root/ \
    PATH="${PATH}:/root/.local/bin"
COPY run-ansible.sh /usr/bin/
RUN mkdir -p /opt/app-root/src && \
    mkdir /opt/app-root/.kube/ && \
    chmod a+x /usr/bin/run-ansible.sh
RUN pip install pip --upgrade && \
    pip install ansible openshift kubernetes
LABEL \
    name="openshift/ose-ansible" \
    summary="OpenShift's installation and configuration tool" \
    description="A containerized ose-openshift image to let you run playbooks" \
    url="https://github.com/openshift/openshift-ansible" \
    io.k8s.display-name="openshift-ansible" \
    io.k8s.description="A containerized ose-openshift image to let you run playbooks on OpenShift" \
    io.openshift.expose-services="" \
    io.openshift.tags="openshift,install,upgrade,ansible" \
    version="v4" \
    release="8" \
    architecture="x86_64" \
    atomic.run="once" \
    License="GPLv2+" \
    vendor="slim" \
    io.openshift.maintainer.product="OpenShift Container Platform" \
    io.openshift.build.commit.id="1"
WORKDIR /opt/app-root/
ENTRYPOINT [ "/usr/bin/run-ansible.sh" ]
CMD [ "/usr/bin/run-ansible.sh" ]

As you can see, the image calls a bash script named run-ansible.sh. This script just makes sure the variables are set and runs the ansible-playbook command.

Again, with your favorite editor, create a new file called “run-ansible.sh” and copy the following content:

#!/bin/bash
if [[ -z "$INVENTORY" ]]; then
  echo "No inventory file provided (environment value INVENTORY)"
  INVENTORY=/etc/ansible/hosts
else
  export INVENTORY=${INVENTORY}
fi
if [[ -z "$OPTS" ]]; then
  echo "no OPTS option provided (OPTS environment value)"
else
  export OPTS=${OPTS}
fi
if [[ -z "$K8S_AUTH_API_KEY" ]]; then
  echo "No Kubernetes Authentication key provided (K8S_AUTH_API_KEY environment value)"
else
  export K8S_AUTH_API_KEY=${K8S_AUTH_API_KEY}
fi
if [[ -z "${K8S_AUTH_KUBECONFIG}" ]]; then
  echo "NO K8S_AUTH_KUBECONFIG environment variable configured"
  exit 1
fi
if [[ -z "${K8S_AUTH_HOST}" ]]; then
  echo "no Kubernetes API provided (K8S_AUTH_HOST environment value)"
else
  export K8S_AUTH_HOST=${K8S_AUTH_HOST}
fi
if [[ -z "${K8S_AUTH_VALIDATE_CERTS}" ]]; then
  echo "No validation flag provided (Default: K8S_AUTH_VALIDATE_CERTS=true)"
else
  export K8S_AUTH_VALIDATE_CERTS=${K8S_AUTH_VALIDATE_CERTS}
fi
if [[ -z "$PLAYBOOK_FILE" ]]; then
  echo "No Playbook file provided... exiting"
  exit 1
else
  ansible-playbook ${OPTS} $PLAYBOOK_FILE
fi

Now that everything is in place we can go ahead and build the image. In my example I am using “registry.example.com” to store the image; please go ahead and set it to your own registry:

$ export REGISTRY="registry.example.com"

and now run the buildah command :

$ buildah bud -f Dockerfile -t ${REGISTRY}/ose-ansible

NOTE

If you don’t want to build the image yourself, I have already created it, and you can download it from quay.io here.

This may take about 10 minutes, so try to be patient with this step…

Inventory File

The inventory file is very simple: all we need to add is localhost, because we are going to use a kubeconfig that points to the OpenShift API. So we will work with the following inventory file.

With your favorite editor create a file named “inventory” with the following content :

[localhost]
127.0.0.1 ansible_connection=local ansible_host=localhost ansible_python_interpreter=/usr/bin/python3

This will ensure Ansible uses Python 3.8 and all the attached modules.

The playbook

The playbook in our case is going to be very simple: all it does is call a role we are going to create, and all our tasks will run from that role.

With your favorite editor create a file named “playbook.yaml” with the following content :

---
- name: Run the hellogo image
  hosts: localhost
  roles:
    - health-check

Now we are going to create the role backbone with the following command :

$ mkdir roles
$ ansible-galaxy init --init-path roles health-check
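
The ansible-galaxy command should leave you with a role skeleton along these lines (the exact contents may vary slightly between Ansible versions):

roles/health-check/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml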

Templates

First let’s create the templates we are going to use for our OpenShift testing.
The templates will be for the Pod, which in our case will run the monkey-app application I have built in a different tutorial (you can find the link here), and templates for the Service and the Route as well.

We are going to use the same templates to delete the resources after our test is completed.

Pod Template

The ansible-galaxy command created all the directories we need for us. Now we will create a new file under “roles/health-check/templates” named pod.yaml.j2.

With your favorite editor create a file named “pod.yaml.j2” with the following content :

apiVersion: v1
kind: Pod
metadata:
  name: monkey-app
  labels:
    app: monkey-app
spec:
  containers:
    - name: monkey-app
      image: quay.io/two.oes/monkey-app:latest
      ports:
        - containerPort: 8080

Now we will edit the main.yml file under the role’s “tasks” directory; in other words, we will edit the “roles/health-check/tasks/main.yml” file.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: set the monkey app Pod to present
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'pod.yaml.j2') | from_yaml }}"
    namespace: "{{ namespace }}"

Notice that the namespace will be the name of the project we select, placed in the namespace variable. This is one of the things we are going to set as an environment variable and add to the Ansible run at the final step, when we run it from OpenShift itself; for now we will just set it in the defaults file.

With your favorite editor open the “roles/health-check/defaults/main.yml” file and add the following content :

namespace: health-check
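
Because role defaults have the lowest variable precedence in Ansible, you can override this value at run time. A quick sketch (the namespace name here is just an illustration):

$ ansible-playbook -i inventory -e "namespace=my-health-check" playbook.yaml

The same works through the OPTS environment variable that our run-ansible.sh passes to ansible-playbook, e.g. OPTS="-vv -e namespace=my-health-check".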

Now we will do the same for the Service and the Route. First we will create the template for the Service, and then we will update the tasks main.yml file.

With your favorite editor create a file named “roles/health-check/templates/service.yaml.j2” with the following content :

apiVersion: v1
kind: Service
metadata:
  name: monkey-app
spec:
  selector:
    app: monkey-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Update the main.yml under the tasks directory and add the following lines, which refer to the new Service YAML Jinja template under the templates directory.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: set the monkey app Service to present
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'service.yaml.j2') | from_yaml }}"
    namespace: "{{ namespace }}"

For the next piece of the puzzle we will create the Route for our service, the same way we have so far: first the template, then the reference in the main.yml file.

With your favorite editor create a file named “roles/health-check/templates/route.yaml.j2” with the following content :

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: monkey-app
spec:
  port:
    targetPort: 8080
  to:
    kind: Service
    name: monkey-app
    weight: 100
  wildcardPolicy: None

Update the tasks main.yml once more.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: set the monkey app Route to present
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'route.yaml.j2') | from_yaml }}"
    namespace: "{{ namespace }}"

After the playbook finishes creating all the objects, let’s tell it to wait for 10 seconds before running the test, to make sure the application is available.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: Sleep for 10 seconds and continue with play
  ansible.builtin.wait_for:
    timeout: 10
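
A fixed sleep is the simplest option. If you would rather wait for the Pod itself, here is a hedged alternative sketch using the wait support built into kubernetes.core.k8s_info (the 60-second timeout is an arbitrary choice of mine):

- name: Wait until the monkey app Pod reports Ready (alternative to the fixed sleep)
  kubernetes.core.k8s_info:
    kind: Pod
    name: monkey-app
    namespace: "{{ namespace }}"
    wait: yes
    wait_condition:
      type: Ready
      status: "True"
    wait_timeout: 60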

All we need to do now is create a new task that runs a simple URL test against our route to make sure the application replies. If the return code is successful, we will delete all the resources we created in the playbook.

First we need to get the route details.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: Get an existing Route object
  kubernetes.core.k8s_info:
    api_version: route.openshift.io/v1
    kind: Route
    name: monkey-app
    namespace: "{{ namespace }}"
  register: route_url

- name: show route details
  ansible.builtin.debug:
    msg: "{{ route_url.resources[0].spec.host }}"

Once the playbook has the route, we can run the check in the following task.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: Run a URI test to get a positive response from our application.
  ansible.builtin.uri:
    url: "http://{{ route_url.resources[0].spec.host }}/api/says=banana"
    method: GET
  ignore_errors: yes
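
Note that ignore_errors lets the play continue to the cleanup tasks even when the check fails. If you also want to keep the result for later (for example to fail the job or fire a notification after cleanup), here is a small variation; health_result is simply a name I picked for this sketch:

- name: Run a URI test and keep the result for later
  ansible.builtin.uri:
    url: "http://{{ route_url.resources[0].spec.host }}/api/says=banana"
    method: GET
  register: health_result
  ignore_errors: yes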

Once the testing stage is complete, we will go ahead and delete the resources.

With your favorite editor open the “roles/health-check/tasks/main.yml” file and add the following content :

- name: set the monkey app Pod to absent
  kubernetes.core.k8s:
    state: absent
    definition: "{{ lookup('template', 'pod.yaml.j2') | from_yaml }}"
    namespace: "{{ namespace }}"

- name: set the monkey app Service to absent
  kubernetes.core.k8s:
    state: absent
    definition: "{{ lookup('template', 'service.yaml.j2') | from_yaml }}"
    namespace: "{{ namespace }}"

- name: set the monkey app Route to absent
  kubernetes.core.k8s:
    state: absent
    definition: "{{ lookup('template', 'route.yaml.j2') | from_yaml }}"
    namespace: "{{ namespace }}"

And we are done for now with our role’s tasks file.

Testing

In order to run the container (or the Pod later on) we need to set a few environment variables and mount several files for the playbook to work.

Environment Variable

First we will set up the environment variables we need:

  • INVENTORY - sets the inventory file for the playbook.
  • OPTS - extra flags that do not require a follow-up reference (we will use this to set the run verbosity with a single or multiple “v” option).
  • K8S_AUTH_API_KEY - the API key for the OpenShift cluster. If you are using a regular user and not system:admin, you can get it by running: $(oc whoami -t)
  • K8S_AUTH_KUBECONFIG - a reference to the kubeconfig we are going to use for the run; for the test run we will use our user kubeconfig, and on OpenShift we will use the service account kubeconfig.
  • K8S_AUTH_HOST - the URL of the OpenShift API. A very simple way to get it is by running the $(oc whoami --show-server) command.
  • K8S_AUTH_VALIDATE_CERTS - whether to validate the API TLS certificate; I usually set it to “false”.
  • PLAYBOOK_FILE - a reference to the playbook file which Ansible is going to run.

Before testing, we need to create an src directory for Ansible:

$ mkdir src
$ chmod 777 src

Create the health-check namespace :

$ oc new-project health-check

Now let’s do a test run of our playbook:

$ podman run -ti --rm --name ose-ansible  -e OPTS="-vv" \
-v ~/.kube/config:/opt/app-root/kubeconfig:Z,ro \
-v $(pwd)/src:/opt/app-root/src:Z,rw \
-v $(pwd):/opt/app-root/ose-ansible \
-e PLAYBOOK_FILE=/opt/app-root/ose-ansible/playbook.yaml \
-e K8S_AUTH_KUBECONFIG=/opt/app-root/kubeconfig \
-e INVENTORY=/opt/app-root/ose-ansible/inventory \
-e K8S_AUTH_API_KEY=$(oc whoami -t) \
-e DEFAULT_LOCAL_TMP=/tmp/ \
-e K8S_AUTH_HOST=$(oc whoami --show-server) \
-e K8S_AUTH_VALIDATE_CERTS=false \
ose-ansible

As you can see, we mounted one file and two directories, and we ran the Ansible playbook with an interactive terminal (“-ti”) so we can see the Ansible colors in the output.

If everything is working as expected you should get the following results :

PLAY RECAP *****************************************************************************************************************************************************************
localhost : ok=10 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

From OpenShift

For our final step we will create a Kubernetes CronJob resource to run this playbook from OpenShift itself.

As in most cases, we want to monitor the logs to make sure the health check is successful, so be sure to ship the logs to an external system where you can follow them and confirm everything is OK.

ServiceAccount

We want to control what resources the Ansible playbook can create, list, and delete, so we need to create a new service account and then a Role and a RoleBinding to give it the needed permissions:

$ oc create sa health-check
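
If you want to repeat the external podman test above with this service account instead of your own user token, you can fetch its token; depending on your oc version, one of the following should work (oc sa get-token was deprecated in 4.11 in favor of oc create token):

$ oc sa get-token health-check
$ oc create token health-check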

Now we will create the Role that allows the playbook to create a Pod, a Service, and a Route. Note that the playbook also deletes these resources at the end of the run, so the Role needs the “delete” verb as well, and Routes live in the route.openshift.io API group rather than the core group. The Role should look as such:

Create a new directory named “YAML”.

With your favorite editor create a new file named “YAML/role.yaml” and add the following content :

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: health-check
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "watch", "list", "update", "delete"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "watch", "list", "update", "delete"]
  - apiGroups: ["route.openshift.io"]
    resources: ["routes"]
    verbs: ["create", "get", "watch", "list", "update", "delete"]

Same step for the rolebinding :

With your favorite editor create a new file named “YAML/rolebinding.yaml” and add the following content :

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: health-check
  namespace: health-check
subjects:
  - kind: ServiceAccount
    name: health-check
    namespace: health-check
roleRef:
  kind: Role
  name: health-check
  apiGroup: rbac.authorization.k8s.io

Let’s create all the resources :

$ oc create -f YAML/role.yaml -f YAML/rolebinding.yaml
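
You can sanity-check the binding with oc auth can-i, impersonating the service account (the expected answer is yes):

$ oc auth can-i create pods -n health-check \
    --as=system:serviceaccount:health-check:health-check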

Now that the setup is ready, we need to read the token from a file inside the Pod. To make that happen we need to change one small thing in our run-ansible.sh file.

With your favorite editor, open the file called “run-ansible.sh” and change the K8S_AUTH_API_KEY / K8S_AUTH_KUBECONFIG section according to the following content (note the added export, so that the ansible-playbook process actually sees the token):

REPLACE :

if [[ -z "$K8S_AUTH_API_KEY" ]]; then
  echo "No Kubernetes Authentication key provided (K8S_AUTH_API_KEY environment value)"
else
  export K8S_AUTH_API_KEY=${K8S_AUTH_API_KEY}
fi
if [[ -z "${K8S_AUTH_KUBECONFIG}" ]]; then
  echo "NO K8S_AUTH_KUBECONFIG environment variable configured"
  exit 1
fi

WITH :

if [[ -f /var/run/secrets/kubernetes.io/serviceaccount/token ]]; then
  export K8S_AUTH_API_KEY=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
elif [[ -z "$K8S_AUTH_API_KEY" ]]; then
  echo "No Kubernetes Authentication key provided (K8S_AUTH_API_KEY environment value)"
else
  export K8S_AUTH_API_KEY=${K8S_AUTH_API_KEY}
fi

Next we will copy the needed files into our ose-ansible image.

With your favorite editor (VIM obviously) add the following line to your Dockerfile, right after the existing COPY line. Ansible resolves roles relative to the playbook’s directory, and our Pod will mount the playbook at /opt/app-root/ansible/playbook.yaml, so we copy the roles beside it:

COPY roles /opt/app-root/ansible/roles/

and recreate the ose-ansible image :

$ buildah bud -f Dockerfile -t ${REGISTRY}/ose-ansible && buildah push ${REGISTRY}/ose-ansible

ConfigMap

We now need to take the inventory and the playbook.yaml and make them available on OpenShift by creating ConfigMaps from them and mounting each one as a subPath.

From the inventory file we will run :

$ oc create configmap inventory --from-file=inventory=inventory

Now let’s do the same for our playbook.yaml file :

$ oc create configmap playbook \
    --from-file=playbook.yaml=playbook.yaml
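
A quick way to confirm both ConfigMaps landed (use oc describe to inspect their content):

$ oc get configmap inventory playbook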

Now that everything is set up, all we need to do is build the CronJob. In my example I will run it every 5 minutes.

With your favorite editor, create a new file named “cronjob.yaml” and add the following content to it:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-health-check
spec:
  concurrencyPolicy: Forbid
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: health-check
          containers:
            - name: health-check
              image: ${REGISTRY}/ose-ansible
              env:
                - name: OPTS
                  value: "-vv"
                - name: HOME
                  value: "/opt/app-root/"
                - name: PLAYBOOK_FILE
                  value: "/opt/app-root/ansible/playbook.yaml"
                - name: INVENTORY
                  value: "/opt/app-root/ansible/inventory"
                - name: DEFAULT_LOCAL_TMP
                  value: "/tmp/"
                - name: K8S_AUTH_HOST
                  value: "<Server API URL>"
                - name: K8S_AUTH_VALIDATE_CERTS
                  value: "false"
              volumeMounts:
                - name: playbook
                  mountPath: /opt/app-root/ansible/playbook.yaml
                  subPath: playbook.yaml
                - name: inventory
                  mountPath: /opt/app-root/ansible/inventory
                  subPath: inventory
                - name: cache-volume
                  mountPath: /opt/app-root/src
          volumes:
            - name: playbook
              configMap:
                name: playbook
            - name: inventory
              configMap:
                name: inventory
            - name: cache-volume
              emptyDir: {}

Now apply the newly created file:

$ oc apply -f cronjob.yaml
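
If you don’t feel like waiting for the schedule, you can also trigger a one-off run straight from the CronJob (the job name here is arbitrary):

$ oc create job --from=cronjob/cron-health-check health-check-manual-1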

Now comes the hard part: waiting a full 5 minutes. Then, if we run “oc get all”, we should see something like:

NAME                                   READY   STATUS      RESTARTS   AGE
pod/cron-health-check-27784600-k98tf   0/1     Completed   0          2m30s

NAME                                SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/cron-health-check     */5 * * * *   False     0        2m38s           3m39s

NAME                                 COMPLETIONS   DURATION   AGE
job.batch/cron-health-check-27784600   1/1         18s       2m38s

You can now check the logs of your playbook to see that everything is as expected, as shown below.
What’s more, you can trigger anything you want in case of a failure by adding a new task that runs after the URL task.
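
For example, with the pod name from the output above:

$ oc logs pod/cron-health-check-27784600-k98tf

And here is a hedged sketch of such a failure hook, assuming you registered the URI result as health_result (as in the earlier variation) and have some hypothetical webhook endpoint to call:

- name: Notify an external system when the health check failed
  ansible.builtin.uri:
    url: "https://hooks.example.com/health-check"
    method: POST
    body_format: json
    body:
      status: "failed"
      namespace: "{{ namespace }}"
  when: health_result is failed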

That is it.
Have fun

If you have any questions feel free to respond / leave a comment.
You can find me on linkedin at : https://www.linkedin.com/in/orenoichman
Or twitter at : https://twitter.com/ooichman
