Dynamic Admission Controllers in OpenShift 4

Oren Oichman
Apr 7, 2022

Why This Tutorial?

There are only a few places where we can create policies to check whether security or high-availability requirements are met, or to change the way an application is deployed in order to enforce such policies. The admission controller is one of those places!

Who is it for?

This tutorial is for PaaS admins who are NOT developers but do want to create a “Group Policy” style of tool in order to automate their cluster, yet have no experience in application deployment.
The requirements for this tutorial are a basic understanding of a programming language (we will write the code in Go, but Python/NodeJS is also a good option) and a basic understanding of HTTP.
I highly recommend you cover that basic knowledge first so you will get the most out of this tutorial.

Where to Begin?

First we need to understand how admission controllers work and how the cluster communicates with them. Then we will go a level deeper into what is actually communicated between the cluster and the webhook, and in the last part I will provide a small example (in code) of how to both validate and mutate objects so they fit our needs.

What is a dynamic admission controller?

In short: a “webhook” …
In detail: it is a webhook that the Kubernetes API server calls when a set of conditions is met. During the communication the Kubernetes API server sends a “request” to the webhook and waits for a “response”. Based on the “response”, the cluster knows what to do with the object.
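
To make the exchange concrete, here is a trimmed sketch of such a request and response (the UID, namespace and object here are hypothetical placeholders, not output from a real cluster). The API server sends an AdmissionReview object carrying a request field:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
    "namespace": "imagepullpolicy-test",
    "operation": "CREATE",
    "object": { "kind": "Pod", "spec": { "containers": [ "..." ] } }
  }
}

and the webhook answers with the same kind of object, echoing the UID in a response field:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
    "allowed": true
  }
}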

Dynamic Admission Controller Types

When Kubernetes receives an API request, there are 2 points during the transaction where it checks for a webhook configuration:

  1. the Mutation Stage
  2. the Validation Stage

In the API call transaction, mutating webhooks are invoked first (after authentication, authorization and the built-in admission plugins), then the object schema is validated, and the validating webhooks are invoked last, just before the object is persisted to etcd.

There are 2 kinds of Kubernetes resources the PaaS admin can use for the webhook definition in order to tell the cluster to call the webhook at each of these stages.

We can grep the webhook definitions from the api-resources command:

# oc api-resources -o name | grep -i webhook
mutatingwebhookconfigurations.admissionregistration.k8s.io
validatingwebhookconfigurations.admissionregistration.k8s.io
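
If you want to explore the fields of these resources before creating one, oc explain works here as well:

# oc explain validatingwebhookconfigurations.webhooks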

To make things simpler:

  • mutation stage == MutatingWebhookConfiguration
  • validation stage == ValidatingWebhookConfiguration

Now that we know the kinds of resources we need, we can start building the webhook itself (don’t worry, we will get back to the resource definitions in detail later on).

Building the Webhook

When we think about Kubernetes, Go is the go-to language we will want to use, but that is not the case for most admins, who are used to other scripting languages such as Bash, Perl or Python.
We can write the webhook in any language we want:

  • Go
  • Python
  • Nodejs
  • JAVA
  • Php
  • Apache/Nginx + bash

In this example we will use Go to build the webhook server and create a response for both the mutation and the validation requests.
In the mutation we will check whether “imagePullPolicy” is defined on all the containers in the Pod and whether its value is “Always”; if it is not, we will change it so it matches.
In the validation we will do the same check without changing the value. If all the values are “Always”, the webhook will return an allowed (“o.k.”) response.

Initializing Go

First we will build our GOPATH directory.

$ mkdir imagepullpolicy 

And export the GOPATH variable :

$ export GOPATH="$(pwd)/imagepullpolicy/"

Next we will create the needed directories :

$ mkdir imagepullpolicy/{src,bin,pkg}
$ mkdir imagepullpolicy/src/ippac/
$ cd imagepullpolicy/src/ippac/

Go Coding

Now we are going to start with the Go files, beginning with main.go.
In this file we will start the TLS web server and create 2 URL references, one for the mutation and one for the validation.
(Notes are in the code)

With your favorite editor (IDE) create the following file (main.go):

package main

import (
	"context"
	"crypto/tls"
	"flag"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"syscall"
)

type myServerHandler struct{}

var (
	tlscert, tlskey string
)

// getEnv returns the value of an environment variable, or a fallback default
func getEnv(key, fallback string) string {
	value, exists := os.LookupEnv(key)
	if !exists {
		value = fallback
	}
	return value
}

func main() {

	// Check the environment variables for user-defined placements
	certpem := getEnv("CERT_FILE", "/opt/app-root/tls/tls.crt")
	keypem := getEnv("KEY_FILE", "/opt/app-root/tls/tls.key")
	port := getEnv("PORT", "8443")

	// Setting the variables for the TLS key pair
	flag.StringVar(&tlscert, "tlsCertFile", certpem, "The file containing the x509 certificate for HTTPS")
	flag.StringVar(&tlskey, "tlsKeyFile", keypem, "The file containing the x509 private key")
	flag.Parse()

	certs, err := tls.LoadX509KeyPair(tlscert, tlskey)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to load certificate/key pair: %v", err)
	}

	// Setting up the HTTP server with TLS (HTTPS)
	server := &http.Server{
		Addr:      fmt.Sprintf(":%v", port),
		TLSConfig: &tls.Config{Certificates: []tls.Certificate{certs}},
	}

	// Two handler values (empty structs), one for each function, selected by the URL path
	// the HTTP request is calling. In our example we have 2 paths: /mutate and /validate.
	mr := myServerHandler{}
	gs := myServerHandler{}
	mux := http.NewServeMux()
	// Setting a function reference for the /mutate URL
	mux.HandleFunc("/mutate", mr.mutserve)
	// Setting a function reference for the /validate URL
	mux.HandleFunc("/validate", gs.valserve)
	server.Handler = mux

	// Start the server in a goroutine, with the TLS configuration we provided when we
	// defined the server variable
	go func() {
		if err := server.ListenAndServeTLS("", ""); err != nil {
			// Failed to listen and serve the webhook server
			fmt.Fprintf(os.Stderr, "Failed to listen and serve webhook server: %v", err)
		}
	}()

	// The server is running on port 8443 by default
	fmt.Fprintf(os.Stdout, "The server is running on port: %s\n", port)

	// Next we set up the signal handling for our HTTP server by sending the right
	// signals to the channel, so the webhook server can shut down gracefully
	signalChan := make(chan os.Signal, 1)
	signal.Notify(signalChan, syscall.SIGINT, syscall.SIGTERM)
	<-signalChan
	fmt.Fprintf(os.Stdout, "Got shutdown signal, shutting down the webhook server gracefully...\n")
	server.Shutdown(context.Background())
}
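
If you want to smoke-test the server locally before building an image (and after the Go module is initialized, which we do below), a self-signed certificate is enough. The paths here are just an assumption for a local run:

$ openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=localhost" -keyout /tmp/tls.key -out /tmp/tls.crt -days 1
$ CERT_FILE=/tmp/tls.crt KEY_FILE=/tmp/tls.key PORT=8443 go run .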

For our validate function we will create a new file named validate.go, which runs the logic of testing the image pull policy and sends back an “o.k.” if all the images are set to Always.
(Notes are in the code)

Again with your favorite editor (IDE) create the following file (validate.go):

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func (gs *myServerHandler) valserve(w http.ResponseWriter, r *http.Request) {
	if r.URL.Path != "/validate" {
		fmt.Fprintf(os.Stderr, "Not a valid URL path\n")
		http.Error(w, "Not a valid URL path", http.StatusBadRequest)
		return
	}

	var Body []byte
	if r.Body != nil {
		if data, err := ioutil.ReadAll(r.Body); err == nil {
			Body = data
		} else {
			fmt.Fprintf(os.Stderr, "Unable to copy the Body\n")
		}
	}
	if len(Body) == 0 {
		fmt.Fprintf(os.Stderr, "Unable to retrieve Body from the webhook\n")
		http.Error(w, "Unable to retrieve Body from the API", http.StatusBadRequest)
		return
	}
	fmt.Fprintf(os.Stdout, "Body retrieved\n")

	arRequest := &admissionv1.AdmissionReview{}
	if err := json.Unmarshal(Body, arRequest); err != nil {
		fmt.Fprintf(os.Stderr, "Unable to unmarshal the Body request\n")
		http.Error(w, "Unable to unmarshal the Body request", http.StatusBadRequest)
		return
	}

	// Making sure we are not running on a system namespace
	if isKubeNamespace(arRequest.Request.Namespace) {
		fmt.Fprintf(os.Stderr, "Unauthorized namespace\n")
		http.Error(w, "Unauthorized namespace", http.StatusBadRequest)
		return
	}

	// Initialize the Pod values from the request
	raw := arRequest.Request.Object.Raw
	pod := corev1.Pod{}
	if err := json.Unmarshal(raw, &pod); err != nil {
		fmt.Fprintf(os.Stderr, "Unable to unmarshal the Pod information\n")
		http.Error(w, "Unable to unmarshal the Pod information", http.StatusBadRequest)
		return
	}

	// Now we are going to start building the response, assuming failure by default
	arResponse := admissionv1.AdmissionReview{
		Response: &admissionv1.AdmissionResponse{
			Result:  &metav1.Status{Status: "Failure", Message: "Not all images are set to the \"Always\" image pull policy", Code: 401},
			UID:     arRequest.Request.UID,
			Allowed: false,
		},
	}

	// Let's take an array of all the containers and check each pull policy
	containers := pod.Spec.Containers
	pullPolicyFlag := true
	for i, container := range containers {
		fmt.Fprintf(os.Stdout, "container[%d]= %s imagePullPolicy=%s\n", i, container.Name, container.ImagePullPolicy)
		if container.Name != "recycler-container" && container.ImagePullPolicy != "Always" {
			pullPolicyFlag = false
		}
	}

	arResponse.APIVersion = "admission.k8s.io/v1"
	arResponse.Kind = arRequest.Kind
	if pullPolicyFlag {
		arResponse.Response.Allowed = true
		arResponse.Response.Result = &metav1.Status{Status: "Success",
			Message: "All images are set to the \"Always\" image pull policy",
			Code:    201}
	}

	resp, respErr := json.Marshal(arResponse)
	if respErr != nil {
		fmt.Fprintf(os.Stderr, "Unable to marshal the response\n")
		http.Error(w, "Unable to marshal the response", http.StatusBadRequest)
		return
	}
	if _, werr := w.Write(resp); werr != nil {
		fmt.Fprintf(os.Stderr, "Unable to write the response\n")
		http.Error(w, "Unable to write the response", http.StatusBadRequest)
	}
}

The mutate part is where things start to get more interesting. In order to change the image pull policy we need to create a JSON patch that consists of one operation for each image pull policy the Pod needs set, depending on the number of images the Pod needs to pull.
(Notes are in the code)
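
To make the patch concrete, here is what the marshalled patch would look like for a hypothetical Pod with two containers that both had imagePullPolicy: IfNotPresent:

[
  { "op": "replace", "path": "/spec/containers/0/imagePullPolicy", "value": "Always" },
  { "op": "replace", "path": "/spec/containers/1/imagePullPolicy", "value": "Always" }
]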

Create another file with your favorite editor (IDE) and make sure the file (I called it mutate.go) has the following content:

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"strconv"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isKubeNamespace checks whether the namespace is a system namespace
func isKubeNamespace(ns string) bool {
	return ns == metav1.NamespacePublic || ns == metav1.NamespaceSystem
}

// patchOperation is the struct the patch needs for each image
type patchOperation struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

func (mr *myServerHandler) mutserve(w http.ResponseWriter, r *http.Request) {
	var Body []byte
	if r.Body != nil {
		if data, err := ioutil.ReadAll(r.Body); err == nil {
			Body = data
		}
	}
	if len(Body) == 0 {
		fmt.Fprintf(os.Stderr, "Unable to retrieve Body from API")
		http.Error(w, "Empty Body", http.StatusBadRequest)
		return
	}

	fmt.Fprintf(os.Stdout, "Received Request\n")
	if r.URL.Path != "/mutate" {
		fmt.Fprintf(os.Stderr, "Not a valid URL path")
		http.Error(w, "Not a valid URL path", http.StatusBadRequest)
		return
	}

	// Read the request from the Kubernetes API and place it in the AdmissionReview
	arRequest := &admissionv1.AdmissionReview{}
	if err := json.Unmarshal(Body, arRequest); err != nil {
		fmt.Fprintf(os.Stderr, "Error unmarshaling the Body request")
		http.Error(w, "Error unmarshaling the Body request", http.StatusBadRequest)
		return
	}

	raw := arRequest.Request.Object.Raw
	obj := corev1.Pod{}
	if !isKubeNamespace(arRequest.Request.Namespace) {
		if err := json.Unmarshal(raw, &obj); err != nil {
			fmt.Fprintf(os.Stderr, "Error, unable to deserialize the Pod")
			http.Error(w, "Error, unable to deserialize the Pod", http.StatusBadRequest)
			return
		}
	} else {
		fmt.Fprintf(os.Stderr, "Error, unauthorized namespace")
		http.Error(w, "Error, unauthorized namespace", http.StatusBadRequest)
		return
	}

	containers := obj.Spec.Containers
	arResponse := admissionv1.AdmissionReview{
		Response: &admissionv1.AdmissionResponse{
			UID: arRequest.Request.UID,
		},
	}

	var patches []patchOperation
	fmt.Fprintf(os.Stdout, "Starting the loop over the containers\n")

	// Running a for loop on all the images in order to create the patch
	for i, container := range containers {
		fmt.Fprintf(os.Stdout, "container[%d] = %s imagePullPolicy = %s\n", i, container.Name, container.ImagePullPolicy)
		if container.ImagePullPolicy == "Never" || container.ImagePullPolicy == "IfNotPresent" {
			patches = append(patches, patchOperation{
				Op:    "replace",
				Path:  "/spec/containers/" + strconv.Itoa(i) + "/imagePullPolicy",
				Value: "Always",
			})
		}
	}

	patchBytes, err := json.Marshal(patches)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Can't encode patches: %v", err)
		http.Error(w, fmt.Sprintf("could not encode patches: %v", err), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(os.Stdout, "The patch JSON is: %s\n", patchBytes)

	v1JSONPatch := admissionv1.PatchTypeJSONPatch
	arResponse.APIVersion = "admission.k8s.io/v1"
	arResponse.Kind = arRequest.Kind
	arResponse.Response.Allowed = true
	arResponse.Response.Patch = patchBytes
	arResponse.Response.PatchType = &v1JSONPatch

	resp, rErr := json.Marshal(arResponse)
	if rErr != nil {
		fmt.Fprintf(os.Stderr, "Can't encode response: %v", rErr)
		http.Error(w, fmt.Sprintf("could not encode response: %v", rErr), http.StatusInternalServerError)
		return
	}
	if _, wErr := w.Write(resp); wErr != nil {
		fmt.Fprintf(os.Stderr, "Can't write response: %v", wErr)
		http.Error(w, fmt.Sprintf("could not write response: %v", wErr), http.StatusInternalServerError)
	}
}

Note that in both files our “good” response comes at the end, when we use w.Write to send the response back.

Now that all the files are in place, we need to make sure our modules + vendor directory are ready for the build:

# go mod init ippac
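
Depending on your Go version, you may also need to resolve the k8s.io dependencies first (Go 1.16 and later no longer add missing requirements automatically); this is an extra step beyond the original flow:

# go mod tidy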

And then generate the vendor directory:

# go mod vendor

Now that everything is set we can move on and build the image.

Let’s create a Containerfile that will build our Go binary and then copy it into a minimal image.

Change your working directory back to the GOPATH directory:

# cd $GOPATH

Now create a Containerfile (yes, with your favorite editor):

FROM ubi8/go-toolset AS builder
RUN mkdir -p /opt/app-root/src/ippac
WORKDIR /opt/app-root/src/ippac
ENV GOPATH=/opt/app-root/
ENV PATH="${PATH}:/opt/app-root/src/go/bin/"
COPY src/ippac .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ippac

FROM ubi8/ubi-minimal
COPY --from=builder /opt/app-root/src/ippac/ippac /usr/bin/
USER 1001
EXPOSE 8080 8443
ENTRYPOINT ["/usr/bin/ippac"]

Go ahead and build the image (with buildah):

# buildah bud -f Containerfile -t registry.example.com/library/ippac

Once the image is ready we can push it to our registry:

# buildah push registry.example.com/library/ippac

Admission Controller Deployment

Our deployment consists of 3 parts:

  1. Creating a service with a “generate secret” annotation
  2. Deploying the image with the secret
  3. Setting up the validation/mutation webhook resources and annotating the webhook configuration with the internal CA

Service and Certificate

As mentioned, the Kubernetes cluster and the webhook communicate over a TLS connection. For that we will create a certificate that we provide to the Kubernetes cluster, plus cert/key files for the application Pod so the container runs with TLS support (you can see in the main.go file that we are initiating a TLS listener).

OpenShift Internal CA

OpenShift manages an internal CA and rotates its certificates automatically. We can use this ability to provide a certificate for our admission controller. Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates.

The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates.

The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration.

Before we start, let’s make sure we have a namespace:

# cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: kube-ippac
  labels:
    openshift.io/cluster-monitoring: "true"
EOF

Switch to it:

# oc project kube-ippac

Service Deployment

In this case we want to deploy the service before the pods in order to auto-generate the secret that the pods will use.

Deploy the service :

# cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ippac
  namespace: kube-ippac
spec:
  selector:
    app: ippac
  ports:
  - protocol: TCP
    port: 8443
    targetPort: 8443
EOF

Now let’s annotate the service with “service.beta.openshift.io/serving-cert-secret-name” set to the name of the secret we want to use:

# oc annotate service ippac service.beta.openshift.io/serving-cert-secret-name=ippac-tls 

The above action will generate a new TLS secret with the name we provided.
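
We can verify that the secret was created and holds the tls.crt and tls.key files:

# oc get secret ippac-tls -n kube-ippac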

Pod Deployment

Now for the deployment :

# cat << EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ippac
  namespace: kube-ippac
spec:
  selector:
    matchLabels:
      app: ippac
  replicas: 2
  template:
    metadata:
      labels:
        app: ippac
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - ippac
            topologyKey: kubernetes.io/hostname
      containers:
      - name: ippac
        image: registry.example.com/library/ippac:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
        env:
        - name: CERT_FILE
          value: '/opt/app-root/tls/tls.crt'
        - name: KEY_FILE
          value: '/opt/app-root/tls/tls.key'
        - name: PORT
          value: '8443'
        volumeMounts:
        - name: ippac-certs
          mountPath: /opt/app-root/tls/
          readOnly: true
      volumes:
      - name: ippac-certs
        secret:
          secretName: ippac-tls
EOF

To complete the deployment part, all we need to do is make sure the pods are running:

# oc get pods -n kube-ippac -o wide

If everything is in order you should see 2 pods running on 2 different cluster workers.
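
You can also confirm that the service has both pods behind it as endpoints:

# oc get endpoints ippac -n kube-ippac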

Validation / Mutation Resource

For the final part we want to tell the cluster to check with our webhook that everything is in order. To modify the pod configuration we will use a “MutatingWebhookConfiguration”, and for validating the pod configuration we will use a “ValidatingWebhookConfiguration”.

We can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint.

And now let’s create the webhook resource:

# cat << EOF | oc apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "imagepullpolicy.il.redhat.io"
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
webhooks:
- name: "imagepullpolicy.il.redhat.io"
  namespaceSelector:
    matchExpressions:
    - key: admission.il.redhat.io/imagePullPolicy
      operator: In
      values: ["True"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE","UPDATE"]
    resources: ["pods"]
    scope: "Namespaced"
  clientConfig:
    service:
      namespace: "kube-ippac"
      name: "ippac"
      path: /validate
      port: 8443
    caBundle:
  admissionReviewVersions: ["v1"]
  sideEffects: None
  timeoutSeconds: 5
EOF

As you can see in the YAML configuration, we set it up with a namespaceSelector filter that looks for a namespace label of “admission.il.redhat.io/imagePullPolicy = True”, and only then will the cluster use the webhook for validation.
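
To opt an existing namespace in, you would label it accordingly (the namespace name here is just an example):

# oc label namespace my-project admission.il.redhat.io/imagePullPolicy=True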

For the mutate webhook we are going to create the following resource:

# cat << EOF | oc apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: "imagepullpolicy.il.redhat.io"
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
webhooks:
- name: "imagepullpolicy.il.redhat.io"
  reinvocationPolicy: IfNeeded
  namespaceSelector:
    matchExpressions:
    - key: admission.il.redhat.io/imagePullPolicy
      operator: In
      values: ["True"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE","UPDATE"]
    resources: ["pods"]
    scope: "Namespaced"
  clientConfig:
    service:
      namespace: "kube-ippac"
      name: "ippac"
      path: /mutate
      port: 8443
    caBundle:
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
EOF

In the mutate part we use the same filter, and we add “reinvocationPolicy: IfNeeded” so the webhook will run again if other mutating admission controllers run after it and affect the pod.
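
Once both webhook configurations are applied, you can verify that the service CA was injected into the clientConfig (an empty result means the injection has not happened yet):

# oc get mutatingwebhookconfiguration imagepullpolicy.il.redhat.io -o jsonpath='{.webhooks[0].clientConfig.caBundle}'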

Once everything is configured we can go ahead and test it.

Testing

First create a new namespace:

# cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: imagepullpolicy-test
  labels:
    admission.il.redhat.io/imagePullPolicy: "True"
EOF

Next we will create a deployment that sets the imagePullPolicy to “Never”. Watch: as the pods are being deployed, the admission controller changes them to “imagePullPolicy: Always”.

# cat << EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ippac-example
  namespace: imagepullpolicy-test
spec:
  selector:
    matchLabels:
      app: ippac-example
  replicas: 1
  template:
    metadata:
      labels:
        app: ippac-example
    spec:
      containers:
      - name: ippac-example
        image: quay.io/ooichman/daemonize:latest
        imagePullPolicy: Never
EOF

Now run “oc get pods” and inspect the pod to see that the imagePullPolicy is set to Always.
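
For example, one quick way to inspect the live value with jsonpath:

# oc get pods -n imagepullpolicy-test -o jsonpath='{.items[*].spec.containers[*].imagePullPolicy}'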

That is it!

If you have any questions, feel free to respond or leave a comment.
You can find me on linkedin at : https://www.linkedin.com/in/orenoichman
Or twitter at : https://twitter.com/ooichman
