OpenShift 4 with Kerberos authentication (Request Header)
Why This Article
Because some organizations haven’t switched to OpenID Connect yet, and Kerberos is the closest protocol that allows them to connect with Single Sign-On to applications within the organization.
In An Enterprise Company
As we all know, in any large enterprise company there is a central identity management system which holds all of the organization’s user information.
All of the internal applications (should) use that central identity for the triple-A standard, which stands for Authentication, Authorization and Audit.
Authentication
When we are going through an authentication process we are basically telling the application “here is proof that we are who we say we are”. A real-life example is showing your ID card at a bank to prove you are who you say you are.
In our tutorial the authentication mechanism is Kerberos, which uses a ticket (after the first login) to check that the user is who they claim to be.
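For example, on a workstation joined to the Kerberos domain, obtaining and inspecting such a ticket is as simple as (the user and realm below are placeholders):
# kinit myuser@EXAMPLE.COM
# klist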
Authorization
After the user has been authenticated, the application needs to go and check whether the user is allowed to access the application (or a specific piece of information/page within the application) and return the data accordingly. A real-life equivalent is a backstage pass at a concert.
In our example there are actually two authorization mechanisms: the first is Kerberos, which needs to make sure the application has permission to provide service over Kerberos, and the second is usually group-based authorization, which means you have to be part of a specific group in order to gain access.
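To illustrate the group-based part in OpenShift terms, granting access usually comes down to binding a role to a group; a minimal sketch, with a hypothetical group named ocp-viewers, would be:
# oc adm groups new ocp-viewers alice bob
# oc adm policy add-cluster-role-to-group view ocp-viewers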
Audit
The audit part is unfortunately sometimes neglected, but it is a very critical part of our process.
Every login, whether successful or unsuccessful, should be audited and saved. The best way to find security breaches or attempts is to go over the logs and see whether anyone logged in at unreasonable hours or from a faraway location that makes it impossible for it to be the same user.
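In OpenShift one quick place to review login attempts is the OAuth server log; for example (the grep filter is only a suggestion):
# oc logs -n openshift-authentication deployment/oauth-openshift | grep -i error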
Kerberos
Kerberos is a computer-network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.
Apache httpd with Kerberos
Apache httpd has a lot of authentication modules. One of them provides Kerberos authentication (and, combined with an LDAP module, it can provide authorization as well); it is named mod_auth_gssapi.
Request Header
A request header identity provider identifies users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value. The request header identity provider cannot be combined with other identity providers that use direct password logins, such as HTPasswd, Keystone, LDAP or basic authentication.
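To make the flow concrete, this is roughly what our authenticating proxy will do on the user’s behalf once Kerberos authentication succeeds: call the OAuth endpoint with a trusted client certificate while setting the header. A hand-run sketch (the hostname, user and certificate file are placeholders) could look like:
# curl -L -k --cert client.pem -H "X-Remote-User: alice" https://<oauth-route>/oauth/token/request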
The Architecture
Because we want to use this solution in an enterprise environment, we need to make sure it is highly available, and we want the cluster to be autonomous, so it needs to (or at least should) run on the infra servers (if you have dedicated infra servers in your cluster, it can run on those as well).
WorkFlow Design
Getting Things Done
Creating a Project
The following actions can only be run with cluster-admin permissions.
First we will create a directory on our bastion server named openshift-rh :
# mkdir openshift-rh
# cd openshift-rh/
Now let’s create a project and add the node selector annotation so that pods will only run on our infra servers :
# oc create ns openshift-request-header
And let’s add a monitoring label so it will be monitored :
# oc label namespace openshift-request-header "openshift.io/cluster-monitoring=true"
Now we need to add annotation to the namespace to make sure it will only run on our dedicated servers:
# oc patch ns/openshift-request-header --type merge --patch '{"metadata":{"annotations":{"openshift.io/node-selector":"node-role.kubernetes.io/infra="}}}'
And switch our workspace to our newly created namespace :
# oc project openshift-request-header
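If you want to double-check that the label and annotation landed on the namespace, an optional verification is:
# oc get ns openshift-request-header -o yaml | grep -E 'node-selector|cluster-monitoring'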
Service and Route
In this tutorial we will use the name of the route in our scripts, so let’s create the service and the route in advance.
First we will create a YAML directory to store the configurations :
# mkdir YAML
# cd YAML/
Next we will create the service :
# cat > rh-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: httpd-rh-svc
  labels:
    app: httpd-rh-proxy
  namespace: openshift-request-header
spec:
  ports:
  - name: login
    port: 8443
  selector:
    app: httpd-rh-proxy
EOF
And the Route :
# cat > rh-route.yaml << EOF
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: httpd-rh-proxy
  name: rh-proxy
  namespace: openshift-request-header
spec:
  port:
    targetPort: login
  to:
    kind: Service
    name: httpd-rh-svc
  tls:
    termination: passthrough
EOF
Now we need to apply our 2 objects :
# oc apply -f rh-service.yaml -f rh-route.yaml
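An optional sanity check that both objects were created (and a peek at the generated route host, which we will reuse later):
# oc get svc,route -n openshift-request-header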
TLS Certificates
In any organization the security/admin team should maintain an enterprise CA. That CA should sign our server certificate, which will be used by our request header web server.
As an alternative we can use our wildcard certificate, which should already be present in our cluster (and signed by our organization CA).
In our example we will create a custom CA for both the ingress router and for the client authentication, and add it to our cluster proxy, our bastion’s trusted CA list and our browser (so we can log in from our desktop).
Organize our directories :
# cd ..
# mkdir TLS
# cd TLS
Now Let’s create an answer file for our CA :
# cat > openssl_config.cnf << EOF
[ server_auth ]
basicConstraints=CA:FALSE
nsCertType = server
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
SUBJECT_ALTER_NAME=

[ client_auth ]
basicConstraints=CA:FALSE
nsCertType = client, email
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer

[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
basicConstraints = critical,CA:true
keyUsage = cRLSign, keyCertSign

[ req ]
default_bits = 2048
prompt = no
default_md = sha512
distinguished_name = dn
attributes = req_attributes
x509_extensions = v3_ca

[ req_attributes ]
challengePassword = A challenge password
challengePassword_min = 4
challengePassword_max = 20
unstructuredName = An optional company name

[ dn ]
C=US
ST=New York
L=New York
O=MyOrg
OU=MyOU
emailAddress=me@working.me
CN = server.example.com
EOF
Now that our CA answer file is ready we can go ahead and create a Certificate Authority key :
# openssl genrsa -out rootCA.key 4096
And let’s create the CA certificate :
# openssl req -x509 -new -nodes -key rootCA.key -sha512 -days 3655 -out rootCA.crt -extensions v3_ca -config openssl_config.cnf
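Before moving on you may want to inspect the new CA certificate, for example:
# openssl x509 -in rootCA.crt -noout -subject -issuer -dates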
Once the CA is in place we will create our server certificate.
NOTE !
You can skip the next part if you already have an ingress router certificate which is signed by the organization CA.
Server Auth
For our server authentication request we need to make sure the CN and the DNS alternative names are set to the route FQDN.
# ROUTER_FQDN=$(oc get route rh-proxy -o jsonpath='{.spec.host}')
Now we will create an answer file for our server CSR (change the values in the dn section to fit your organization) :
# cat > server_csr.txt << EOF
[req]
default_bits = 2048
prompt = no
default_md = sha512
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C=US
ST=New York
L=New York
O=MyOrg
OU=MyOrgUnit
emailAddress=me@working.me
CN = ${ROUTER_FQDN}
[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = ${ROUTER_FQDN}
EOF
Let’s create a key for the server :
# openssl genrsa -out server.key 4096
And the Certificate request :
# openssl req -new -key server.key -out server.csr -config <(cat server_csr.txt)
Before we sign the CSR we need to modify our openssl config file with our router FQDN :
# export SAN="DNS:${ROUTER_FQDN}"
# sed -i 's/SUBJECT_ALTER_NAME=/subjectAltName=${ENV::SAN}/g' openssl_config.cnf
Now Let’s go ahead and sign the certificate :
# openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out server.crt -days 1024 -sha256 -extfile openssl_config.cnf -extensions server_auth
We can check the alternative DNS names with the following command :
# openssl x509 -in server.crt -noout -text | grep DNS
Client Auth
For the client authentication we need to make sure the proxy can authenticate with an X509 certificate in order to pass the header back to OpenShift, so we will create a new key and certificate request and sign it with our new CA (you can sign it with your organization CA instead, just make sure it is enabled for “client authentication”).
Let’s create the key :
# openssl genrsa -out client.key 4096
And create the CSR (we don’t need a DNS alternative name for this one) :
# openssl req -new -sha256 -key client.key \
-subj "/O=system:authenticated/CN=kube-proxy" \
-out client.csr
Now , sign the certificate :
# openssl x509 -req -in client.csr -CA rootCA.crt \
-CAkey rootCA.key -CAcreateserial -out client.crt \
-days 1024 -sha256 \
-extfile openssl_config.cnf -extensions client_auth
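A quick way to confirm the client certificate really carries the client-authentication usage before we hand it to the proxy:
# openssl x509 -in client.crt -noout -text | grep -A1 'Extended Key Usage'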
In order for everything to work we need to merge all the CA certificates into one bundle, and merge the certificate and key of the authentication proxy into one file.
First, obtain your organization CA and copy all of its content to a file named router-ca.crt :
# cat > router-ca.crt << EOF
< Organization CA >
EOF
In addition, I would suggest appending the cluster’s default ingress CA bundle :
# oc get cm -n openshift-config-managed default-ingress-cert -o template='{{ index .data "ca-bundle.crt"}}' >> router-ca.crt
Merge the file with our custom CA :
# cat rootCA.crt router-ca.crt > ultimateca.crt
And for our proxy authentication file :
# cat client.{crt,key} > client.pem
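A small optional check that client.pem now holds both the certificate and the key (it should report two BEGIN blocks):
# grep -c 'BEGIN' client.pem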
Now that we have all the certificates in place we can start deploying our request header service :
Kerberos
I will cover this part in another section; for now we just need to take into consideration that we have a keytab with the principal http/<request-header-route>@KERBEROS.DOMAIN.
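Assuming the keytab already sits under a krb directory next to our working directories (the same relative path we will use later when creating the configMaps), you can verify which principal it contains with:
# klist -kt ../krb/krb5.keytab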
Custom Image
In our scenario we want to build and run an httpd image that can run within OpenShift without any special privileges and provide the request header OpenShift is expecting.
In order to achieve that we will now create a startup file that generates a VirtualHost configuration from our environment variables and then starts httpd inside the pod.
Go back one directory and create a new one named image :
# cd .. && mkdir image && cd image/
The startup file looks as follows (note the quoted heredoc delimiter, so the variables are kept literal and evaluated at container runtime rather than when the file is created) :
# cat > run-httpd.sh << 'EOF'
#!/bin/bash

if [[ -z ${PROXY_ROUTE} ]]; then
echo "Environment variable PROXY_ROUTE undefined"
exit 1
elif [[ -z ${OAUTH_ROUTE} ]]; then
echo "Environment variable OUATH_FQDN undefined"
exit 1
elif [[ -z ${SSL_CERT_FILE} ]]; then
echo "Environment variable SSL_CERT_FILE undefined"
exit 1
elif [[ -z ${SSL_KEY_FILE} ]]; then
echo "Environment variable SSL_KEY_FILE undefined"
exit 1
elif [[ -z ${SSL_CA_FILE} ]]; then
echo "Environment variable SSL_CA_FILE undefined"
exit 1
elif [[ -z ${PROXY_CA_FILE} ]]; then
echo "Environment variable PROXY_CA_FILE undefined"
exit 1
elif [[ -z ${PROXY_CERT_FILE} ]]; then
echo "Environment variable PROXY_CERT_FILE undefined"
exit 1
fiecho "
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule request_module modules/mod_request.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule headers_module modules/mod_headers.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule unixd_module modules/mod_unixd.so

User apache
Group apache

LogLevel trace5
Listen 8443
ServerName ${PROXY_ROUTE}

<VirtualHost *:8443>
" > /tmp/reverse.conf
echo '
CustomLog "logs/ssl_request_log" "%t U=%{SSL_CLIENT_SAN_OTHER_msUPN_0}x %h \"%r\" %b"
' >> /tmp/reverse.conf
echo "
SSLEngine on
SSLCertificateFile ${SSL_CERT_FILE}
SSLCertificateKeyFile ${SSL_KEY_FILE}
SSLCACertificateFile ${SSL_CA_FILE}
SSLProxyEngine on
SSLProxyCACertificateFile ${PROXY_CA_FILE}
# It is critical to enforce client certificates. Otherwise, requests can
# spoof the X-Remote-User header by accessing the /oauth/authorize endpoint
# directly.
SSLProxyMachineCertificateFile ${PROXY_CERT_FILE}
SSLProxyVerify none
RewriteEngine on
RewriteRule ^/challenges/oauth/authorize(.*) https://${OAUTH_ROUTE}/oauth/authorize\$1 [P,L]
RewriteRule ^/web-login/oauth/authorize(.*) https://${OAUTH_ROUTE}/oauth/authorize\$1 [P,L]
<Location /challenges/oauth/authorize>
AuthType Kerberos
AuthName "Kerberos Login"
KrbMethodNegotiate On
KrbAuthRealms EXAMPLE1.COM EXAMPLE2.COM
Krb5KeyTab /etc/krb5.keytab
KrbDelegateBasic on
KrbSaveCredentials On
Require valid-user
RequestHeader set \"X-Remote-User\" %{REMOTE_USER}s
</Location>
<Location /web-login/oauth/authorize>
AuthType Kerberos
AuthName "Kerberos Login"
KrbMethodNegotiate On
KrbAuthRealms EXAMPLE1.COM EXAMPLE2.COM
Krb5KeyTab /etc/krb5.keytab
KrbDelegateBasic on
KrbSaveCredentials On
Require valid-user
RequestHeader set \"X-Remote-User\" %{REMOTE_USER}s
ErrorDocument 401 '<html><body>Client certificate authentication failed</body></html>'
</Location>
</VirtualHost>
RequestHeader unset X-Remote-User
> /tmp/reverse.confmv">
" >> /tmp/reverse.conf

mv /tmp/reverse.conf /opt/app-root/reverse.conf

/usr/sbin/httpd -DFOREGROUND
EOF
Once the file has been created we need to make sure it is executable :
# chmod a+x run-httpd.sh
Next we will create a Dockerfile that will build the image we need.
The Dockerfile should look as follows :
# cat > Dockerfile << EOF
FROM centos
MAINTAINER Oren Oichman <Back to Root>

RUN yum module enable mod_auth_openidc -y && \
yum install -y httpd mod_auth_gssapi mod_session \
apr-util-openssl mod_auth_gssapi mod_ssl \
mod_auth_openidc openssl && \
yum clean all

COPY run-httpd.sh /usr/sbin/run-httpd.sh

RUN echo "PidFile /tmp/http.pid" >> /etc/httpd/conf/httpd.conf
RUN echo 'IncludeOptional /opt/app-root/*.conf' >> /etc/httpd/conf/httpd.conf
RUN sed -i "s/Listen\ 80/Listen\ 8080/g" /etc/httpd/conf/httpd.conf
RUN sed -i "s/\"logs\/error_log\"/\/dev\/stderr/g" /etc/httpd/conf/httpd.conf
RUN sed -i "s/CustomLog \"logs\/access_log\"/CustomLog \/dev\/stdout/g" /etc/httpd/conf/httpd.conf
RUN rm -f /etc/httpd/conf.d/ssl.conf
RUN mkdir /opt/app-root/ && chown apache:apache /opt/app-root/ && chmod 777 /opt/app-root/

USER apache

EXPOSE 8080 8443
ENTRYPOINT ["/usr/sbin/run-httpd.sh"]
EOF
Once we have the two files ready we can start building the image :
# buildah bud -f Dockerfile -t <registry>/httpd-request-header
And push it to the registry :
# buildah push <registry>/httpd-request-header
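Optionally, confirm the image exists in the local store:
# buildah images | grep httpd-request-header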
Now that we have the image ready we can move on to the deployment.
Deployment
We know that we already created a service and a route, so now let’s create the secrets, the configMaps and the daemonSet.
First, from the TLS directory (where our keys and certificates are), let’s create a TLS secret for our server key and certificate :
# oc create secret tls server-tls --key=server.key --cert=server.crt
Once the secret is created we can move on to the configMaps.
First we will create a configMap which will contain our request header route and our OAuth route.
In order to create the configMap we first need to set the two variables :
# OAUTH_ROUTE=$(oc get route -n openshift-authentication oauth-openshift -o template='{{ .spec.host }}')
# PROXY_ROUTE=$(oc get route -n openshift-request-header rh-proxy -o jsonpath='{.spec.host}')
And now create it :
# oc create cm routes --from-literal=oauth_route="$OAUTH_ROUTE" --from-literal=proxy_route="$PROXY_ROUTE"
Now we need to create the configMaps for the CA bundle and for the proxy authentication.
For the CA file :
# oc create cm custom-ca --from-file=ca.crt=ultimateca.crt
And for the proxy client authentication file :
# oc create cm proxy-auth --from-file=proxy-auth.crt=client.pem
For our Kerberos configuration we are going to create 2 more configMaps. One for the keytab and one for the krb5.conf file:
(Assuming that the 2 files are under the krb directory)
# oc create cm krb5-conf --from-file=krb5.conf=../krb/krb5.conf
# oc create cm krb5-keytab --from-file=krb5.keytab=../krb/krb5.keytab
Now all we need is to create a daemonSet in order to deploy our image with everything put together. Switch back to the YAML directory we created earlier :
# cd ../YAML
# cat > rh-daemonset.yaml << EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rh-proxy
  labels:
    app: httpd-rh-proxy
spec:
  selector:
    matchLabels:
      app: httpd-rh-proxy
  template:
    metadata:
      labels:
        app: httpd-rh-proxy
    spec:
      containers:
      - name: httpd-rh
        image: <your registry>/httpd-request-header:latest
        ports:
        - containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: server-tls
          mountPath: /opt/app-root/server/
        - name: custom-ca
          mountPath: /opt/app-root/ca/ca.crt
          subPath: ca.crt
        - name: proxy-auth
          mountPath: /opt/app-root/proxyauth/proxy-auth.crt
          subPath: proxy-auth.crt
        - name: krb5-keytab
          mountPath: /etc/krb5.keytab
          subPath: krb5.keytab
        - name: krb5-conf
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
        env:
        - name: OAUTH_ROUTE
          valueFrom:
            configMapKeyRef:
              name: routes
              key: oauth_route
        - name: PROXY_ROUTE
          valueFrom:
            configMapKeyRef:
              name: routes
              key: proxy_route
        - name: SSL_CERT_FILE
          value: /opt/app-root/server/tls.crt
        - name: SSL_KEY_FILE
          value: /opt/app-root/server/tls.key
        - name: SSL_CA_FILE
          value: /opt/app-root/ca/ca.crt
        - name: PROXY_CA_FILE
          value: /opt/app-root/ca/ca.crt
        - name: PROXY_CERT_FILE
          value: /opt/app-root/proxyauth/proxy-auth.crt
      volumes:
      - name: server-tls
        secret:
          secretName: server-tls
      - name: custom-ca
        configMap:
          name: custom-ca
      - name: proxy-auth
        configMap:
          name: proxy-auth
      - name: krb5-conf
        configMap:
          name: krb5-conf
      - name: krb5-keytab
        configMap:
          name: krb5-keytab
EOF
Change the registry to your image registry and apply the daemonSet :
# oc apply -f rh-daemonset.yaml
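Give the daemonSet a minute and verify the proxy pods are running on the expected nodes:
# oc rollout status ds/rh-proxy
# oc get pods -n openshift-request-header -o wide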
Once everything is in place, all we need to do is update our cluster’s OAuth configuration.
First we will create a YAML file with the necessary configuration :
# cat > oauth-cluster.yaml << EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: requestHeadersIdP
    type: RequestHeader
    requestHeader:
      ca:
        name: request-header-ca
      headers: ["X-Remote-User"]
      loginURL: "https://${PROXY_ROUTE}/web-login/oauth/authorize?\${query}"
      challengeURL: "https://${PROXY_ROUTE}/challenges/oauth/authorize?\${query}"
EOF
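Note that the requestHeader provider references a ConfigMap named request-header-ca, which OpenShift expects to find in the openshift-config namespace; it should hold the CA that signed the proxy’s client certificate. If you have not created it yet, one way to do so (adjust the path to wherever your ultimateca.crt bundle lives) is:
# oc create configmap request-header-ca --from-file=ca.crt=../TLS/ultimateca.crt -n openshift-config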
Now all we need to do is apply the YAML file :
# oc apply -f oauth-cluster.yaml
Testing
The test is very simple. From a server/workstation which is part of our Kerberos domain, just make sure you have a Kerberos ticket :
# klist -e
And now try to log in with the oc command (or through the web console) :
# oc login
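After a successful login you can check which identity OpenShift mapped you to:
# oc whoami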
That is it!
If you have any questions feel free to respond / leave a comment.
You can find me on LinkedIn at : https://www.linkedin.com/in/orenoichman
Or Twitter at : https://twitter.com/ooichman