Working with OpenShift/Kubernetes in an enterprise environment can present interesting challenges. One of them is making sure that non-HTTP services are available outside of the OpenShift/Kubernetes cluster SDN (Software-Defined Network).
For HTTP/HTTPS services we can use a Route/Ingress to expose them outside of our cluster, but for plain TCP services we need a different approach …
That different approach is actually more than one.
The first option is to set up an external load balancer (HAProxy) and, on the cluster side, set up a Service load balancer with a NodePort. …
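A minimal sketch of the cluster-side half of this option: a NodePort Service that opens the same TCP port on every node. The service name, selector, and port numbers are all placeholders for illustration.

```yaml
# Hypothetical NodePort Service for a plain TCP workload;
# every name and port number here is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
spec:
  type: NodePort
  selector:
    app: my-tcp-app
  ports:
    - protocol: TCP
      port: 5432        # port exposed inside the cluster
      targetPort: 5432  # port the pods listen on
      nodePort: 30432   # port opened on every node (30000-32767 by default)
```

On the HAProxy side, a backend section pointing at `<node-ip>:30432` for each node completes the path from outside the SDN to the pods.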
In the PaaS admin world there are (roughly) two types of admins. One type (like me) are Linux dinosaurs who have adopted Kubernetes and learned a lot about its ins and outs while retaining their long Linux experience. The other type are brand-new admins with a fresh pair of eyes and a knowledge-hungry approach.
In this tutorial I will address the first type and show how to bring a vast knowledge of bash scripting to the OpenShift/Kubernetes world.
When I gave it some thought, the best way for me to address the transition is to think about where the…
In some cases we would like to provide a better-looking URL for our console and monitoring stack than the built-in routes provided during the installation, without reinstalling the cluster.
When you want your clients to access the OpenShift console through an elegant URL rather than console-openshift-console.app.<your cluster>.<your domain>, you can now do it by editing the console operator object. For the monitoring stack the process is a bit more complex, but not that hard.
In our tutorial we will change our console URL to openshift.example.com and our monitoring stack to *.monitor.example.com …
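Editing the console operator object can be sketched roughly like this. This is a minimal sketch assuming a 4.x cluster where the Console operator accepts a custom route hostname; check the exact field names against your cluster version's documentation before applying.

```yaml
# Sketch: custom hostname on the console operator object
# (reached via: oc edit consoles.operator.openshift.io cluster)
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  route:
    hostname: openshift.example.com
```

A TLS certificate covering the custom hostname (referenced as a secret) is typically also required for the route to serve cleanly.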
Well, because some organizations haven’t switched to OpenID Connect yet, Kerberos is the closest protocol that allows them to connect with Single Sign-On to applications within the organization.
As we all know, in any large enterprise company there is a central identity management system which holds all of the organization's user information.
All of the internal applications (should) use that central identity for the triple-A standard, which stands for Authentication, Authorization and Audit.
When we go through an authentication process we are basically telling the application “here is proof that we are who we say we are”…
Sometimes when debugging an application we need to go down to the network (TCP stack) level and see how things work (or don’t work) in order to debug communication issues.
In this scenario
Well, in OpenShift there are 2 networks. One is the underlay, which is the host network interface carrying the host IP address.
The other is the overlay network, which is the network the pods use to communicate with each other.
For the debugging to work on the node level and on the pod level we need 2 things.
2. The ksniff plugin
The first is…
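A sketch of what pod-level capture with ksniff looks like once it is in place; the pod and namespace names below are placeholders, and this assumes the krew plugin manager is already installed.

```shell
# Install ksniff as a kubectl plugin via krew
kubectl krew install sniff

# Capture a pod's traffic into a pcap file (openable in Wireshark);
# pod name and namespace are placeholders
kubectl sniff my-pod -n my-namespace -o /tmp/my-pod.pcap

# On minimal images that lack tcpdump, use privileged mode
kubectl sniff my-pod -n my-namespace -p
```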
Initial sign-on prompts the user for credentials and obtains a Kerberos ticket-granting ticket (TGT).
Additional software applications requiring authentication, such as email clients, wikis, and revision-control systems, use the ticket-granting ticket to acquire service tickets, proving the user’s identity to the mail server / wiki server / etc. without prompting the user to re-enter credentials.
Unix/Linux environment — logging in via the Kerberos PAM module fetches the TGT. Kerberized client applications such as Evolution, Firefox, and SVN then use service tickets, so the user is not prompted to re-authenticate.
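On a Linux workstation, the TGT-then-service-ticket flow above can be observed from the command line; the principal name and realm below are placeholders.

```shell
# Obtain a TGT for the user (prompts once for the password)
kinit alice@EXAMPLE.COM

# Show the credential cache: the krbtgt/EXAMPLE.COM entry is the TGT;
# service tickets (e.g. nfs/server.example.com) appear as they are acquired
klist
```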
This document is going to explain how to configure an NFS service with Kerberos authentication…
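As a taste of where this is heading, the server-side export line carries a `sec=` option. The export path and client subnet below are placeholders; `krb5p` (authentication plus integrity and privacy) is one of the Kerberos flavors alongside `krb5` and `krb5i`.

```
# /etc/exports — require Kerberos for this export
/export  192.168.1.0/24(rw,sec=krb5p)
```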
If you have been working in the Kubernetes management world, then you probably already know that you need a good scripting language to manage your infrastructure.
Bash in that sense will only get you so far (with oc/kubectl), and Ansible is a very powerful automation tool, but if you want to be able to write complex self-service operators (with operator-sdk) or a very rich multi-microservice application, then Go is your way to go …
In this tutorial we are going to use podman/buildah to build our image.
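A minimal sketch of the kind of multi-stage Containerfile podman/buildah can build for a Go service; the Go version, module layout, and binary name here are assumptions.

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /service .

# Runtime stage: ship only the binary
FROM scratch
COPY --from=build /service /service
ENTRYPOINT ["/service"]
```

Built with `podman build -t my-service:latest .` (buildah consumes the same file).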
I think the best place to start is with a very…
Well… for many reasons.
I have been working for a long time in disconnected environments, and I have not found a web application for community communication as good as Rocket Chat.
In Rocket Chat I can create a channel for each group and grant permission (as admin) to anyone I want.
Moreover, I noticed that communication within the organization is better, and cross-team communication between different teams has increased significantly.
In this tutorial I will walk through how to run a small deployment of Rocket Chat in our OpenShift environment and how to…
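The shape of such a small deployment, sketched as a bare-bones Deployment. The image tag and MongoDB URL are assumptions (Rocket Chat needs a MongoDB instance, not shown here).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rocketchat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketchat
  template:
    metadata:
      labels:
        app: rocketchat
    spec:
      containers:
        - name: rocketchat
          image: rocket.chat:latest   # image tag is a placeholder
          ports:
            - containerPort: 3000     # Rocket Chat's default HTTP port
          env:
            - name: MONGO_URL         # points at a MongoDB service (not shown)
              value: mongodb://mongo:27017/rocketchat
```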
Well, for many reasons… While going through the transition from modular applications to microservice applications, the authentication methods have changed as well.
While in the old days we would connect our application to LDAP or even a Kerberos server (and, more commonly, Active Directory and the like), in today’s world we use HTTP-based protocols for authentication such as SAML2 and OpenID Connect.
In some cases the overhead of migrating the application to the new way of authentication is a lot of work. …
When working in a disconnected environment, more than once the organization requires multiple sets of clusters to be installed.
In order to deploy OpenShift 4 we need to create the “openshift-install” command and point it to our internal registry.
The official documentation states that you need to be connected to the internet to be able to generate the “openshift-install” binary, but this is incorrect.
We can extract the “openshift-install” binary in a disconnected (air-gapped) environment, but this will require us to do a little bit of trickery.
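The core of the trick is pointing `oc adm release extract` at the release image already mirrored into the internal registry instead of the public one; the registry path, tag, and pull-secret file name below are placeholders.

```shell
# Extract the installer binary from a mirrored release image
# (registry path, tag, and pull-secret file are placeholders)
oc adm release extract \
  -a pull-secret.json \
  --command=openshift-install \
  --from=registry.internal.example.com:5000/ocp4/release:4.9.0
```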