Openshift Industry Use Cases

Subhashyadav
6 min read · Mar 13, 2021

What is Openshift?

Enterprises are using microservices and containers to build applications faster and to deliver and scale intelligently across hybrid cloud environments. But first, they need the right platform.

Red Hat OpenShift is a Kubernetes platform that provides a trusted foundation for the on-premises, hybrid, and multicloud deployments that today's enterprises demand.

Openshift Architecture:

OpenShift is a layered system in which each layer is tightly bound to the others through Kubernetes and Docker. The architecture of OpenShift is designed so that it can support and manage Docker containers, which are hosted on top of all the layers using Kubernetes. In this containerized infrastructure model, Docker handles the creation of lightweight Linux-based containers, and Kubernetes handles orchestrating and managing those containers across multiple hosts.

Infrastructure layer:

In the infrastructure layer, you can host your applications on physical servers, virtual servers, or even on the cloud (private/public).

Service layer:

The service layer is responsible for defining pods and access policy. The service layer provides a permanent IP address and host name to the pods; connects applications together; and allows simple internal load balancing, distributing tasks across application components.
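As a sketch of what the service layer manages, a minimal Kubernetes Service definition might look like this (the name, labels, and ports are hypothetical):

```yaml
# Hypothetical Service giving the "shop-frontend" pods a stable
# virtual IP and DNS name, load-balancing across matching pods.
apiVersion: v1
kind: Service
metadata:
  name: shop-frontend
spec:
  selector:
    app: shop-frontend   # routes traffic to pods carrying this label
  ports:
    - port: 80           # stable port exposed by the Service
      targetPort: 8080   # container port the traffic is forwarded to
```

Any pod labeled `app: shop-frontend` automatically joins the load-balanced set, which is what lets the platform wire application components together without hard-coded addresses.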

There are mainly two types of nodes in an OpenShift cluster: main nodes and worker nodes. Applications reside in the worker nodes. You can have multiple worker nodes in the cluster; the worker nodes are where all your coding adventures happen, and they can be virtual or physical.

Main node:

The Main node is responsible for managing the cluster, and it takes care of the worker nodes. It is responsible for four main tasks:

  • API and authentication: Any administration request goes through the API; these requests are SSL-encrypted and authenticated to ensure the security of the cluster.
  • Data Store: Stores the state and information related to environment and application.
  • Scheduler: Determines pod placements while considering current memory, CPU, and other environment utilization.
  • Health/scaling: Monitors the health of pods and scales them based on CPU utilization. If a pod fails, the main node restarts it automatically. If it fails too often, it is marked as a bad pod and is temporarily not restarted.
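The health monitoring described above is driven by probes declared in the pod spec. A minimal, hypothetical example (image name and endpoint are illustrative):

```yaml
# Hypothetical pod spec: the platform probes /healthz and restarts
# the container when the probe keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: shop-api
spec:
  containers:
    - name: api
      image: registry.example.com/shop/api:v1
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10   # give the app time to start
        periodSeconds: 15         # probe every 15 seconds
```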

Worker nodes:

As shown in the following image, the worker node is made up of pods. A pod is the smallest unit that can be defined, deployed, and managed, and it can contain one or more containers. These containers hold your applications and their dependencies. For example, Alex saves the code for her e-commerce platform in separate containers for the database, front end, user system, search engine, and so on.

Keep in mind that containers are ephemeral, so saving data in a container risks the loss of data. To prevent that, you can use persistent storage to save the database.

All containers in one pod share the same IP address and the same volumes. A pod can also include a sidecar container, such as a service-mesh proxy or a security-analysis agent; it must be defined in the same pod and shares the same resources as the other containers. Applications can be scaled horizontally, and they are wired together by services.
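A hypothetical two-container pod illustrates the sidecar pattern and the shared resources described above (all names and images are made up):

```yaml
# Hypothetical pod: the app plus a log-forwarding sidecar.
# Both containers share the pod's IP address and volumes.
apiVersion: v1
kind: Pod
metadata:
  name: shop-frontend
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}              # ephemeral volume visible to both containers
  containers:
    - name: app
      image: registry.example.com/shop/frontend:v1
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-forwarder       # sidecar reading the same volume
      image: registry.example.com/tools/log-forwarder:v1
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Because both containers share the pod's network namespace, the sidecar could also reach the app over `localhost` without any Service in between.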

Registry:

The registry saves your images locally in the cluster. When a new image is pushed to the registry, it notifies OpenShift and passes image information.
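In OpenShift, the registry is typically surfaced to the cluster through image streams. A minimal, hypothetical ImageStream definition might look like this:

```yaml
# Hypothetical OpenShift ImageStream tracking images in the internal
# registry; pushing a new "shop-api" image updates the stream, which
# OpenShift can use to trigger redeployments that reference it.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: shop-api
```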

Persistent storage:

Persistent storage is where all of your data is saved and connected to containers. It is important to have persistent storage because containers are ephemeral, which means when they are restarted or deleted, any saved data is lost. Therefore, persistent storage prevents any loss of data and allows the use of stateful applications.
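A container asks for persistent storage through a PersistentVolumeClaim; a hypothetical claim for the database described earlier might look like this (name and size are illustrative):

```yaml
# Hypothetical PersistentVolumeClaim: the database container mounts
# this claim so its data survives container restarts and rescheduling.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shop-db-data
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi        # requested capacity
```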

Routing layer:

The last component is the routing layer. It provides external access to the applications in the cluster from any device. It also provides load balancing and auto-routing around unhealthy pods.
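In OpenShift, the routing layer is configured with Route objects. A hypothetical Route exposing the frontend Service externally might look like this (hostname and names are made up):

```yaml
# Hypothetical OpenShift Route exposing a Service outside the cluster
# through the routing layer.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: shop-frontend
spec:
  host: shop.apps.example.com   # external hostname served by the router
  to:
    kind: Service
    name: shop-frontend         # Service receiving the routed traffic
  tls:
    termination: edge           # router terminates TLS
```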

Features and benefits:

Simple Management:

Red Hat OpenShift offers automated installation, upgrades, and lifecycle management throughout the container stack — the operating system, Kubernetes and cluster services, and applications — on any cloud. OpenShift uses Operators for this management.

Operators?

Automate the creation, configuration, and management of instances of Kubernetes-native applications.

An Operator is a method of packaging, deploying and managing a Kubernetes-native application. A Kubernetes-native application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling.

An Operator is essentially a custom controller.

A controller is a core concept in Kubernetes, implemented as a software loop that runs continuously on the Kubernetes master nodes, comparing and, if necessary, reconciling the expressed desired state with the current state of an object. Objects are well-known resources such as Pods, Services, ConfigMaps, or PersistentVolumes. Operators apply this model at the level of entire applications and are, in effect, application-specific controllers.
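The controller pattern described above can be sketched as a toy reconciliation loop. This is a conceptual illustration in plain Python, not the real Kubernetes controller machinery, and all names are hypothetical:

```python
# Toy reconciliation loop illustrating the controller pattern:
# compare desired state with current state and act on the difference.

def reconcile(desired: dict, current: dict) -> list:
    """Return the actions needed to move `current` toward `desired`."""
    actions = []
    # Create or update objects present in the desired state.
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    # Delete objects that are no longer desired.
    for name in current:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired_pods = {"web-1": {"image": "web:v2"}, "web-2": {"image": "web:v2"}}
current_pods = {"web-1": {"image": "web:v1"}, "web-3": {"image": "web:v1"}}

for action in reconcile(desired_pods, current_pods):
    print(action)
# prints:
# ('update', 'web-1', {'image': 'web:v2'})
# ('create', 'web-2', {'image': 'web:v2'})
# ('delete', 'web-3', None)
```

A real controller runs this comparison continuously as events arrive from the API server, rather than once over in-memory dictionaries.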

The Operator is a piece of software running in a Pod on the cluster, interacting with the Kubernetes API server. It introduces new object types through Custom Resource Definitions, an extension mechanism in Kubernetes. These custom objects are the primary interface for a user, consistent with the resource-based interaction model of the Kubernetes cluster.

An Operator watches for these custom resource types and is notified about their presence or modification. When the Operator receives such a notification, it runs a loop to ensure that everything required by the application service represented by these objects is actually available and configured in the way the user expressed in the object’s specification.
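To make the mechanism concrete, here is a hypothetical CustomResourceDefinition an Operator might install, followed by a custom object of the new type that a user would create (group, kind, and fields are all invented for illustration):

```yaml
# Hypothetical CRD: teaches the API server a new "ShopCluster" type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: shopclusters.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: shopclusters
    singular: shopcluster
    kind: ShopCluster
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
---
# A user-facing custom object; its spec expresses the desired state
# that the Operator watches for and reconciles.
apiVersion: example.com/v1
kind: ShopCluster
metadata:
  name: prod-shop
spec:
  replicas: 3
```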

Building, Shipping, and Deploying Anywhere:

Red Hat OpenShift helps teams build with speed, agility, confidence, and choice. Code in production mode anywhere you choose to build. Get back to doing work that matters.

Develop, deploy, and manage containers

Red Hat OpenShift is a container platform for Kubernetes that can automate the provisioning, management, and scaling of applications, so that we can focus on writing the code for our next big idea.

Flexible capacity:

With your app in the cloud you can monitor, debug, and tune on the fly.

Pods — Pods are one or more containers deployed together on one host. Each pod is allocated CPU, memory, disk, and network bandwidth. Use a single pod to create an entire web app, complete with a private database instance. Use multiple pods to create multiple apps.

Pod autoscaling — OpenShift enables cloud elasticity by providing automatic horizontal pod scaling as application load increases. This eliminates the need for Operations to manually increase the number of application instances.
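The autoscaling behavior described above is configured with a HorizontalPodAutoscaler. A minimal, hypothetical example (names and thresholds are illustrative):

```yaml
# Hypothetical HorizontalPodAutoscaler scaling a deployment between
# 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop-frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add replicas above 75% average CPU
```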

High availability — OpenShift is architected with a control plane (Kubernetes masters), persistent storage of REST API objects (etcd), and application-hosting infrastructure (Kubernetes nodes). Each piece of the platform can be configured with redundancy for failover and load-balancing scenarios to eliminate the impact of hardware or infrastructure failure.

Choice of cloud infrastructure — OpenShift gives customers the choice to run on top of physical or virtual, public or private, and hybrid cloud infrastructure. This gives IT the freedom to deploy OpenShift in a way that best fits their existing infrastructure.

Heavy-duty tools:

Powerful command line client tools and a web management console to launch and manage your applications.

Responsive web console — OpenShift includes a rich web console developer interface with a responsive UI design so that it can be easily viewed in a browser. Developers can create, modify, and manage their apps, and related resources, from within the web console.

Rich command-line tool set — For developers who prefer to work from the command line, OpenShift includes a rich set of CLI tools that provide full access to the developer interface. These tools are easy to use and scriptable for automated interactions.

Remote access to application containers — The unique SELinux-based architecture of OpenShift allows users (developers or operations) to remotely execute commands in, or log in via SSH to, individual containers for apps deployed on the platform. The logged-in user sees only their own processes, file system, and log files. This gives users the access they need to architect and manage their applications.

Conclusion:

Red Hat OpenShift Container Platform provides an excellent foundation for building a production-ready environment: it simplifies the deployment process, embodies the latest best practices, and ensures stability by running applications in a highly available environment.

Thanks for Reading.

Do follow me for further articles on DevOps and many more.
