K8s – What does it simplify?

Kubernetes (K8s) is an open-source container management system. It provides one of the most effective solutions for the deployment, scaling, and management of containerized applications.

If we are new to containerized applications: a container is an application bundle that includes everything needed to create an instance of the application. This makes it easy to automate the deployment, replication, or restart of instances in a containerized environment such as Docker.

Simply put, we can easily replicate these application instances across servers.

Provides Deployment Activities as a Service

As shown in the above diagram, Kubernetes automates all our activities related to the deployment process. This includes deployment tasks for both the servers and the applications.

For instance, once we have a K8s environment in place, we can add an additional server with a service request, create a new environment, deploy our applications, specify the number of instances, set up the communication channels, and do many more such things, all through its configurable services.

Automatically Handles Failovers

In general, K8s acts as a state-management system, making sure the servers and the applications run as we have configured them through its services. In case of a crash or failure, it uses its built-in best practices to bring the system back to our desired state.

For example, let's say we have configured 10 instances of application X and one instance goes down for some reason. The system will restart or replace the instance to restore our desired state.
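As a sketch of how that desired state is declared (all names and the image are illustrative), a Deployment manifest simply states the instance count, and K8s works to keep it true:

```yaml
# Deployment manifest (names/image illustrative): declares 10 replicas of
# application X; K8s restarts or replaces instances to hold this count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-x
spec:
  replicas: 10
  selector:
    matchLabels:
      app: application-x
  template:
    metadata:
      labels:
        app: application-x
    spec:
      containers:
        - name: application-x
          image: registry.example.com/application-x:1.0
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f` is all it takes; the reconciliation loop does the rest.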

 

Comparing with Traditional Systems

Let us look at a few common scenarios in traditional systems and see how we can use K8s to fix their problems.

 

Case 1: Server Setup

Imagine we are creating an environment for a new project. We need to get our servers ready and set up the necessary firewall rules. The provisioning process needs a lot of follow-up with different teams, and careful thinking to avoid port conflicts, wrong entries, and omissions. It often consumes a lot of time and effort, and many a time it goes back and forth.

Every time we need a new environment for the project, we have to repeat it all over again. Moreover, we have to decide on the resource requirements for these environments well in advance, as it is difficult to change them later on.

What happens in Kubernetes?

The creation of an environment in K8s is much easier: a single service request creates a namespace. The new namespace shares an existing pool of servers with the other namespaces, and these shared servers, in turn, make resource allocation flexible and better optimized.
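For example, a namespace can be created declaratively with a one-off manifest (the name below is illustrative):

```yaml
# Namespace manifest (name illustrative); apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: project-x-dev
```

The same result can be had imperatively with `kubectl create namespace project-x-dev`.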

Coming to network communication and firewall setup, Kubernetes runs on its own virtual network. This keeps it separate and secured from the outside physical networks. Moreover, each instance of an application in K8s runs at a separate virtual IP, assigned to its Pod.

Because of the separate virtual IPs, applications can choose their ports without any conflicts with other applications. And, due to the secured environment, they can safely open ports for connecting to other applications within the network. As a result, K8s is able to replace time-consuming firewall setups with simple, configurable services.
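As a minimal sketch of such a configurable service (names and ports are illustrative), a Service manifest gives the application a stable in-cluster address and port, regardless of which Pods back it:

```yaml
# Service manifest (names/ports illustrative): exposes Pods labeled
# app=application-x inside the cluster on port 80, forwarding to
# container port 8080 on each Pod.
apiVersion: v1
kind: Service
metadata:
  name: application-x
  namespace: project-x-dev
spec:
  selector:
    app: application-x
  ports:
    - port: 80
      targetPort: 8080
```

Other applications in the cluster can then reach it by its DNS name, with no firewall tickets involved.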

 

Case 2: Application Deployment

Automated Deployment & Management

Containers, as mentioned above, make the replication of application instances across servers easy and consistent. The challenge, however, was to manage these instances in large numbers, in groups, and to handle them in distributed environments.

In the traditional approach, development teams had to put a lot of effort into implementing these runtime needs correctly. We desperately needed solid frameworks that handle these complex things well at scale.

Docker Swarm was one of the leading open-source tools addressing these issues for the Docker environment. When Kubernetes was launched in 2015, it came with a much simpler service interface. It supported a wider range of features and much higher scalability compared to Docker Swarm. Its auto-scaling feature to handle spikes in application load was another notable one.
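That auto-scaling is itself just another piece of declarative configuration. A sketch (names and thresholds are illustrative):

```yaml
# HorizontalPodAutoscaler (names/values illustrative): scales the named
# Deployment between 2 and 10 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: application-x
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: application-x
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When load spikes, K8s adds replicas up to the maximum; when it subsides, it scales back down.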

It simplified service discovery, introduced features like automated deployment, and provided many more features, all driven by external configuration and metadata. The external configuration completely separates our application builds from their deployment needs, and helps the development team focus more on the core business logic.
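As a minimal sketch of that separation (names and keys are illustrative), deployment-specific settings can live in a ConfigMap instead of being baked into the application build:

```yaml
# ConfigMap (names/keys illustrative): deployment-specific settings kept
# outside the application image; Pods can consume these as environment
# variables or mounted files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-x-config
data:
  DATABASE_HOST: db.internal.example
  LOG_LEVEL: info
```

The same application image can then move between environments unchanged, picking up a different ConfigMap in each.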

Most of all, K8s was time-tested in handling complex applications through its development and use at Google. It has its origin in an internal Google system named Borg, capable of handling 2 billion containers per week. It is highly configurable and includes many best practices for handling a variety of systems.

 

Conclusion

  • Apart from being rich in features, it provides a lot of flexibility in its use.
  • It supports multiple container technologies besides Docker.
  • We can run Kubernetes on bare metal, virtual machines, or cloud servers.
  • We can use it as a private cloud, move it to a public cloud, or use it as a hybrid.
  • It supports a wide range of external storage systems.
  • We can easily integrate it with other cloud-native solutions.

 

Having discussed what K8s is capable of doing, we need to keep in mind that its core objective is the automation of deployment activities. It provides easy integration with many supporting components. To maximize its effectiveness in critical production environments, we need to choose the right set of supporting tools. We will discuss more on this in our next article, on Kubernetes Features.