By now, you should understand what containers are and how to build and run them using Docker. However, the way we ran containers with Docker is not optimal from a production standpoint. Here are a few considerations to think about:

  • Containers are portable and can run on any machine with Docker installed, and multiple containers share a server’s resources to optimize consumption. Now, think of a microservices application comprising hundreds of containers. How will you choose which machine to run each container on? What if you want to dynamically reschedule containers to another machine based on resource consumption?
  • Containers enable horizontal scalability: you can create copies of a container and place a load balancer in front of the pool. One way of doing this is to decide the number of containers upfront and deploy them, but that isn’t optimal resource utilization. What if you could scale your containers dynamically with traffic – in other words, create additional container instances to handle the extra load when traffic rises and remove them when it drops?
  • Container health checks report on the health of your containers. What if a container becomes unhealthy and you want it to auto-heal? What would happen if an entire server went down and you wanted to reschedule all the containers running on it to another server?
  • Since containers mostly run within a server and can see each other, how would you ensure that only the required containers can interact with one another – something we routinely enforce with VMs? We cannot compromise on security.
  • Modern cloud platforms allow us to run autoscaling VMs. How can we take advantage of that when running containers? For example, if I need just one VM for my containers during the night and five during the day, how can I ensure that machines are dynamically allocated only when we need them?
  • How do you manage the networking between multiple containers if they are part of a larger service mesh?

The answer to all these questions is a container orchestrator, and the most popular and de facto standard for that is Kubernetes.

Kubernetes is an open source container orchestrator. It was first developed by engineers at Google and then open sourced to the Cloud Native Computing Foundation (CNCF). Since then, the buzz around Kubernetes has not subsided, and for an excellent reason – Kubernetes, together with containers, has changed the technology mindset and how we look at infrastructure entirely. Instead of treating servers as machines dedicated to an application, or as part of an application, Kubernetes lets you view every server as simply an entity with a container runtime installed. When servers are treated as a standard, uniform setup, we can run virtually anything on a cluster of them. So, you don’t have to plan for high availability (HA), disaster recovery (DR), and other operational aspects for every application in your tech stack. Instead, you can cluster all your servers into a single unit – a Kubernetes cluster – containerize all your applications, and offload all container management functions to Kubernetes. You can run Kubernetes on bare-metal servers, on VMs, or as a managed service in the cloud through one of the many Kubernetes-as-a-Service offerings.
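To make "offloading container management to Kubernetes" concrete, here is a minimal sketch of a Kubernetes Deployment manifest addressing several of the concerns above: a desired replica count (horizontal scaling), automatic rescheduling if a node fails, and a liveness probe for auto-healing. The application name, image, port, and health endpoint are illustrative placeholders, not taken from the original text.

```yaml
# Hypothetical Deployment manifest; name, image, port, and /healthz
# endpoint are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies running and
                               # reschedules them if a node goes down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 8080
        livenessProbe:         # auto-heal: restart the container when unhealthy
          httpGet:
            path: /healthz
            port: 8080
```

You would apply this with `kubectl apply -f deployment.yaml`, and Kubernetes continuously reconciles the cluster toward the declared state. Dynamic scaling with traffic is handled separately, for example by attaching a HorizontalPodAutoscaler to this Deployment.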
