Kubernetes, the Swiss army knife for microservices: introduction — Part 1

almamon rasool abdali
Jun 14, 2021

In order to deep-dive into Kubernetes, I will write a series of posts covering it from definition to production.

Kubernetes is an open-source orchestrator originally developed by Google for deploying scalable, reliable systems in containers through application-oriented APIs. Because Kubernetes has a rich and growing open-source community, it suits not just the needs of internet-scale companies but cloud-native developers of all sizes.

Kubernetes provides the software necessary to successfully build and deploy
distributed systems that are reliable (the system keeps working even if part of it crashes or otherwise fails), scalable (the system can grow its capacity to keep up with ever-increasing usage without a radical redesign of the distributed system that implements the services), and highly available (the system must maintain availability even during software rollouts or other maintenance events).

Why Kubernetes, and why do we need to orchestrate containers?

1- Containers need a lot of orchestration to run efficiently and resiliently
(execution management and scheduling, replacement of dead containers, and cluster rebalancing).
2- Containers are designed to be short-lived and fragile.
3- Kubernetes is designed first and foremost to do replacement and rebalancing (it will kill and re-deploy a container in the cluster if it even thinks about misbehaving).
4- Containers are optimized for low weight and are therefore a lot less forgiving.
5- In the real world, individual containers fail a lot more often than individual virtual machines.
6- A failure of one computing unit must not take down another (isolation),
resources should be reasonably well balanced geographically to distribute load (orchestration), and we need to detect and replace failures near-instantaneously (scheduling).

So far, what is Kubernetes?

Kubernetes provides managed clusters in which containers run, heavily scheduled and orchestrated. Kubernetes detects a container failure and replaces the container immediately. It makes sure that containers are spread reasonably evenly across physical machines (so as to lessen the effect of a machine failure on the system) and manages the overall network and memory resources of the cluster. Borg, the internal Google system from which Kubernetes descends, was developed to fulfill the need for orchestration and scheduling software that handles isolation, load balancing, and placement. Borg schedules and launches approximately 7,000 containers a second on any given day.

A service running in a container managed by Kubernetes should be designed to do a very small number of discrete things. If your services are small and of limited purpose, they can more easily be scheduled and rearranged as load demands change. Otherwise, the dependencies become too much to manage and hurt the scale or the stability of the system.

A simplified view of the basic Kubernetes layout: bunches of machines sit networked together in lots of data centers, and each of those machines hosts one or more containers. Those worker machines are called nodes, while other machines run special coordinating software that schedules containers onto the nodes; these machines are called masters. Collections of masters and nodes are known as clusters.

Masters and nodes are defined by which software components they run.

The master runs three main items:

  1. API Server — Nearly all the components on the master and nodes accomplish their respective tasks by making API calls, and these calls are handled by the API Server running on the master.
  2. etcd — A service whose job is to keep and replicate the current configuration and run state of the cluster. It is implemented as a lightweight distributed key-value store and was developed inside the CoreOS project.
  3. Scheduler and Controller Manager — These processes schedule containers (actually, pods, but more on them later) onto target nodes. They also make sure that the correct number of these things is running at all times (a sketch of such a replicated workload follows this list).
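
As a rough sketch of what the scheduler and controller manager do, the hypothetical manifest below (the name hello-web and the nginx image are illustrative placeholders, not something this post prescribes) declares that three replicas of a pod should exist; the scheduler places them on nodes, and the controller manager replaces any replica whose container or machine fails:

```yaml
# Hypothetical example: a Deployment asking Kubernetes to keep three
# replicas of a pod running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name for this sketch
spec:
  replicas: 3                # desired state: three copies, always
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web       # pods created from this template carry the label
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any container image would do here
          ports:
            - containerPort: 80
```

If a node dies, the controller manager notices that fewer than three replicas exist and the scheduler places new pods on the surviving nodes.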

A node usually runs two important processes:
1. Kubelet — A special background process (daemon) that runs on each node and whose job is to respond to commands from the master to create, destroy, and monitor the containers on that host.
2. Proxy — This is a simple network proxy that’s used to separate the IP address of a target container from the name of the service it provides (a Service sketch that relies on this proxy follows this list).
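
To make the proxy’s job concrete, here is a minimal, hypothetical Service manifest (again, hello-web is just a placeholder). Clients address the stable service name, and the proxy on each node forwards that traffic to whichever pods currently match the selector, whatever their individual IP addresses happen to be:

```yaml
# Hypothetical Service: clients use the stable name "hello-web", and the
# node's proxy routes the traffic to matching pods behind the scenes.
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web     # send traffic to any pod carrying this label
  ports:
    - port: 80         # port the service exposes
      targetPort: 80   # port the container listens on
```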

These various parts can be distributed across different machines for scale or all run on the same host for simplicity.

The key difference between a master and a node comes down to who’s running which set of processes.

Pods — A pod is a collection of containers and volumes that are bundled and scheduled together because they share a common resource, usually a filesystem or an IP address. Kubernetes introduces some simplifications with pods compared to plain Docker. In the standard Docker configuration, each container gets its own IP address. Kubernetes simplifies this scheme by assigning a shared IP address to the pod.

Containers in the pod all share that address and communicate with one another via localhost. In this way, you can think of a pod a little like a VM, because it essentially emulates a logical host for the containers inside it. This is a very important optimization. Kubernetes schedules and orchestrates things at the pod level, not the container level, which means that if you have several containers running in the same pod, they have to be managed together. This concept, known as shared fate, is a key underpinning of any clustering system.
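
A minimal sketch of that idea, assuming a made-up pod with two containers: both containers share the pod’s single IP address, so the sidecar can reach the web server over localhost without knowing anything about container IPs:

```yaml
# Hypothetical two-container pod: one shared IP, containers talk via localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar     # placeholder name for this sketch
spec:
  containers:
    - name: web
      image: nginx:1.25      # serves HTTP on port 80 inside the pod
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36    # polls the web container over localhost
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```

Because the two containers are scheduled as one unit, they always land on the same node and live and die together, which is exactly the shared-fate behavior described above.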

almamon rasool abdali

A software developer with 14 years of experience, including 9 years in cloud and 7 years as a machine learning engineer and data scientist.