
Kubernetes Cluster: Setup, Security, Maintenance

05th Sep, 2023

    Kubernetes, also referred to as K8s, is an open-source orchestration platform developed by Google for managing containerized applications in various environments. In this article, you’ll learn specifically about the Kubernetes cluster. If you’re interested in becoming a Certified Kubernetes Administrator, you can enroll in our Kubernetes Course, where you will gain the knowledge needed to automate the deployment, scaling, and management of applications. 

    If you are familiar with Docker, you may be aware of Docker Swarm, an orchestration tool that Docker offers. However, Kubernetes is preferred over Docker Swarm by nearly 88 percent of organizations. If you want to learn Kubernetes and Docker, you can explore our DevOps Courses Online, where you will find courses on the different DevOps tools and technologies. But why use Kubernetes at all? When you deploy your containerized applications in a production environment, you will need to manage many containers. To guarantee nearly zero downtime for your application, a replacement container must start instantly whenever one goes down. Are you going to do that manually? Of course not! Kubernetes fully automates the application's deployment, scaling, and administration. 

    What is a Kubernetes Cluster?

    A Kubernetes cluster is a set of nodes, or worker machines, running containerized applications. A cluster is made up of two types of nodes: the master nodes, or control plane, which handle and manage the cluster, and the worker nodes, which actually run the applications. These two types of nodes are themselves composed of different components. Let us discuss these components in brief. 

    Control Plane Components

    The Kubernetes master node, also known as the control plane, is in charge of controlling the cluster's state. The control plane makes important cluster-wide decisions and reacts to cluster events, such as starting a new Kubernetes pod when needed. 

    The API server, the scheduler, the controller manager, etcd, and an optional cloud controller manager are the main components that make up the control plane. 

    • kube-apiserver: It exposes the Kubernetes API and serves as the control plane's front end. All requests, both internal and external, are validated and then processed by the API server. When you use the kubectl command-line interface, you communicate with the kube-apiserver via REST calls. The kube-apiserver scales horizontally by deploying more instances. 
    • etcd: etcd, a reliable distributed key-value store, is the single source of truth about the cluster's state. It stores the configuration data and details about the cluster's status and is fault-tolerant. 
    • kube-scheduler: The kube-scheduler is in charge of scheduling pods on the various nodes while taking resource availability and usage into account. It ensures that none of the cluster's nodes are overloaded and places each pod on the most appropriate node based on its knowledge of the overall resources available. 
    • kube-controller-manager: The controller-manager is a collection of controller processes that continuously run in the background to manage and regulate the cluster's status. The controller-manager makes adjustments to keep the cluster's present state in line with the desired state. 
    • cloud-controller-manager: In a cloud environment, the cloud-controller-manager connects your cluster with the cloud provider's API. There is no cloud-controller-manager in a local setup such as one running Minikube. 
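You can see these control plane components on a running cluster yourself. In a kubeadm-style setup (an assumption; names vary by distribution), they run as pods in the kube-system namespace:

```bash
# List the control plane pods (kubeadm-style clusters)
kubectl get pods -n kube-system

# Typical output includes pods such as:
#   kube-apiserver-<node-name>
#   kube-controller-manager-<node-name>
#   kube-scheduler-<node-name>
#   etcd-<node-name>
```

Managed services such as GKE or EKS hide these components from you, so the list may look different there.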

    Node Components

    The worker nodes comprise three important components - the kubelet, the kube-proxy, and a container runtime such as Docker. 

    • kubelet: Every node runs a kubelet that monitors the health and proper operation of the containers. The kubelet receives a set of PodSpecs through a variety of mechanisms and ensures that the containers described in those PodSpecs are running and healthy. 
    • kube-proxy: A service named kube-proxy runs on each worker node; it maintains network rules on the host and makes services available to the outside world. It handles request forwarding to the proper pods/containers across the many isolated networks in a cluster. 
    • container-runtime: The software used to run containers is known as a container runtime. Kubernetes supports Open Container Initiative-compliant runtimes, including Docker, CRI-O, and containerd. 
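You can check which container runtime each node is using; the -o wide flag adds a CONTAINER-RUNTIME column to the node listing:

```bash
kubectl get nodes -o wide
# The CONTAINER-RUNTIME column shows values such as containerd://1.6.x or docker://20.10.x
```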

    A Kubernetes cluster can be built from individual nodes running on either physical or virtual systems. Depending on the specific resources and skills of your organization, this process may be automated or carried out manually. Through the control plane, Kubernetes manages node deployment and health monitoring across the cluster. The control plane handles logistics and repair tasks, such as detecting crashes and resolving them with additional deployments to achieve the manifest-defined state. 

    If things look confusing to you, you can enroll in our Docker and Kubernetes course. 

    The image below shows the different components of a Kubernetes cluster. 

    Source: X-Team 

    In some cases, a single Kubernetes cluster cannot handle the application load or distribute the application to end users appropriately. Multi-cluster Kubernetes solutions are perfect for dividing the work among several clusters in such circumstances. A Kubernetes multi-cluster setup consists of several Kubernetes clusters. 

    With just one master node, developers can deploy and manage huge groups of containers using Kubernetes clusters. However, single-master clusters are more prone to failure. Multi-master clusters, on the other hand, use multiple (often at least three) master nodes, each of which has access to the same pool of worker nodes, to maintain quorum if one or more members are lost. 

    Now, before focusing on how to deploy a production-ready Kubernetes cluster, let’s discuss some common tools and technologies you will come across while working with Kubernetes: 

    • Minikube: For use in local development, Minikube is a lightweight Kubernetes distribution. It is developed as a component of the Kubernetes project and comes with all the main cluster functionalities implemented. It can execute your cluster and its workloads using containers or a virtual machine environment on Linux, Mac, and Windows hosts. It creates a one-node cluster by default, but if you prefer, you may use a Minikube environment to create a multi-node cluster as well.  
    • Docker for Desktop: You may create and distribute containerized applications and microservices with Docker Desktop, an easy-to-install program for your Mac or Windows environment. It offers a straightforward interface that lets you manage your containers, applications, and images directly from your computer without resorting to the CLI for basic operations. It includes Docker Engine, Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.
    • kOps: The quickest way to set up and run a production-grade Kubernetes cluster is with kOps. It's generally comparable to kubectl for clusters. A production-grade, highly available Kubernetes cluster may be built, destroyed, upgraded, and maintained with the assistance of kOps, which will also set up the required cloud infrastructure. It currently has official support for AWS, with beta support for DigitalOcean, GCE and OpenStack, and alpha support for Azure.
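As a quick illustration of the Minikube option above, a local multi-node cluster can be created with a single command (the node count here is just an example):

```bash
# Start a local cluster with one control plane node and one worker
minikube start --nodes 2

# Verify that both nodes are registered
kubectl get nodes
```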

    Steps for Deploying Production-ready Kubernetes Cluster

    The biggest difference is whether a cluster is deployed locally or in the cloud. Things will be quite simple if you intend to install it in a cloud such as Azure, GCP, or AWS. Most cloud service providers walk you through the procedure and provide sensible networking and storage defaults.

    On the other hand, you have a few additional options if you plan to create your own cluster locally. You can set everything up manually from scratch. This is an excellent way to become familiar with Kubernetes' inner workings, but if you plan to use the cluster in a production environment, it is advisable to use a tool designed for that purpose. If you just want to learn and don't want to set up everything from scratch, use something like Minikube or Docker Desktop. 

    You can use tools like kubeadm or kubespray to build a production cluster. Whichever Kubernetes deployment method you choose, the required knowledge is essentially the same: all major cloud and local providers conform to the same Kubernetes standards, so your fundamental understanding holds true regardless of the cluster. 

    How do you work with a Kubernetes Cluster?

    To create a Kubernetes cluster, you work with manifest files. These manifests are YAML or JSON files in which you specify the desired state of the cluster. The desired state defines which applications should be running, which images they should use, what other Kubernetes resources they need, how many replicas should be running, and several other configurations. 

    The Kubernetes API is used to specify the desired state of the cluster. You can interact with the cluster to configure or alter your desired state via the command line (using kubectl) or by using the API.  
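For example, a few common kubectl commands for inspecting and changing cluster state (resource and file names here are illustrative):

```bash
kubectl get pods                 # list pods in the current namespace
kubectl describe pod mypod       # show detailed state and events for a pod
kubectl apply -f manifest.yaml   # push a desired state to the cluster
kubectl delete -f manifest.yaml  # remove the resources defined in the file
```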

    Through the control plane, Kubernetes automatically manages clusters to align with their desired state. Scheduling cluster activities and registering and reacting to cluster events are duties of the Kubernetes control plane. To make sure that the cluster's actual state matches the desired state, the control plane continuously executes control loops. Suppose you specify the number of replicas to be 3; the control plane will try to keep that desired state in force at all times. If any of the replicas crash, the control plane detects the crash and immediately deploys new replicas to match the desired state. 
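The three-replica example above can be written as a manifest. This sketch of a Deployment (names and labels are illustrative) asks the control plane to keep three nginx pods running at all times:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3            # desired state: three pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```

If one of the three pods dies, the Deployment's controller notices the mismatch with `replicas: 3` and starts a replacement.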

    There are two ways to configure your resources in a K8s cluster - imperative and declarative. In the imperative approach, you configure the resource by executing commands directly from a terminal. In the declarative approach, you create a manifest file describing all the desired configurations and then apply it using the kubectl apply command. If this sounds abstract, let us see an example with the two approaches: 

    Task: You need to create a pod using the nginx image. 

    • Imperative Approach 

    To create a pod, you need to run the below command: 
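A minimal form of that imperative command might look like this (the pod name mypod is an assumption):

```bash
kubectl run mypod --image=nginx
```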

    • Declarative Approach 

    In this approach, you will create a manifest file, say mypod.yaml, and execute the kubectl apply command. 

    apiVersion: v1 
    kind: Pod 
    metadata: 
      name: mypod 
    spec: 
      containers: 
        - name: mycontainer 
          image: nginx 
          ports: 
            - name: mycontainer 
              containerPort: 80 
              protocol: TCP 

    A Kubernetes manifest specifies the resources (such as Deployments, Services, Pods, etc.) you wish to create and the configuration in which you want those resources to operate. The apiVersion field indicates the API group and version you want to use when creating the resource. The kind field lists the resource type you want to create; you can construct resources like Pods, Deployments, ReplicaSets, CronJobs, StatefulSets, etc. The command kubectl api-resources | more will list the available resources with their versions and other common details. The metadata section uniquely identifies a resource within a Kubernetes cluster: here, you can give the resource a name, set labels and annotations, specify a namespace, and more. The creation and management of resources are covered in the spec section: the container image to use, the number of replicas in a ReplicaSet, the selector criteria, the definitions of the liveness and readiness probes, and so on, will all be defined here. 

    Once the manifest file is ready, you can create the resources as below: 
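For the example above, that would be (assuming the file is named mypod.yaml, as before):

```bash
kubectl apply -f mypod.yaml
# pod/mypod created
```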

    Cluster in Relation to a Node, a Pod, an Object, and Other Kubernetes Terms

    Containers run inside a cluster, and the same is true for most Kubernetes resources: Ingresses, Services, and Pods all run in the cluster. Everything in the cluster is managed by the control plane, which manages traffic flow, schedules Pods to run, and oversees all other activities within the cluster. 

    Given that everything operates in a cluster, namespaces must also be understood. Like a cluster, a namespace is a logical container. You may think of a namespace as a small cluster of its own, because every object in Kubernetes resides in one. It does not function as a full cluster - it lacks its own API server, scheduler, and so on - but it does offer some level of separation of concerns. 
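Namespaces are created and targeted like any other resource; for example (the namespace name dev is illustrative):

```bash
# Create a namespace and run a pod inside it
kubectl create namespace dev
kubectl run mypod --image=nginx --namespace=dev

# List pods in that namespace only
kubectl get pods -n dev
```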

    Kubernetes Cluster Management

    Depending on how you installed a Kubernetes cluster, you would manage it differently. The deployment of your cluster will have a significant impact on all external factors, including the number of nodes you have, the available outgoing IPs, and the type of storage you use. However, some characteristics are shared by all clusters.  

    Using kubectl, everything internal is managed in the same way. Once you are working inside the cluster, everything behaves the same because all major Kubernetes providers follow the same standards. Even so, you should be mindful that your choices will impact the outside world. For instance, if you configure a Kubernetes Service to use a load balancer, you must ensure that the underlying infrastructure can provide one. The most used managed Kubernetes services are Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). 
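When you manage more than one cluster, kubectl switches between them using contexts stored in your kubeconfig file (the context name below is an example):

```bash
# See which clusters/contexts kubectl knows about
kubectl config get-contexts

# Switch the active cluster
kubectl config use-context my-gke-cluster
```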

    Authentication and Authorization

    Kubernetes expects all communication with the API server to be done via HTTPS. In addition to HTTPS, authentication against the Kubernetes API is also required. You should enable service account tokens and at least one other kind of authentication, such as Basic Auth or X.509 client certificates. 

    If you work for a large firm, you might be interested in finding out how to enable SSO in your cluster by having your cluster synced with an OIDC or LDAP solution. Most cloud service providers will have a built-in solution available for you to use if you are deploying your cluster there. 

    After choosing your authentication strategy, you must confirm that users are authorized to perform the actions they request. This is achieved through role-based access control (RBAC). Kubernetes ships with a variety of predefined roles you can assign to users, but you also have the option of creating your own. This can be handy when you wish to specify strict permissions. 
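As a sketch, here is a Role granting read-only access to pods in one namespace, and a RoleBinding attaching it to a user (the namespace, role, and user names are all illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane               # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```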


    Kubernetes Networking

    Kubernetes networking might be difficult to understand initially. A pod behaves much like a virtual machine: each pod in your cluster has its own IP address, so you won't need to worry about allocating ports for different applications to communicate with one another. Containers inside a Pod can communicate with one another via localhost because they share the same network namespace and IP address. This means that port coordination is not required across pods, but it is required between containers running in the same pod. 

    The above mainly applies to communication between applications inside your cluster. It gets a little more involved if you want to make your application accessible to the public. In that situation, you should use Kubernetes Services. 
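As an example, a Service of type LoadBalancer that exposes a set of nginx pods to the outside world (the name, label selector, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer   # asks the cloud provider for an external IP
  selector:
    app: nginx         # routes traffic to pods carrying this label
  ports:
    - port: 80         # port the Service listens on
      targetPort: 80   # port the pods' containers listen on
```

On a local cluster without a cloud load balancer, a NodePort Service is the usual alternative.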


    Conclusion

    Simply put, a cluster is a logical container for a Kubernetes deployment. It contains the API server, controller manager, scheduler, and anything else needed to make your applications function. In this article, you learned about Kubernetes cluster architecture and how to work with clusters, along with the basics of Kubernetes networking, authentication, and authorization. 

    Frequently Asked Questions (FAQs)

    1. What is a Kubernetes Cluster?

    A Kubernetes cluster is a set of nodes, or worker machines, running containerized applications. It is basically a logical container for a Kubernetes deployment. 

    2. What makes up a Kubernetes cluster?

    A Kubernetes cluster is made up of two types of nodes - master nodes, or the control plane, that handle and manage the cluster and the worker nodes that actually run the applications. 

    3. What is a Kubernetes cluster vs node?

    A Kubernetes node is a single worker machine, physical or virtual, that runs K8s workloads. A Kubernetes cluster is a set of such node machines for running containerized applications. 


    Ashutosh Krishna


    Ashutosh is an Application Developer at Thoughtworks. Apart from his love for Backend Development and DevOps, he has a keen interest in writing technical blogs and articles. 
