
Kubernetes vs Docker


What is Kubernetes?

Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open source cluster management system, initially developed by three Google employees during the summer of 2014, which grew exponentially and became the first project to be donated to the Cloud Native Computing Foundation (CNCF).

It is, at heart, an open source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes you can manage your containerized applications more efficiently.


Kubernetes is a HUGE project with a lot of code and functionality. Its primary responsibility is container orchestration: making sure that all the containers that execute various workloads are scheduled to run on physical or virtual machines. The containers must be packed efficiently, following the constraints of the deployment environment and the cluster configuration. In addition, Kubernetes must keep an eye on all running containers and replace dead, unresponsive, or otherwise unhealthy ones.

Kubernetes uses Docker to run images and manage containers. Nevertheless, K8s can use other engines, for example rkt from CoreOS. The platform itself can be deployed within almost any infrastructure: on the local network, a server cluster, a data center, or any kind of cloud, whether public (Google Cloud, Microsoft Azure, AWS, etc.), private, hybrid, or even a combination of these. It is noteworthy that Kubernetes supports the automatic placement and replication of containers over a large number of hosts. It brings a number of features, and it can be thought of as:

  • As a container platform
  • As a microservices platform
  • As a portable cloud platform, and a lot more.

Kubernetes covers most of the operational needs of application containers. The top 10 reasons why Kubernetes is so popular are as follows:

  • One of the largest open source projects in the world
  • Great Community Support
  • Robust Container deployment
  • Effective Persistent storage
  • Multi-Cloud Support (Hybrid Cloud)
  • Container health monitoring
  • Compute resource management
  • Auto-scaling Feature Support
  • Real-world Use cases Available
  • High availability by cluster federation

Below is a list of the features which Kubernetes provides; a small illustrative manifest follows the list:

  • Service Discovery and Load Balancing: Kubernetes assigns containers their own IP addresses and a unique DNS name, which can be used to balance the load across them.
  • Planning & Placement: Placement of containers on nodes is a crucial feature; the scheduler decides based on the resources a container requires and other restrictions.
  • Auto Scaling: Based on CPU usage, horizontal scaling of applications is triggered automatically, and can also be driven from the command line.
  • Self-Healing: This is a unique feature of Kubernetes: it restarts containers automatically when they fail. If a node dies, its containers are replaced or rescheduled on other nodes, and containers that don't respond to health checks are stopped.
  • Storage Orchestration: This feature enables the user to mount a network storage system as a local file system.
  • Batch Execution: Kubernetes manages both batch and CI workloads, replacing containers that fail along the way.
  • Deployments and Automatic Rollbacks: While rolling out configuration changes for a hosted application, Kubernetes progressively monitors health to ensure that it does not terminate all the instances at once, and it rolls back automatically in case of failure.
  • Configuration Management and Secrets: Classified information such as keys and passwords is stored in Kubernetes objects called Secrets, which are used to configure an application without having to rebuild the image.
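
To make a few of these features concrete, here is a minimal sketch of a Deployment manifest; the names, image and probe path are illustrative assumptions, not taken from this article. It declares three replicas, a rolling-update strategy that never takes all instances down at once, and a liveness probe that drives the self-healing behavior described above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app              # illustrative name
spec:
  replicas: 3                  # desired state: three pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # never terminate all instances at once
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        livenessProbe:         # self-healing: restart on failed health checks
          httpGet:
            path: /            # illustrative probe path
            port: 80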

What is Docker?

Docker is a lightweight containerization technology that has gained widespread popularity in the cloud and application packaging world. It is an open source framework that automates the deployment of applications in lightweight and portable containers. It uses a number of the Linux kernel's features, such as namespaces, cgroups, and AppArmor profiles, to sandbox processes into configurable virtual environments. Though the concept of container virtualization isn't new, it has been getting attention lately with bigwigs like Red Hat, Microsoft, VMware, SaltStack, IBM, and HP throwing their weight behind newcomer Docker. Start-ups are betting their fortunes on Docker as well: CoreOS, Drone.io, and Shippable are some of the start-ups modeled to provide services based upon Docker. Red Hat has already included it as a primary supported container format for Red Hat Enterprise Linux 7.

Why is Docker popular?

The major factors driving Docker’s popularity are its speed, ease of use and the fact that it is largely free. In performance, it is even said to be comparable with KVM. A container-based approach, in which applications can run in isolation and without relying on a separate operating system, can really save huge amounts of hardware resources. Industry experts have started looking at it as hardware multi-tenancy for applications. Instead of having hundreds of VMs running per server, what if it were possible to have thousands of hardware-isolated applications?

Docker is used to run software packages called "containers". A container is a standardized unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Containers have been called the "fastest growing cloud-enabling technology" because they speed the delivery of software and cut the cost of operating it. Writing software is faster. Deploying it is easier, in your data center or your preferred cloud. And running it requires less hardware and support.

Although container technology has existed for decades, Docker makes it work for the enterprise with core features enterprises require in a container platform and best-practice services to ensure success. And containers work on both legacy applications and new development.

Existing, mission-critical applications can be “containerized,” often with little or no change. The result is instant savings in infrastructure, better security, and reduced labor. And new development happens faster because engineers only target a single platform instead of a variety of servers and clouds. Less code to write. Less testing. Faster delivery.

Introduction to Docker Swarm

Docker Swarm is the native clustering and scheduling tool for Docker. It allows IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. It is written in Go and was first released in November 2015 by Docker, Inc.

The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. SwarmKit is a separate project which implements Docker's orchestration layer and is used directly within Docker. It is a toolkit for orchestrating distributed systems at any scale, with primitives for node discovery, Raft-based consensus, task scheduling and more.

Its main benefits are:

  • Distributed: SwarmKit uses the Raft Consensus Algorithm in order to coordinate and does not rely on a single point of failure to perform decisions.
  • Secure: Node communication and membership within a Swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization, and transport encryption, automating both certificate issuance and rotation.
  • Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate.

Current versions of Docker include swarm mode for natively managing a cluster of Docker Engines, called a swarm. One can use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior. All you need to do is initiate swarm mode to use the latest features that come with the Docker Engine.
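
A minimal sketch of that flow (the service name here is an illustrative assumption):

$ docker swarm init                # turn this Engine into a swarm manager
$ docker swarm join-token worker   # print the join command for worker nodes
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls                # verify the service and its replicas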

Docker Swarm Mode Architecture

Every node in Swarm Mode has a role, which can be categorized as Manager or Worker. A manager node actually orchestrates the cluster: performing health checks, running the containers serving the API, and so on. A worker node just executes tasks, which are actually containers; it cannot decide to schedule containers on a different machine, and it cannot change the desired state. Workers only take work and report back status. You can promote or demote a node easily with a one-liner command, as shown below.
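
For example (the node names here are illustrative):

$ docker node promote worker1    # worker1 becomes a manager
$ docker node demote manager2    # manager2 becomes a worker
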
Managers and workers use two different communication models. Managers have a built-in Raft system that allows them to share information for leader election. At any one time only one manager, the leader, actually performs operations such as scaling, and the managers use a leader-follower model to figure out which one is supposed to be what. No external key-value store is required, as a built-in internal distributed state store is available.
Workers, on the other hand, use a gossip network protocol, which is quite fast and consistent. Whenever a new container or task gets created in the cluster, gossip broadcasts to all the other containers in a specific overlay network that this new container has started. Please remember that ONLY the containers running in that specific overlay network are notified, NOT everything globally. Gossip is optimized for heavy traffic.

How does Docker Swarm differ from Docker?

Today the Docker platform supports three variants of Swarm:

  • Docker Swarm (Classic)
  • SwarmKit (the foundation for Docker Swarm Mode)
  • Docker Swarm Mode

Let us go through each of them, one by one.

Docker Swarm 1.0 was first introduced with the Docker Engine 1.9 release in November 2015. It was a separate GitHub repository and a piece of software which needed to be installed to turn a pool of Docker Engines into a single, virtual engine. It was announced as the easiest way to run Docker applications at scale on a cluster. You don't have to worry about where to put containers, or how they talk to each other: it just handles all that for you.

At DockerCon 2016, Docker Inc. announced Docker Swarm Mode for the first time. Swarm Mode came integrated directly into Docker Engine, which means you don't need to install it separately; all you need is to initiate it using the `docker swarm init` command. With the optional "Swarm Mode" feature integrated right into the core Docker Engine, native management of a cluster of Docker Engines (orchestration, decentralized design, service and application deployment, scaling, desired-state reconciliation, multi-host networking, service discovery and routing mesh) is just a matter of a few one-liner commands.

That said, Docker Swarm Mode is fundamentally different from Classic Swarm. The basic differences are listed below:

Docker Swarm Mode vs Docker Classic Swarm:

  • Integration: Docker Swarm Mode comes integrated into Docker Engine. Classic Swarm is a separate GitHub project and is NOT integrated into Docker Engine.
  • Service discovery: Swarm Mode comes with inbuilt service discovery. Classic Swarm needs an external KV store such as Consul or etcd.
  • Inbuilt features: Swarm Mode ships with scaling, rolling updates, service discovery, load balancing, routing mesh and topological placement. Classic Swarm lacks inbuilt load balancing, scaling, routing mesh, etc.
  • Security: Swarm Mode secures both the control and data planes. In Classic Swarm, the control and data planes are insecure.

Let's talk about SwarmKit a bit.

SwarmKit is an open source "plumbing" project: a toolkit for orchestrating distributed systems at any scale, with primitives for node discovery, Raft-based consensus, task scheduling and more.

Its main benefits, as noted above, are that it is distributed (Raft-based consensus, no single point of failure), secure (mutual TLS for node authentication, role authorization and transport encryption out of the box), and simple (operationally minimal, with no external database required).

SwarmKit is completely built in Go and leverages a standard project structure to work well with Go tooling. If you want to learn more about SwarmKit, head over to https://github.com/docker/swarmkit/

How can Docker be used with Kubernetes?

From 30,000 feet, Docker and Kubernetes might appear to be similar technologies: both are open platforms that let you run applications within Linux containers. But as you dive a little deeper, you'll find that the technologies operate at different layers of the stack, and can even be used together.

Let's talk about Docker first.

Docker provides the ability to package and run an application in a loosely isolated environment called a container. At their core, containers are a way of packaging software. The unique feature of containers is that when you run one, you know exactly how it will run: it is predictable, repeatable and immutable, with no unexpected errors when you move it to a new machine or between environments. All of your application's code, libraries, and dependencies are packed together in the container as an immutable artifact. You can think of running a container like running a virtual machine, without the overhead of spinning up an entire operating system.

The Docker CLI provides the mechanism for managing the life cycle of containers. Whereas the Docker image defines the build-time framework of runtime containers, CLI commands start, stop, restart and perform other lifecycle operations on those containers. Today, containers can be orchestrated and made to run on multiple hosts. The questions that then need answering are: how are these containers coordinated and scheduled? And how will the applications running in these containers communicate with each other? The answer is Kubernetes.

Today, Kubernetes mostly uses Docker to package, instantiate, and run containerized applications. That said, various other container runtimes are available, but Docker is the most popular runtime used by Kubernetes. Both Kubernetes and Docker build a comprehensive standard for managing containerized applications intelligently while providing powerful capabilities. Docker provides a platform for building, running and distributing Docker containers, and brings its own clustering tool which can be used for orchestration. Kubernetes, however, is an orchestration platform for Docker containers that is more extensive than Docker Swarm: it is meant to coordinate clusters of nodes at scale in production in an efficient manner, with a plug-and-play architecture for container orchestration that provides features like high availability across distributed nodes.

For example, today it is possible to run Kubernetes on the Docker EE 2.0 platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, orchestrators used, and where it's deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps you deliver safer applications through integrated security solutions.


Difference between Kubernetes and Docker

i) Kubernetes vs Docker

  • Setup and installation

  • Installation: Setting up the Kubernetes master and worker node components across a cluster requires a series of manual steps. Installing Docker, by contrast, is a one-liner command on Linux platforms like Debian, Ubuntu, and CentOS.
  • Platforms: Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of bare-metal servers. For a single-node K8s cluster, one can use Minikube (see the illustration after this list), and to install a single-node Docker Swarm or Kubernetes cluster, one can deploy Docker for Mac or Docker for Windows.
  • Windows: Kubernetes support for Windows Server is in beta, while Docker has official support for Windows 10 and Windows Server 2016 and 1709.
  • Upgrades: Kubernetes client and server packages must be upgraded manually on every system, whereas upgrading Docker Engine under Docker for Mac & Windows takes just one click.

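As a minimal illustration of the single-node option mentioned above (assuming Minikube is already installed):

$ minikube start      # boots a single-node Kubernetes cluster locally
$ kubectl get nodes   # verify the node reports Ready
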
  • How the two systems work

Kubernetes:
Kubernetes operates at the application level rather than at the hardware level. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.

Kubernetes can run on top of Docker but requires you to know the command line interface (CLI) specifications for both to access your data over the API.

There is a Kubernetes client called kubectl which talks to the Kubernetes API running on your master node (a minimal illustration follows the component list below).

Unlike the master components, which usually run on a single node (unless a high-availability setup is explicitly configured), node components run on every node:
  • kubelet: the agent running on each node; it inspects container health, reports to the master, and listens for new commands from the kube-apiserver
  • kube-proxy: maintains the network rules
  • container runtime: software for running the containers (e.g. Docker, rkt, runc)
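
Once the cluster is up, the kubectl client drives the API server; a minimal illustration:

$ kubectl get nodes                    # list cluster nodes and their status
$ kubectl get pods --all-namespaces    # list workloads across all namespaces
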
Docker:

Docker Platform is available in the form of two editions:
  • Docker Community Edition
  • Docker Enterprise Edition
Docker Community Edition comes with community-based support forums, whereas Docker Enterprise Edition offers enterprise-class support with defined SLAs and private support channels.

Docker Community and Enterprise Edition both come by default with Docker Swarm Mode. Additionally, Kubernetes is supported under Docker Enterprise Edition.

For Docker Swarm Mode, one can use a Docker Compose file and the docker stack deploy CLI to deploy an application across the cluster nodes.

The `docker stack` CLI deploys a new stack or updates an existing one. The client and daemon API must both be at least 1.25 to use this command; one can use the docker version command on the client to check the client and daemon API versions.
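
A minimal sketch of that workflow (the stack name and compose file are illustrative):

$ docker version --format '{{.Server.APIVersion}}'   # check the daemon API version (needs 1.25+)
$ docker stack deploy -c docker-compose.yml myapp    # deploy or update the stack
$ docker stack services myapp                        # list the services in the stack
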
  • Logging and Monitoring

Kubernetes:

Logging:


Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. A few popular logging tools are listed below:

Fluentd is an open source data collector for a unified logging layer. It is written in Ruby with a plug-in-oriented architecture, and it helps to collect, route and store logs from different sources.

While Fluentd is optimized to be easily extended through its plugin architecture, Fluent Bit is designed for performance. It is compact and written in C, so it can run on minimalistic IoT devices while remaining fast enough to transfer a huge quantity of logs. Moreover, it has built-in Kubernetes support; it is an especially compact tool designed to transport logs from all nodes.

Other tools, such as Stackdriver Logging provided by GCP and Logz.io, as well as other third-party drivers, are available too.
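Even without any of these, logs for individual containers can be pulled through the Kubernetes API; a quick sketch (the pod name nginx-app-5jyvm is reused from the CLI examples later in this article):

# Stream logs from a pod
$ kubectl logs -f nginx-app-5jyvm

# Logs from the previous (crashed) instance of the container
$ kubectl logs --previous nginx-app-5jyvm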



Monitoring:

There are various open source tools available for Kubernetes application monitoring, such as:

Heapster: Installed as a pod inside of Kubernetes, it gathers data and events from the containers and pods within the cluster.

Prometheus: Open source Cloud Native Computing Foundation (CNCF) project that offers powerful querying capabilities, visualization and alerting.

Grafana:  Used in conjunction with Heapster for visualizing data within your Kubernetes environment.

InfluxDB: A highly-available database platform that stores the data captured by all the Heapster pods.

cAdvisor: focuses on container-level performance and resource usage. It comes embedded directly into the kubelet and automatically discovers active containers.
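Once a metrics pipeline such as Heapster (or, in newer clusters, metrics-server) is running, resource usage can be queried straight from kubectl; a sketch:

# Per-node CPU/memory usage
$ kubectl top nodes

# Per-pod usage in a namespace
$ kubectl top pods --namespace default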



Docker

Logging:

Logging capabilities in Docker are exposed in the form of drivers, which is very handy since one gets to choose how and where log messages should be shipped. Logging driver plugins are available in Docker 17.05 and higher.

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers.

Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver.

In addition to using the logging drivers included with Docker, you can also implement and use logging driver plugins.

To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts.

The following example explicitly sets the default logging driver to syslog:

{
  "log-driver": "syslog"
}

When you start a container, you can configure it to use a different logging driver than the Docker daemon's default, using the --log-driver flag. If the logging driver has configurable options, you can set them using one or more instances of the --log-opt <NAME>=<VALUE> flag. Even if the container uses the default logging driver, it can use different configurable options, as sketched below.
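For example, a sketch of overriding the driver per container (nginx is just a placeholder image):

# Send this container's logs to journald instead of the daemon default
$ docker run -d --log-driver journald nginx

# Keep the default json-file driver but cap log size and rotation
$ docker run -d --log-driver json-file \
    --log-opt max-size=10m --log-opt max-file=3 nginx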
  • Size

Kubernetes

As per the official Kubernetes documentation, K8s v1.12 supports clusters of up to 5,000 nodes, within the following limits:
  • No more than 5000 nodes
  • No more than 150000 total pods
  • No more than 300000 total containers
  • No more than 100 pods per node.

Docker

According to Docker's blog post on scaling Swarm clusters, published in November 2015, Docker Swarm was scaled and performance-tested up to 30,000 containers and 1,000 nodes.
Specs
  • Discovery backend: Consul
  • 1,000 nodes
  • 30 containers per node
  • Manager: AWS m4.xlarge (4 CPUs, 16GB RAM)
  • Nodes: AWS t2.micro (1 CPU, 1 GB RAM)
  • Container image: Ubuntu 14.04

Results

Percentile    API Response Time    Scheduling Delay
50th          150ms                230ms
90th          200ms                250ms
99th          360ms                400ms

ii) Building and Deploying Containers with Docker

Docker can build images automatically by reading instructions from a text file called a Dockerfile. A Dockerfile is a simple text file that follows a specific format and instruction set, and contains all the commands, in order, needed to build a given image.

A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer. For example, consider this simple Dockerfile:

FROM nginx:latest
COPY wrapper.sh /
COPY html /usr/share/nginx/html
CMD ["./wrapper.sh"]
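The wrapper.sh script referenced by the CMD instruction is not shown here; a hypothetical minimal version might simply keep nginx running in the foreground:

#!/bin/sh
# wrapper.sh (hypothetical): do any pre-start work here, then run nginx in the foreground
exec nginx -g 'daemon off;'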

Each instruction creates one layer:

  • FROM creates a layer from the nginx:latest Docker image.
  • COPY adds files from your Docker client’s current directory.
  • CMD specifies what command to run within the container.

When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
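You can inspect these layers directly with the built-in history command; for example:

# Show the layers (roughly one per Dockerfile instruction) that make up an image
$ docker history nginx:latest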

Building a Docker Image

$ docker build -t hellowhale .

The `docker build` command shown above builds an image from a Dockerfile and a context. The build context is the set of files at a specified location, PATH or URL: the PATH is a directory on your local filesystem, while the URL is a Git repository location. (Note the trailing dot in the command above, which sets the current directory as the build context.)

Running the Docker Container

A running Docker image is called a Docker container. To expose port 80 from the container on the host machine and get it up and running, all you need is the command below:

$ docker run -d -p 80:80 --name hellowhale hellowhale

Tagging the Image

$ docker tag hellowhale userid/hellowhale

Pushing the Docker Image to DockerHub

Before you push the Docker image to Docker Hub, you need to log in to Docker Hub first using the commands below:

$ docker login
$ docker push userid/hellowhale


iii) Managing container with Kubernetes

The Docker CLI on a standalone system is used to build, ship, and run your Docker containers. But if you want to run multiple containers across multiple machines, you need a robust orchestration tool, and Kubernetes is the most popular one on the list.

Kubernetes is an open source container orchestration platform, allowing large numbers of containers to work together in harmony, reducing operational burden. It helps with things like running containers across many different machines, scaling up or down by adding or removing containers when demand changes, keeping storage consistent with multiple instances of an application, distributing load between the containers and launching new containers on different machines if something fails.

Below is a list of comparable CLI commands used by Docker vs Kubernetes to manage containers:

  • docker run vs kubectl run

To run an nginx container with Docker:

$ docker run -d --restart=always --name nginx-app -p 80:80 nginx

To run an nginx Deployment with Kubernetes and then expose it (see the sketch after this command):

$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
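Note that kubectl run creates the Deployment, but unlike docker run -p it does not publish any port by itself; exposing the Deployment is a separate step. A sketch:

# Make the Deployment reachable as a Service inside the cluster
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http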

  • docker ps vs kubectl get

To list what is currently running with Docker:

$ docker ps -a

To list what is currently running in a Kubernetes cluster:

$ kubectl get po -a

  • docker exec vs kubectl exec

To execute a command in a Docker container, first find the container ID, then exec into it:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
55c103fa1296        nginx               "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp   nginx-app

$ docker exec 55c103fa1296 cat /etc/hostname

To execute a command in a Kubernetes container, first find the pod name, then use kubectl exec:

$ kubectl get po
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-5jyvm   1/1       Running   0          10m

$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
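Both CLIs also support interactive sessions; a quick sketch:

# Interactive shell with Docker
$ docker exec -it nginx-app /bin/sh

# Interactive shell with kubectl
$ kubectl exec -it nginx-app-5jyvm -- /bin/sh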

iv) Trends in Docker and Kubernetes

Docker, Inc. has around 550+ enterprise customers who use Docker in a production environment. A non-exhaustive list of companies actively using Docker is below:

  1. The New York Times
  2. PayPal
  3. Business Insider
  4. Cornell University (not a company, but still worth including)
  5. Splunk
  6. The Washington Post
  7. Swisscom
  8. Alm Brand
  9. Assa Abloy
  10. Expedia
  11. Jabil
  12. MetLife
  13. Societe Generale
  14. GE
  15. Groupon
  16. Yandex
  17. Uber
  18. Ebay
  19. Shopify
  20. Spotify
  21. New Relic
  22. Yelp

Recently, The Forrester New Wave™: Enterprise Container Platform Software Suites, Q4 2018 report stated that Docker is leading the pack, with a robust container platform well-suited for the enterprise that offers a secure container supply chain from the developer's desktop to production.

Lots of organizations are already using Kubernetes in production, like the ones listed on the Kubernetes case studies page: eBay, Buffer, Pearson, Box, Wikimedia, and more. But that is not a complete list; Kubernetes is even more versatile than the official case studies page suggests.



Microservices Usage

Microservices help developers break up monolithic applications into smaller components. They can move away from all-at-once massive package deployments and break up apps into smaller, individual units that can be deployed separately. Smaller microservices can give apps more scalability, more resiliency and - most importantly - they can be updated, changed and redeployed faster. Some of the biggest public cloud applications run as microservices already.

Containers are a packaging strategy for microservices. Think of them more as process containers than virtual machines. They run as a process inside a shared operating system. A container typically only does one small job - validate a login or return a search result. Docker is a tool that describes those packages in a common format, and helps launch and run them. Linux containers have been around for a while, but their popularity in the public cloud has given rise to an exciting new ecosystem of companies building tools to make them easier to use, cluster and orchestrate them, run them in more places, and manage their life cycles.

Over the last two years, many different types of software vendors, from operating system to IT infrastructure companies, have joined the container ecosystem. There's already an industry organization, the Open Container Initiative, guiding the market and making sure everyone plays well together. IBM, HP, Microsoft, VMware, Google, Red Hat, CoreOS: these are just some of the major vendors racing to make containers as easy as possible for developers to use, share, protect, and scale.

The rising demand for multi-cloud environments

With an estimated 85% of today's enterprise IT organizations employing a multi-cloud strategy, it has become more critical that customers have a 'single pane of glass' for managing their entire application portfolio. Most enterprise organizations have a hybrid and multi-cloud strategy. Containers have helped make applications portable, but let us accept the fact that even though containers are portable today, managing them across clouds is still a nightmare. The reasons:

  • Each Cloud is managed under a separate operational model, duplicating efforts
  • Different security and access policies across each platform
  • Content is hard to distribute and track
  • Poor infrastructure utilization still persists
  • The emergence of Cloud-hosted K8s is exacerbating the challenges with managing containerized applications across multiple Clouds

Recently, Docker introduced new application management capabilities for Docker Enterprise Edition that allow organizations to federate applications across Docker Enterprise Edition environments deployed on-premises and in the cloud, as well as across cloud-hosted Kubernetes. This includes Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE). The federated application management feature automates the management and security of container applications on-premises and across Kubernetes-based cloud services. It provides enterprises a single management platform from which they can centrally control and secure the software supply chain for all their containerized applications.

With this announcement, undoubtedly Docker Enterprise Edition is the only enterprise-ready container platform that can deliver federated application management with a secure supply chain. Not only does Docker give you your choice of Linux distribution or Windows Server, the choice of running in a virtual machine or on bare metal, running traditional or microservices applications with either Swarm or Kubernetes orchestration, it also gives you the flexibility to choose the right cloud for your needs.

On the Kubernetes side, version 1.3 of the container management platform introduced cross-cluster federated services, with the ability to span workloads across clusters and, by extension, across multiple clouds. This opens up the possibility of workloads that need to draw resources from multiple clouds, and it also means that large jobs can be split among clouds. Not only that, the release introduced the ability to automatically scale services to match demand.

Increasing support for Docker and Kubernetes

Kubernetes has been enjoying widespread adoption among startups, platform vendors, and enterprises. Companies like Amazon, Google, IBM, Red Hat, and Microsoft offer managed Kubernetes under the Containers as a Service (CaaS) model. The open source ecosystem has dozens of players building various tools covering logging, monitoring, automation, storage, and networking aspects of Kubernetes. System integrators have dedicated practices and offerings based on Kubernetes. Global players like Uber, Bloomberg, Blackrock, BlaBlaCar, The New York Times, Lyft, eBay, Buffer, Squarespace, Ancestry, GolfNow, Goldman Sachs and many others are using Kubernetes in production at massive scale. According to Redmonk, a developer-focused research company, 71 percent of the Fortune 100 use containers and more than 50 percent of Fortune 100 companies use Kubernetes as their container orchestration platform.

Did you know there are 35 certified Kubernetes distributions, 22 certified Kubernetes hosting platforms, and 50 certified Kubernetes service providers available? Over the last three years, Kubernetes has been adopted by a vibrant, diverse community of providers. The Cloud Native Computing Foundation® (CNCF®), which sustains and integrates open source technologies like Kubernetes®, announced the availability of the Certified Kubernetes Conformance Program, which ensures that Certified Kubernetes™ products deliver consistency and portability, and that 35 Certified Kubernetes distributions and platforms are now available. A Certified Kubernetes product guarantees that the complete Kubernetes API functions as specified, so users can rely on a seamless, stable experience.

On the other hand, Docker Enterprise Edition (EE) 2.0 represents a significant leap forward in container platform solutions, delivering the only solution that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. One of the most promising features announced with this release is Kubernetes integration as an optional orchestration solution, running side by side with Docker Swarm. Not only that, this release includes Swarm Layer 7 routing improvements, registry image mirroring, Kubernetes integration with Docker Trusted Registry, and Kubernetes integration with Docker EE access controls. With this new release, organizations are able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining a consistent developer-to-IT workflow.

Enterprise Edition Platform

Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested, and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers. It is tightly integrated into the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.

v) Kubernetes vs Docker Swarm

  • Installation & Cluster configuration
  • GUI
  • Scalability
  • Auto-Scaling
  • Load Balancing
  • Rolling Updates & Rollbacks
  • Data Volumes
  • Logging & Monitoring

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was built by Google based on their experience running containers in production using an internal cluster management system called Borg. On the other hand, a Swarm cluster consists of Docker Engines deployed across multiple nodes: manager nodes perform orchestration and cluster management, while worker nodes receive and execute tasks.

Below are the major differences between Docker Swarm and Kubernetes:

  • Application deployment

Docker Swarm: Applications are deployed in the form of services (or "microservices") in a Swarm cluster. Docker Compose is the tool most commonly used to deploy an app.

Kubernetes: Applications are deployed as a combination of pods, Deployments, and Services (or "microservices").

  • Auto-scaling

Docker Swarm: An autoscaling feature is not available, either in classic Docker Swarm or in Docker Swarm Mode.

Kubernetes: An auto-scaling feature is available under K8s. It uses a simple number-of-pods target, defined declaratively using Deployments; a CPU-utilization-per-pod target is also available.

  • Rolling updates & rollbacks

Docker Swarm: Docker Swarm supports rolling updates. At rollout time, you can apply rolling updates to services; the Swarm manager lets you control the delay between deployments to different sets of nodes, updating one task at a time.

Kubernetes: Under Kubernetes, the Deployment controller supports both "rolling-update" and "recreate" strategies. Rolling updates can specify a maximum number of pods unavailable, or a maximum number running, during the process.

  • Networking

Docker Swarm: Under Docker Swarm Mode, a node joining the cluster creates an overlay network for services that spans all of the hosts in the Swarm, plus a host-only Docker bridge network for containers. By default, nodes in the Swarm cluster encrypt overlay control and management traffic between themselves; users can choose to encrypt container data traffic when creating an overlay network.

Kubernetes: Under K8s, the networking model is a flat network that enables all pods to communicate with one another. Network policies specify how pods communicate with each other. The flat network is typically implemented as an overlay.

  • Health checks

Docker Swarm: Health checks are limited to services. If a container backing the service does not come up (reach the running state), a new container is kicked off. Users can embed health-check functionality into their Docker images using the HEALTHCHECK instruction (see the sketch after this list).

Kubernetes: Under K8s, health checks are of two kinds: liveness (is the app responsive) and readiness (is the app responsive, but busy preparing and not yet able to serve).

  • Logging & Monitoring

Kubernetes: Out of the box, K8s provides a basic logging mechanism to pull aggregate logs for the set of containers that make up a pod.
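As an illustration of the Docker side, here is a hedged sketch of attaching a health check at run time, equivalent to baking a HEALTHCHECK instruction into the image (the curl endpoint is a placeholder, and curl is assumed to be available inside the image):

# Mark the container unhealthy if nginx stops answering locally
$ docker run -d --name hellowhale \
    --health-cmd="curl -f http://localhost/ || exit 1" \
    --health-interval=30s --health-retries=3 \
    hellowhale

# Inspect the current health status
$ docker inspect --format '{{.State.Health.Status}}' hellowhale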


Ajeet Singh Raina

Blog Author

Ajeet Singh Raina is a Docker Captain and a {code} Catalyst by Dell EMC. He is currently working as a Technical Lead Engineer in the Enterprise Solution Group at Dell R&D. He has over 10 years of solid understanding of a diverse range of IT infrastructure, systems management, systems integration, and quality assurance. He is a frequent blogger at www.collabnix.com and has contributed 150+ blogs on new and upcoming Docker releases and features. His personal blog attracts thousands of visitors and page views every month. His areas of interest include Docker Swarm Mode, IoT, and legacy applications & cloud.
