
Docker Vs Virtual Machines(VMs)


Let's have a quick warm-up on resource management before we dive into the discussion on virtualization and Docker.

In today's multi-technology environments, working on different software and hardware platforms simultaneously is inevitable.

The need to run many different machine platforms (desktops, laptops, handhelds, and servers) with customized hardware and software requirements has given rise to the world of virtualization in the IT industry.

What does a machine need?

Each computing environment (machine) needs its own set of hardware and software resources.

As more and more machines are needed, building and administering many such stand-alone machines is not only cumbersome and time-consuming but also adds to the cost and energy consumed.

A better idea is to consolidate all the hardware and software requirements into one place: run a customized, high-powered, scalable server and have that single server distribute resources to many machines over a network.

That saves time, resources, energy, and money.

A Server with many hardware components installed in a datacenter

These gigantic servers are housed in a dedicated facility called a datacenter.

Diagram (2) below shows a single server serving and sharing resources and data among multiple client machines.

Single server sharing data with many machines

Does this look simple enough? Yes, of course!

So, this setup looks feasible: we have a high-power, high-storage server that provides resources to many smaller machines over a network.

How to manage huge data: more servers

With the Internet of Things booming, information systems are overflowing with data; handling this data needs more system resources, which means more dedicated servers are needed.

Many servers for different computing needs

The challenge with the many-servers approach:

Running several dedicated servers for specific services such as a web service, an application service, or a database service, as indicated in Diagram (3), is difficult to administer; it consumes more energy, resources, and manpower, and is highly expensive.

In addition, resource utilization of these servers is very poor, resulting in resource wastage.

This is where simulating different environments and running them all on a single server is a smart choice, rather than running multiple physically distinct servers.

This is how Diagram (3) would change after consolidating different servers into one as shown in Diagram (4).


Servers after virtualization


Virtualization

What is Virtualization?

The single-server implementation above can be described by the following term.

Virtualization is a technique that makes a single infrastructure resource (hardware and software) act as many, providing multiple functionalities or services without the need to physically build, install, and configure separate machines.

In other words;

Running multiple simulated environments on a single machine, without physically installing and configuring each of them, is called virtualization.

Technically speaking;

Virtualization is an abstraction layer that shares the infrastructure resources among various simulated virtual machines without the need to physically set up these environments.

A single machine running multiple operating systems

Diagram (5) shows different virtual operating systems running on the same machine and using the hardware of the underlying machine.

What is a Virtual machine?

The simulated virtualized environments are called virtual machines, or VMs.

A virtual machine is a replication/simulation of an actual physical machine.

A VM acts like a real physical machine and uses the physical resources of the underlying host OS.

A VM is a running instance of a real physical machine.

Need for virtualization

Now that we have an overview of virtualization, let us examine when we should virtualize and what the benefits of virtualization are.

  1. Better resource management and cost-effectiveness: as indicated in Diagram (6) and Diagram (7), hardware resources are distributed wisely, on a need basis, to different environments; all the virtual machines share the same resources, which reduces resource wastage.
  2. Ease of quick administration and maintenance: It is easier to build, install, and configure one server than multiple servers. Applying a patch to various machines from a single virtualized server is also much more feasible.
  3. Disaster recovery: Since all the virtualized machines reside on the same server and are treated as mounted volumes of data files, it is easier to back up these machines. In the event of a disaster (power failure, network outage, cyber-attack, failed test code, etc.), VM snapshots are used to recover the running state of the machine, and the whole setup can be rebuilt within minutes.
  4. Isolated and independent secure test environment: virtualization provides an isolated, independent virtual test environment in which to test legacy code, a vendor-specific product, a beta release, or even corrupt code without affecting the main hardware and software platform. (This is a somewhat contradictory claim; we will discuss it more under types of virtualization.)
    Test environments such as dev, UAT, pre-prod, and prod can be easily created, tested, and discarded.
  5. Easily scalable and upgradable: Building up more simulated environments simply means spinning up more virtual machines. Upgrading VMs is as simple as running a patch across all VMs.
  6. Portable: Virtual machines are lightweight compared to the actual physical machines they run on; in addition, a VM that includes its own OS, drivers, and other installation files is portable to any machine. One can access the data virtually from any location.

Resource management

The Activity Monitor screenshot below compares the CPU load:

Percentage of CPU resources without and with OS virtualization

Implementation 

a) What is a hypervisor and what are its types?

As discussed in the previous section, virtualization is achieved by means of a virtualization layer on top of a hardware or software resource.

This abstraction layer is called a hypervisor.

A hypervisor is also known as a virtual machine monitor (VMM).

There are two types of hypervisors, as shown in Diagram (8):

  1. Type-1 or bare-metal hypervisor
  2. Type-2 or hosted hypervisor

Type-1 or bare-metal hypervisor is installed directly on the system hardware, thus abstracting and sharing the hardware components with the VMs.

A Type-2 or hosted hypervisor is installed on top of the system's bootable OS, called the host OS; this hypervisor abstracts the system resources visible to the host OS and distributes them among the VMs.

Both have their own role to play in virtualization.

b) Comparing hypervisor types

  • Installation: A Type-1 hypervisor is installed directly on the infrastructure; it is OS-independent and more secure against software issues. A Type-2 hypervisor is installed on top of the host OS and is more prone to software failures.
  • Resource access: Type-1 has direct access to the hardware infrastructure (hard-drive partitions, RAM, embedded cards such as NICs), giving its VMs better resource flexibility and scalability, with resources assigned on a need basis. Type-2 has access only to the resources exposed by the host OS, so its VMs have limited access to the hardware.
  • Failure impact: With Type-1, the hypervisor is a single point of failure; a compromised VM may affect the kernel, so extra security layers are needed. With Type-2, a compromised VM may affect only the host OS; the kernel remains unreachable.
  • Latency: Type-1 has low latency due to its direct link to the infrastructure. Type-2 has higher latency, as all VMs must go through the host OS layer to access system resources.
  • Typical use: Type-1 is generally used on servers; Type-2 is generally used on small client machines.
  • Cost: Type-1 is expensive; Type-2 is less expensive.

Type-1 hypervisors in the market:

VMware ESX/ESXi

HyperKit (macOS)

Microsoft Hyper-V (Windows)

KVM (Linux)

Oracle VM Server

Type-2 hypervisors in the market:

Oracle VM VirtualBox

VMware Workstation

Parallels Desktop for Mac

Type-1 and type-2 hypervisor

Types of virtualization

Based on what resource is virtualized, there are different classifications of virtualization.

Commonly virtualized resources include servers, storage devices, operating systems, and networks.

Desktop virtualization: The entire desktop environment is simulated and distributed to run from a single server. Desktop virtualization allows administrators to manage, install, and configure similar setups on many machines. Upgrading all the machines with a single patch or security update becomes easier and faster.

Server virtualization: Many dedicated servers can be virtualized into a single server that provides multi-server functionality.

Example: many virtual machines can be built up sharing the same underlying system resources (storage, RAM, disks, CPU).

Operating system virtualization: This happens at the kernel level rather than through a hypervisor on the hardware; one machine can boot up and run multiple operating systems, such as Windows and Linux, side by side.

Application virtualization: Apps are packaged and stored in a virtual environment and are distributed across different VMs. Examples: Microsoft applications such as Excel, Word, and PowerPoint; Citrix applications.

Network functions virtualization: Physical network components such as NIC cards, switches, routers, servers, hubs, and cables are all consolidated in a single server and used virtually by multiple machines, without the burden of installing them on every machine.

Virtualization is one of the building blocks and driving force behind cloud computing.

Cloud computing provides virtualized, need-based services, which has given a further lift to the concept of virtualization.

The various cloud computing models/services are listed briefly below:

SaaS (Software as a Service): end-user applications are maintained and run by the service provider and are easily distributed to and used by end users without having to install them.

Top SaaS providers: Microsoft (Office suite, CRM, SQL Server databases), AWS, Adobe, Oracle (ERP, CRM, SCM), Cisco's Webex, GitHub (git hosting web service)

PaaS (Platform as a Service): the computing platform (hardware/software) is maintained and updated by the service provider, and users just have to run their product on top of it.

Top PaaS providers: AWS Elastic Beanstalk, Oracle Cloud Platform (OCP), Google App Engine

IaaS (Infrastructure as a Service): provides infrastructure such as servers, physical storage, networking, and memory devices. Users can build their own platform with a customized operating system and applications.

Key IaaS providers: Amazon Web Services, Microsoft Azure, Google Compute Engine, Citrix

Conclusion:

We now have a fair understanding of types of virtualization and how they are implemented.

Containerization

Though virtualization has its pros, there are certain downsides, such as:

  • Not all systems can always be virtualized.
  • A corrupt VM is sometimes contagious and may affect other VMs, or the kernel in the case of a Type-1 (bare-metal) hypervisor.
  • Virtual disk latency grows as the payload on CPU resources increases with a higher number of VMs.
  • Unstable performance.

An alternative approach that overcomes these flaws of virtualization is to containerize the application and its run-time environment together.

What is containerization?

Containerization is OS-level virtualization, wherein the entire build of an application along with its run-time environment is encapsulated, or bundled up, into a package.

These packages are called containers.

Containers are lightweight virtualized environments that are independent of the underlying hardware and software infrastructure.

The run-time environment includes the operating system, binaries, libraries, configuration files and other applications as shown in Diagram (9).

Packaged code

What is Docker?

Docker provides an excellent framework for containerization and allows you to build, ship, and run distributed applications across multiple platforms.

The Docker framework is set up as a Docker engine installed on the host OS; a Docker daemon (background process) is started that manages the containers.

Docker architecture

Refer to Diagram (10), which shows a Docker engine with 3 containers residing on the host OS (macOS).

An instruction file called a Dockerfile is written with a set of commands that change the filesystem: adding, copying, or deleting files, running commands, installing utilities, making system calls, and so on.

This Dockerfile is built and packaged along with its run-time environment into a Docker image.
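To make this concrete, here is a minimal sketch of such a Dockerfile and its build step; the file contents and the image tag (myrepo/myapp:v1) are illustrative assumptions, not the exact files used later in this article.

Dockerfile:

# Hypothetical example: start from a base OS image, add tools and the app script
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y git vim
COPY myApp.sh /usr/src/myapp/myApp.sh
WORKDIR /usr/src/myapp
CMD ["./myApp.sh"]

Build it into an image and list the result:

docker build -t myrepo/myapp:v1 .
docker images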

The Docker daemon runs these images to create Docker containers.

A Docker container is a run-time instance of an image.

It is fair to say that many image layers (built from the instruction file) make up a container.

Docker containers are compactly packaged, and each container is well isolated.

We can run, start, stop, attach to, move, or delete containers, since they run as processes on the host OS.

Each image is made up of different layers; each layer is based on top of the previous one, with the customized command changes that we make.

Every time we make a change to the filesystem, the change is encapsulated in a new filesystem layer and stacked on top of the parent image.

Only the changed layers are rebuilt; the rest of the unchanged image layers are reused.

Certain Dockerfile instructions (ADD, RUN, and COPY) create a new layer with a non-zero byte size; the other instructions simply add a new layer of zero-byte size.

These layers are re-used when building a new image, which makes builds faster and images lightweight.
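You can inspect these layers, the instructions that created them, and their sizes with the docker history command; for example, against the image used in the walkthrough later in this article:

docker history divyabhushan/learn_docker:myApp_ubuntu_14.04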

Docker images are also version controlled: the layered approach, where every change to an image is stored as a new layer, makes it possible to version-control Docker images.

Here is a terminal recording that shows the Docker engine process and how images and containers are created.

Refer to the Docker documentation for creating containers.

The overall flow:

Code -> package -> build image -> push to registry hub -> download/pull image -> run container

Docker architecture
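As a hedged sketch, that flow maps onto the following command sequence; the repository name <your-id>/myapp and the tag v1 are placeholders:

docker build -t <your-id>/myapp:v1 .    # code + Dockerfile -> image
docker push <your-id>/myapp:v1          # image -> registry hub
docker pull <your-id>/myapp:v1          # on another machine: registry -> local image
docker run -it <your-id>/myapp:v1       # image -> running container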


Let's consider the Docker image divyabhushan/learn_docker hosted on Docker Hub.

Latest tagged image: centOS_release1.2

What is the container environment?
Base OS: Centos:7

Utilities: vim, yum, git

Apps/files: Dockerfile, myApp.sh, runtests.sh, data and other supporting files.

Git source code: dockerImages

Download as: git clone https://github.com/divyabhushan/DockerImages_Ubuntu.git

What does the container do?
The container launches "myApp.sh" in an Ubuntu:14.04 environment, runs some scripts along with a set of post-test suites inside the container, and saves the output log file.

How to modify and build your own app

Step 1: pull 

1.1: Pull the docker image

1.2: Run image to create a container and exit

Step 2: modify

2.1: Start the container

2.2: Attach to the container and make some changes

Step 3: commit

3.1: Examine the history logs and changes in the container

3.2: Commit the changes in container

Step 4: push

4.1: Push new image to docker hub

Let us see the steps in action:

Step 1: pull the docker image on your machine

1.1: Pull the docker image

Command:

docker pull divyabhushan/learn_docker:myApp_ubuntu_14.04

View the image on the system:

docker images


1.2: Run the image to create a container and exit

Command:

docker run -it --name ubuntu14.04 0a6f949131a6

Run commands in the Ubuntu container and exit; the container is stopped on exiting.

View the stopped container with the 'docker ps -a' command.

Step 2: modify

Start the container

Command:

docker start <container_id>

Now the container is listed as a running process

Attach to the container and make some changes

Command:

docker attach 7d0d0225778c

edit the ‘git configuration’ file and ‘myApp.sh’ script

Container is modified and stopped

Step 3: commit

Examine the history logs and changes in the container

The changes done inside the container filesystem can be viewed using the ‘docker diff’ command as:

Command: 

docker diff 7d0d0225778c

Commit the changes in container

Docker commit:

Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

docker commit -m 'new Ubuntu image' 7d0d0225778c divyabhushan/learn_docker:ubuntu14.04_v2

New image is created and listed

Step 4: push

Push new image to docker hub

Command:

docker push divyabhushan/learn_docker:ubuntu14.04_v2

Point to note: just the latest commit change layer ‘50a5ce553bba’ has been pushed, while the other layers were re-used.

Image available on docker hub:

The newly tagged image can now be pulled from other machines and run to create the same container environment.

Conclusion: An image was pulled and run to create a container that replicates the environment. The container was modified, and the changes were committed to form a new image. The new image was pushed back to Docker Hub and is now available as a new tag, ready to be pulled by other machines.

Differences between Docker and virtual machines

Tabular differences on various parameters

  • Architecture: VMs use hardware-level virtualization; each VM has its own copy of an OS. Docker uses software-level (OS-level) virtualization; containers have no OS of their own and run on the host OS.

  • Isolation: VMs are fully isolated. Docker provides process- or application-level isolation.

  • Installation: A hypervisor can run directly on the hardware resources or on the host OS. The Docker engine is installed on top of the host OS and a Docker daemon process is started; there is no separate OS for every container.

  • CPU processing and performance: VMs are slower; a VM contains the entire run-time environment, which has to be loaded every time, uses more CPU cycles, and gives unstable performance. Docker is faster; images are pre-built and share host resources, so running an image as a container is lightweight, consumes fewer CPU cycles, and gives stable performance.

  • Hardware storage: VMs need more storage space since each VM is an independent machine with its own OS; for example, 3 VMs of 800 MB each will take 2.4 GB of space. Docker containers are lightweight since they do not need to load an OS and drivers; they run on the host OS as processes.

  • Portability: Dependency on the host OS and hardware makes a VM less portable; importing a VM still requires manual setup of storage, RAM, and network. Docker containers are highly portable, since they are lightweight and have no dependency on the hardware.

  • Scalability and code reusability: Spinning up more VMs still needs administrative tasks such as distributing resources to each VM; running a new machine puts extra load on system resources, and re-managing the earlier VMs becomes a task in itself. Every VM keeps its own copy of resources, which means poor code reusability. Spinning up new Docker containers simply means running pre-built images as processes inside the host OS; containers can also be configured on the fly by passing parameters at run time, and a single image can be run to create many containers, which encourages code reusability.

  • Resource utilization: With VMs, static allocation results in resource wastage when VMs sit idle or when a VM's resource requirement increases. With Docker, resources are dynamically allocated and de-allocated on a need basis by the Docker engine.

  • Pruning or garbage collection: Virtual machines do not have a built-in prune mechanism and have to be administered manually. Docker images and containers can be pruned, which frees up a sensible amount of storage, memory, and CPU cycles (see the prune commands shown after this table).

  • New environment: Creating a new VM from scratch is a tedious, repetitive task; it involves installing a new OS, loading kernel drivers, and configuring other tools. With Docker, you package the code and dependency files, build them into an image, and run the image to create a new container; an existing or base image (for example, the scratch base image on Docker Hub) can be used to create more containers on the go.

  • Web-hosted hub: There is no web-hosted hub for VMs. Docker Hub provides an open, reliable, trusted source of pre-built images that can be downloaded to run new containers.

  • Version control (backup, restore, track history): Snapshots of VMs are not very user-friendly and consume more space. Docker images are version controlled; every delta difference in each container can easily be viewed (demo: docker diff <container_id>), any change in the image is stored as a different layered version, and references to older image layers save build time and space.

  • Auto-build: Automating the creation of VMs is not very feasible. Docker images can be auto-built from every source code check-in to GitHub (automated builds on Docker Hub).

  • Disaster recovery: It is tedious to recover from VM backup files. It is easier to restore Docker images, much like versioned git source files, when the images are version controlled; backup images only have to be run to create containers.

  • Update: Every VM has to be updated with the release patch. With Docker, a single image is updated, re-built, and distributed across multiple platforms.

  • Memory usage and speed: VMs are slower, since an entire snapshot of the machine and its OS is loaded into memory. Docker is real-time and fast; images are pre-built, and only an instance (a container) has to be run as a process, using memory like an executable.

  • Data integrity: A VM's behavior may change if a dependency extends beyond the VM boundary (for example, an app that depends on the production host's network settings). Docker apps behave the same in any environment.

  • Security: VMs are more secure; a failure inside a VM may reach its guest OS but not the host OS or other virtual machines (a Type-2 hypervisor, though, carries a risk of kernel attack). Docker is less secure; if a container is compromised, the underlying OS, and hence all the containers, may be affected, since they share the same host kernel.

  • Key providers: For VMs: Red Hat KVM, VMware, Oracle VM VirtualBox, Microsoft Hyper-V, Citrix XenServer. For containers: Docker, Google Kubernetes Engine, AWS Elastic Container Service.

  • Data authentication: VMs involve a lot of software licenses. Docker maintains built-in content trust to verify published images.

Architecture comparison
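For reference, the pruning mentioned in the table uses standard Docker CLI commands; docker system prune removes stopped containers, unused networks, and dangling images, while docker image prune --all also removes unused (not just dangling) images:

docker system prune
docker image prune --all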

When to use a VM or Docker

When the need is an isolated OS, go for VMs.

For a hardware- and software-independent isolated application that needs fast distribution across multiple environments, use Docker.

  • Docker use-case:

Example: A database application along with its database

Consider the Docker image for Oracle WebLogic Server on Docker Hub.

This image is a pre-built Oracle WebLogic Server runtime environment, including Oracle Linux 7 and Oracle JDK 8, for deploying Java EE applications.

To create a server configuration on any machine, just download this image and run it to create and start a container.

There is no need to install and configure the JDK, Linux, or any other part of the run-time environment.
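In sketch form, that is just a pull and a run; the repository path and tag below are placeholders, since the exact location of the Oracle WebLogic image may vary:

docker pull <weblogic-image-path>:<tag>
docker run -d -p 7001:7001 <weblogic-image-path>:<tag>    # 7001 is the default WebLogic admin port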

  • When not to use Docker (use-case):

The application depends on a utility outside the Docker container.

Code is developed on a dev machine whose base OS is macOS, but it needs certain firewall settings available only on, say, Ubuntu.

How can the code be tested against the production Ubuntu firewall while running from a Docker container on macOS?

Solution: Install virtualization software on the macOS host and create a VM running Ubuntu (the same as the production environment).

Configure the desired firewall settings on the Ubuntu VM, import the test code into Ubuntu, and test.

  • Use a VM:

For embedded systems programming, a VM is installed that connects to the system's device drivers, controllers, and kernel.

  • Virtualization used along with Docker:

An extension of the previous scenario: suppose you also want to test your Python application inside the Ubuntu VM without having to set up the Python executable, its libraries, and its binaries.

All you have to do is install the Docker engine for Ubuntu and pull the Python image from Docker Hub:

docker pull python:<tag> [the tag is the Python version; choose the appropriate version]

docker pull python:2.7

Refer: Python image

Either write a Dockerfile to copy the entire source code into the Python environment (see the sketch after the command options below), or directly run the image, passing the script path as shown:

Command:

docker run -it --name my-python-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2.7 python my-application.py

Command options:

-v: bind-mount a volume (here, mount the present working directory onto /usr/src/myapp inside the container)

-w: set the working directory inside the container
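For the Dockerfile route mentioned above, a minimal sketch could look like the following; the script name my-application.py comes from the command above, while the image name my-python-app is an assumption for illustration.

Dockerfile:

FROM python:2.7
WORKDIR /usr/src/myapp
COPY . /usr/src/myapp
CMD ["python", "my-application.py"]

Commands:

docker build -t my-python-app .
docker run --rm my-python-app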

Moreover, you can test your Python code against more than one version by downloading different Python images, creating a container from each, and running your app in each container, as sketched below.
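A rough sketch of that multi-version check, looping over Python image tags (2.7 from above; 3.6 and 3.7 as example additions):

for v in 2.7 3.6 3.7; do
  docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:$v python my-application.py
done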

What's exciting here is that once the code has been tested in each Python environment, you can quickly act on the test results and drop the containers, deploying to production only after the code has been tested against the various Python versions.

Final thoughts

VMs and Docker are compatible with each other; Docker is not here to replace virtual machines.

Both serve the same purpose of virtualizing the computing and infrastructure resources for optimized utilization.

Using virtual machines and Docker together can yield better results in virtualization.

When you need a fast, lightweight, portable, and highly scalable hardware-independent environment to isolate multiple applications, and security is not the major concern, Docker is the best choice.

Use a VM for embedded systems work that is integrated with the hardware, such as device driver or kernel coding.

For a scenario simulating an infrastructure setup that needs tight resource control and depends heavily on system resources, VMs are the better choice.

Using Docker inside a VM

CI/CD pipelines scenario:

Virtualization enables a smooth CI/CD process flow by letting users concentrate on developing code on a working system that is set up for automated continuous integration and deployment, without having to duplicate the entire setup each time.

A virtualized environment is set up, either as a VM or from a Docker image, that takes care of automatic code check-ins, builds, regression testing, and deployments on the server.
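As a rough, hedged sketch, the Docker-related steps of such a pipeline often reduce to a short sequence run by the CI server on every check-in; the registry host, image name, and $BUILD_NUMBER variable are placeholders, and runtests.sh is the test script named earlier in this article:

docker build -t registry.example.com/myapp:$BUILD_NUMBER .                 # build the image on every check-in
docker run --rm registry.example.com/myapp:$BUILD_NUMBER ./runtests.sh     # run regression tests inside the container
docker push registry.example.com/myapp:$BUILD_NUMBER                       # publish the tested image for deployment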


Join the Discussion

Your email address will not be published. Required fields are marked *

2 comments

saurabh 13 May 2019

Well Written, thanks

Navneet 14 May 2019

Excellent article ... concept of dockers is well articulated and explained.

Suggested Blogs

How to Become a DevOps Engineer

Who is DevOps engineer?        DevOps engineers are a group of influential individuals who encapsulates depth of knowledge and years of hands-on experience around a wide variety of open source technologies and tools. They come with core attributes which involve an ability to code and script, data management skills as well as a strong focus on business outcomes. They are rightly called “Special Forces” who hold core attributes around collaboration, open communication and reaching across functional borders.DevOps engineer always shows interest and comfort working with frequent, incremental code testing and deployment. With a strong grasp of automation tools, these individuals are expected to move the business quicker and forward, at the same time giving a stronger technology advantage. In nutshell, a DevOps engineer must have a solid interest in scripting and coding,  skill in taking care of deployment automation, framework computerization and capacity to deal with the version control system.Qualities of a DevOps Engineer Collated below are the characteristics/attributes of the DevOps Engineer.Experience in a wide range of open source tools and techniquesA Broad knowledge on Sysadmin and Ops rolesExpertise in software coding, testing, and deploymentExperiences on DevOps Automation tools like Ansible, Puppet, and ChefExperience in Continuous Integration, Delivery & DeploymentIndustry-wide experience in implementation of  DevOps solutions for team collaborationsA firm knowledge of the various computer programming languagesGood awareness in Agile Methodology of Project ManagementA Forward-thinker with an ability to connect the technical and business goals     Demand for people with DevOps skills is growing rapidly because businesses get great results from DevOps. Organizations using DevOps practices are overwhelmingly high-functioning: They deploy code up to 30 times more frequently than their competitors, and 50 percent fewer of their deployments fail.What exactly DevOps Engineer do?DevOps is not a way to get developers doing operational tasks so that you can get rid of the operations team and vice versa.  Rather it is a way of working that encourages the Development and Operations teams to work together in a highly collaborative way towards the same goal. In nutshell, DevOps integrates developers and operations team to improve collaboration and productivity.The main goal of DevOps is not only to increase the product’s quality to a greater extent but also to increase the collaboration of Dev and Ops team as well so that the workflow within the organization becomes smoother & efficient at the same time.DevOps Engineer has an end-to-end responsibility of the Application (Software) right from gathering the requirement to development, to testing, to infrastructure deployment, to application deployment and finally monitoring & gathering feedback from the end users, then again implementing the changes. These engineers spend more time researching new technologies that will improve efficiency and effectiveness.They Implement highly scalable applications and integrate infrastructure builds with application deployment processes. Let us spend some time in understanding the list of most important DevOps Engineers’ roles and responsibilities.1) The first and foremost critical role of a DevOps Engineer is to be an effective communicator i.e Soft Skills. A DevOps Engineer is required to be a bridge between the silos and bring different teams together to work towards a common goal. 
Hence, you can think of DevOps Engineers as “IT Project Managers”. They typically work on a DevOps team with other professionals in a similar role, each managing their own piece of the infrastructure puzzle.2) The second critical role of DevOps Engineer is to be Expert Collaborators. This is because their role requires them to build upon the work of their counterparts on the development and IT teams to scale cloud programs, create workflow processes, assign tenants and more.3) Thirdly, they can be rightly called “Mentors” as they spend most of the time in mentoring and educating software developers and architecture teams within an organization on how to create software that is easily scalable. They also collaborate with IT and security teams to ensure quality releases.Next, they need to be a “customer-service oriented” individuals. The DevOps Engineer is a customer-service oriented, team player who can emerge from a number of different work and educational backgrounds, but through their experience has developed the right skillset to move into DevOps.The DevOps Engineer is an important IT team member because they work with an internal customer. This includes QC personnel, software and application developers, project managers and project stakeholders usually from within the same organization. Even though they rarely work with external customers or end-users, but they keep close eye on  a “customer first” mindset to satisfy the needs of their internal clients.Not to miss out, DevOps engineer holds broad knowledge and experience with Infrastructure automation tools. A key element of DevOps is automation.  A lot of the manual tasks performed by the more traditional system administrator and engineering roles can be automated by using scripting languages like Python, Ruby, Bash, Shell, Node.js. This ensures a consistent performance of manual tasks by removing the human component and allowing teams to spend the saved time on more of the broader goals of the team and company.Hence, a DevOps engineer must possess the ability to implement automation technologies and tools at any level, from requirements to development to testing and operations.Few of other responsibilities of DevOps Engineer include -Manage and maintain infrastructure systemMaintaining and developing highly automated services landscape and open source servicesTake over the ownership for integral components of technology and make sure it grows aligned with company successScale systems and ensure the availability of services with developers on changes to the infrastructure required by new features and products.How to become a devops engineer?DevOps is less about doing things a particular way, and more about moving the business forward and giving it a stronger technological advantage. There is not a single cookbook or path to become a devops professional . It's a continuous learning and consulting process . Every DevOps tasks have been originated from various development , testing , ops team  consulting through consultants and running pilots, therefore it’s hard to give a generic playbook for how to get it implemented. Everyone should start with learning about the values, principles, methods, and practices of DevOps and trying to share it via any channel  and keep learning.Here’s my 10 golden tips to become a DevOps Engineer:    1.  Develop Your Personal Brand with Community Involvement    2. Get familiar with IaC(Infrastructure-as-Code) - CM    3. Understand DevOps Principles & Frameworks    4. Demonstrate Curiosity & Empathy    5. 
Get certified on Container Technologies - Docker | Kubernetes | Cloud    6. Get Expert in Public | Private | Hybrid Cloud offerings    7. Become an Operations Expert before you even THINK DevOps    8. Get Hands-on with various Linux Distros & Tools    9. Arm Yourself with CI-CD, Automation & Monitoring Tools (GitHub, Jenkins, Puppet, Ansible etc.)    10. Start with Process Re-Engineering and Cross-collaboration within your teams.Skills that a DevOps engineer needs to have If you’re aiming to land a job as a DevOps engineer in 2018, it’s not only about having one deep specialized skill but about understanding how a variety of technologies and skills come together.One of the things that makes DevOps challenging to break into is that you need to be able to write code, and also to work across and integrate different systems and applications. Based on my experience, I have finalized a list of the top 5 skill sets you need to be a successful DevOps engineer:#1 - SysAdmin with Virtualization ExperienceDeployment is a major requirement in a DevOps role, and ops engineers are good at it. What is needed is knowledge of a deployment automation engine (Chef, Puppet, Ansible) and its use-case implementations. Nowadays, most public clouds run multiple flavors of virtualization, so 3-5 years of virtualization experience with VMware, KVM, Xen or Hyper-V is also required.#2 - Solution Architect RoleAlong with deployment or virtualization experience, a broad understanding and implementation of hardware technologies such as storage and networking is a must. There is very high demand for people who can design a solution that scales and performs with high availability and uptime while consuming a minimal amount of resources (maximum utilization).#3 - A Passionate Programmer/API ExpertiseBash, PowerShell, Perl, Ruby, JavaScript, Go, Python etc. are a few of the popular scripting languages one needs expertise in to become an effective DevOps Engineer. A DevOps engineer must be able to write code to automate repeatable processes, and needs to be familiar with RESTful APIs (a short scripting sketch is included at the end of this article).#4 - Integration Skillset around CI-CD toolsA DevOps engineer should be able to use all this expertise to integrate open source tools and techniques into an environment that is fully automated and integrated. The goal should be zero manual intervention from source code management to the deployment stage, i.e. Continuous Integration, Continuous Delivery and Continuous Deployment.#5 - Bigger Picture & Customer FocusWhile the strong focus on coding chops makes software engineering a natural path to a career in DevOps, the challenge for candidates coming from this world is that they need to prove that they can look outside their immediate team and project. 
DevOps engineers are responsible for facilitating collaboration and communication between the Development and IT teams within an organization, so to succeed in an interview, you’ll need to be able to demonstrate your understanding of how disparate parts of the technical organization fit and work together.In nutshell, all you need are the list of tools and technologies listed below -Source Control (like Git, Bitbucket, Svn, VSTS etc)Continuous Integration (like Jenkins, Bamboo, VSTS )Infrastructure Automation (like Puppet, Chef, Ansible)Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)Container Concepts (LXD, Docker)Orchestration (Kubernetes, Mesos, Swarm)Cloud (like AWS, Azure, Google Cloud, Openstack)What are DevOps certifications available in the market? Are they really useful?In 2018, DevOps professionals are in huge demand. The demand for DevOps professionals in the current IT marketplace has increased exponentially over the years. A certification in DevOps is a complete win-win scenario, with both the individual professional and the organization as a whole standing to gain from its implementation. Completing a certification in the same will not only provide added value to one’s profile as an IT specialist but also advance career prospects faster than would usually be possible.The certifications related to DevOps are categorized into         1)  Foundation,         2) Certified Agile Process Owner &         3) Certified Agile Service ManagerThe introductory DevOps Certification is Foundation and certified individuals are able to execute the concepts and best practices of DevOps and enhance workflow and communication in the enterprise.Yes, these DevOps  certifications hold numerous benefits in the following ways:1. Better Job OpportunitiesDevOps is a relatively new idea in the IT domain with more businesses looking at employing DevOps processes and practices. There is a major gap between the demand for DevOps Certified professionals and the availability of the required DevOps professionals. IT professionals can take advantage of this huge deficit in highly skilled professionals by taking up a certification in DevOps for validation of DevOps skill set. This will ensure and guarantee much better job options.2. Improved Skills & KnowledgeThe core concept of DevOps revolves around brand new decision-making methods and thought processes. DevOps comes with a host of technical and business benefits which upon learning can be implemented in an enterprise. The fundamentals of DevOps consist of professionals working in teams of a cross-functional nature. Such teams consist of multi-disciplinary professionals ranging from business analysts, QA professionals, Operation Engineers, and Developers.3. Handsome SalaryRapid penetration of DevOps best practices in organizations and their implementation in the mentioned organizations is seeing massive hikes in the pay of DevOps professionals.This trend is seen to be consistent and sustainable according to industry experts the world over. DevOps professionals are the highest paid in the IT industry.4. Increased Productivity & EffectivenessConventional IT workplaces see employees and staff being affected by downtime which can be attributed to waiting for other employees or staff and other software and software related issues. The main objective of an IT professional at the workplace would be to be productive for a larger part of the time he/she will spend at the workplace. 
This can be achieved by minimizing the time spent waiting for other employees or software products and eliminating the unproductive and unsatisfying parts of the work process. This will boost the effectiveness of the work done and will add greatly to the value of the enterprise and the staff as well.If you are looking out for the “official” certification programs for DevOps, below are some of the useful links:1) AWS Certified DevOps Engineer - Professional2) Azure certifications | Microsoft3) Google Cloud Certifications4) Chef Certification5) Red Hat Certificate of Expertise in Ansible Automation6) Certification - SaltStack7) Puppet certification8) Jenkins Certification9) NGINX University10) Docker - Certification11) Kubernetes Certified Administrator12) Kubernetes Certified Application Developer13) Splunk | Education Programs14) Certifications | AppDynamics15) New Relic University Certification Center16) Elasticsearch Certification Programme17) SAFe DevOps courseDevOps engineer examBelow is a list of popular DevOps Engineer exams and certification details (exam, syllabus, training duration, minimal attempts, and re-take policy):
AWS Certified DevOps Engineer - Syllabus: AWS_certified_devops_engineer_professional_blueprint.pdf; Training duration: 3 months; Minimal attempts: no minimal requirement; Re-take: waiting period of 14 days before candidates are eligible to retake the exam, with no limit on exam attempts until the test taker has passed.
RHCA certification with a DevOps concentration (Red Hat Certified Architect: DevOps) - Syllabus: Red Hat Certificate of Expertise in Platform-as-a-Service, Red Hat Certificate of Expertise in Atomic Host Container Administration, Red Hat Certificate of Expertise in Containerized Application Development, Red Hat Certificate of Expertise in Ansible Automation, and Red Hat Certificate of Expertise in Configuration Management; Training duration: 3 days for each training course; Re-take: waiting period of 1 week.
Docker Certified Associate (DCA) Exam - Syllabus: DCA Exam; Minimal attempts: no minimal attempts; Re-take: wait 14 days from the day you fail to take the exam again.
Certified Kubernetes Administrator (CKA) Exam - Syllabus: CKA Exam; Training duration: 4-5 weeks; Minimal attempts: no minimal attempts; Re-take: wait 14 days from the day you fail to take the exam again.
Chef Certification Exam - Syllabus: Chef Cert Exam (link); Training duration: 8 hours; Re-take: minimal 1 week waiting time.
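Before moving on, here is a small, hypothetical Python sketch of the kind of repeatable task mentioned under the scripting and RESTful API skills above: polling the health endpoints of a few internal services and reporting the ones that do not respond. The service names and URLs are invented for illustration; a real script would read them from configuration and push the results into a monitoring or alerting tool.

import json
import urllib.request
import urllib.error

# Hypothetical internal services; in practice these come from a config file or inventory.
SERVICES = {
    "web-frontend": "http://web.internal.example.com/health",
    "orders-api": "http://orders.internal.example.com/health",
    "payments-api": "http://payments.internal.example.com/health",
}

def check(name, url, timeout=5):
    """Call the service's health endpoint and return True if it reports healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
            return resp.status == 200 and body.get("status") == "ok"
    except (urllib.error.URLError, ValueError):
        # Network failure or unexpected payload both count as unhealthy.
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        state = "UP" if check(name, url) else "DOWN"
        print(f"{name:15s} {state}")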
Top Devops Tools You Must Know

In the last decade for most of the enterprises, the term DevOps has transformed from just a buzzword to a way of working. The concept of DevOps originated in 2008 following a discussion on agile infrastructure by Patrick Debois and Andrew Clay Shafer. The idea started to gain momentum in 2009 after the first DevOpsDays in Belgium. What initially began as a practice to bring more efficiency in software infrastructure management, is now evolved into a continuous feedback model which has redefined every aspect of software development from requirement engineering to deployment. With this change, evolved new frameworks, practices and tools rooted in the core values of lean and agile. This paper discusses in detail the various tools that evolved during the DevOps movement. Readers would get a comprehensive understanding of what and where to apply these tools in their day to day DevOps journey.What is DevOps and what are DevOps tools?DevOps is a culture where active collaboration between development, operations, and business teams are achieved. It’s not all about tools and DevOps in an organisation is to create value to end customer respecting human all team members. Tools are only aids to build this culture. DevOps increases organizations capability to deliver high-quality products or services at a swift pace. It automates all processes starting from build to deployment phase of an application. There are many tools available in the market to help us achieve this.Why is DevOps needed?DevOps helps to remove silos in organisations and enable the creation of cross-functional teams, thus reducing reliance on any one person or team during the delivery process. Frequent communication between teams improves the confidence and efficiency of the team members. Through automation, DevOps team increase their productivity making satisfied customers. According to State of DevOps report 2016 “Teams that practice DevOps deploy 30x more frequently, have 60x fewer failures, and recover 160x faster“. It also provides better work environments with increased trust, better management of issues reducing unplanned works.How to implement DevOps?The “DevOps Handbook” defines the “Three Ways: The principles of underpinning DevOps” as a way to implement DevOps in large enterprises. In this session, we will detail these three ways and three core pillars.The First Way: Systems ThinkingThe First Way emphasizes the need for global optimisation as opposed to local optimisation, hence the focus is on optimising all business value streams enabled by IT.The Second Way: Amplify feedback loopsThe Second Way is about discovering and injecting right feedback loops so that necessary corrections can be made before it’s too late.The Third Way: Culture of Experimentation and learningThe third way is all about creating the right culture that fosters two things, continual experimentation and learning from failures. It emphasises the understanding that repetition and practice make teams perfect.While the three ways focus on the key principles, we also have three pillars which are keys to any successful DevOps adoption.The three pillars of any DevOps adoption are,Culture and PeopleTools and TechnologyProcesses and practicesImportant DevOps practicesContinuous IntegrationContinuous integration is a software engineering practice where software development team members frequently merge and build their code changes. The key benefit is to detect and fix code merge conflicts and integration bugs in the early stages of software development. 
Hence reducing the cost to detect and fix the issues.Continuous DeliveryContinuous delivery is a software engineering practice in which changes are automatically built, tested, and made release ready to production. In order to get into a continuous delivery state, it is very crucial to define a test strategy. The main goal is to identify functional and non-functional defects at a much earlier stage thus reducing the cost to fix defects. It also enables teams to come up with working software as defined in the agile manifesto. Continuous delivery as a practice depends on continuous integration and test automation. Hence it is crucial that teams need to ensure that they practice continuous integration along with test automation religiously, to effectively practice continuous delivery.Continuous DeploymentContinuous deployment is a software engineering practice in which codes committed by the developers are automatically built, tested and deployed to production. Continuous deployment as a practice, require that teams have already adopted continuous integration and continuous delivery approach. The primary advantage of this practice is reducing time-to-market and early feedback from users.Continuous TestingContinuous Testing can be defined as a software testing practice that involves a process of testing early, testing often and test automation. The primary goal of Continuous Testing is to shift left the test phase as much as possible to identify defects and reduce the cost of fixing.MicroservicesMicroservices architecture helps to create an application as a set of small services independent of each other. Any language could be used to create microservices and typically an HTTP based API is used to interact between services. Microservices as a design approach helps to achieve fewer risk deployments and enables continuous delivery.Infrastructure as codeInfrastructure as a code is an engineering practice in which infrastructure is developed and managed through code. Thus creating a consistent, reproducible and versioned infrastructure. Since the infrastructure is implemented as the code it’s easy for the team members to update and change it. Infrastructure as a code no more considers scaling as a major problem.Policy as codePolicy as a code is a software engineering practice where compliance rules or policies of the organisation could be monitored and verified. Policy as code enables organizations to enforce the compliance rules more strictly and helps to bring the non-compliant resource into compliance mode. This practice gained importance during the DevSecOps movement.Continuous Monitoring and LoggingMonitoring and logging as a best practice to help organizations to analyse the products’ end user experience. This helps the software teams to get to know about the root cause of the defects and latencies in the software development process. More transparency into the actions performed by the team members causes increased responsibility among the teams causing increased performance.Communication and collaborationEffective communication and collaboration are one of the key values emphasised by DevOps.Devops tools in the field of communication and collaborations bring together collective responsibility for the products delivered.   Major DevOps tool categoriesCollaboration Tools :DevOps teams rely on regular feedback and constant communication. Hence traditional email communication mechanism becomes less effective. 
Thus DevOps teams rely on more integrated collaboration suites that help in continuous communication and feedback loops. Some of these new generation collaboration tools include Slack, Teams, CA Flowdock etc.SlackSlack is a messaging tool for the teams providing a common place for all communications. We can set different channels for different kinds of work. Voice and video call options are also available with Slack. Atlassian and Slack have created a partnership and will be discontinuing other collaboration tools like Hipchat and Stride and will provide migration to Slack.Availability: Free version with limited features are available for users.For more details click here  CA FlowdockCA Flowdock is yet another collaboration tool from CA Technologies. It brings all conversations, chats, work items, etc to one place making it easier to prioritize work and solve problems.Availability: CA Flowdock is free for up to 5 member teams and free for non-profit organizations and student projects.Learn more about CA Flowdock here.     TeamsTeams is a unified communication platform by Microsoft. Teams combine workplace chats, video meetings, file storage, and application integration. The service also integrates with the company's existing Office 365 productivity suite and features extensions to integrate with non-Microsoft products and features.Availability: Teams is free for a small number of users.Learn more about Teams here.SL NoTool NameProsConsAvailability1.SlackIntuitiveSaaS productGood integration with other toolsThe video conferencing feature is not as great as its competitorsFreemium2.CA FlowdockEasy to configureIntegration with tools beyond CA tools is to be improvedFreemium for small users3.TeamOne stop shop -  Integrates file sharing, messaging, meetings, and other tools.Still early and could be a little buggyFreemium for small usersApplication Life Management and Issue Tracking ToolsALM and planning tools help team members to plan their iterations by constantly getting feedback from the customers and prioritizing them. This helps to achieve visualisation of the works in hand, share plans, and track the progress. These tools make sure that all the team members are heard and addressed. Customer feedback is taken seriously and increases the responsiveness within the team. The tools enable teams to identify and track dependencies. It helps teams to plan their releases and sprints in a systematic way. Issue tracking tools enable features like auto triaging and assignment. Some of the tools are:JIRAJIRA is an issue tracking and project management tool from Atlassian. It could be used by small or large companies. Kanban and scrum boards which are simple and flexible are available with JIRA. It’s not free software.Availability: The pricing varies with the number of users.Learn more about JIRA here.  Mantis Bug TrackerMantis BT is an open source web-based issue tracker. It’s simple to use dashboard, helps to assign issues to developers and keep track of the issue progress. It is empowered with a built-in time tracking mechanism that helps the user to analyse the time spent by a developer on an issue.Availability: Paid version is available.For more details click here.TrelloTrello is a free project collaboration tool. It helps to manage projects with it’s simple and easy to work for boards. All tasks are defined as individual cards. 
These cards can be moved around helping the teams to visualise the work in progress.Click here for more information.CollabNet VersionOneCollabNet VersionOne is agile management  It helps in collaboration between teams at all levels to have a unified vision for software delivery.For more details about CollabNet VersionOne click here.  RallyRally is formerly known as CA Agile Central. It provides a platform to plan, track, prioritize work collaboratively. Thus improving visibility.Click here for more details.  OpsGenieOpsGenie is an incident management tool that helps to determine who should respond to events. It’s from Atlassian. It also helps in defining collaboration methods like video conferences etc. It’s free for small teams up to 5 users.Availability: Paid version is available which varies with the number of users and add on features.Learn more about OpsGenie here.Pivotal TrackerPivotal Tracker is an agile project management tool. Pivotal tracker helps to create public and private projects. Private projects are accessible only to the collaborators and it's the default setting. Public projects are available via URL in read-only mode. Edit permissions are given only to an invitee to the project. Open source software development process makes use of public projects.Availability: Pivotal Tracker for two projects,2GB of file storage, and a total of three collaborators. An upgrade from this could be only in the paid version.For more details click here.Azure BoardAzure Board is a tracking tool from Microsoft Azure. It helps to track and plan your projects via kanban boards, team dashboards etc. It supports all agile methodologies. Built-in analytics provide information about project progress and status.Availability: Azure Board is free for up to 5 users and unlimited stakeholders.Click here to know more about Azure Board.  TasktopTasktop is a stream management tool to integrate and synchronize development and operations tools together. It helps in tracking tasks across different task tracking systems.Availability: Tasktop is not a free tool but paid.Learn more about Tasktop here.KanboardKanboard is an open-source project used for project management. 
It is known for its super easy installations, great visualisation of the project tasks and drag and drops feature for project management.Availability: Free version of Kanboard is available.Click here for more details about Kanboard.SL NoTool NameProsConsAvailability1JIRAWidely  usedEnterprise-gradeLearning curveComplex to configurePaid2Mantis Bug TrackerFree and good communityPaid Hosting option availableNeed experts to configureGood for defects and simple projectsFree & open source3.TrelloEasy to useEasy to configureNot ideal for large teams /programsFreemium4CollabNet VersionOneWidely  usedEnterprise-gradeRich featuresLearning curveLess intuitivePai5RallyEnterprise-gradeEasy to set upLess intuitive and complex to learnPaid6OpsGenieRich features for issue tracking and on-call managementFeatures are limited to issue trackingFreemium7Pivotal TrackerRich feature set for trackingIntuitive and easy to use  Integrability with other toolsFreemium8Azure BoardIntegrates well with Microsoft toolchainLacks richness in feature set  in comparison with other enterprise-grade tools in the same segmentPaid9TasktopGood for Value Stream ManagementIntegrability with other toolsPaid10KanboardSimple to useLimited feature setNot ideal for large teams/programsFree & open sourceCloud /iaas/paas/serverless toolsCloud along with Infrastructure as service and platform as service produces a platform for developing, testing and deployment of applications. Using such features DevOps reduces the much latency overload in acquiring and accessing assets. All private and public clouds provide support to DevOps tooling and thus reducing the cost spent for on-premises systems.Some of the platforms areAWSAmazon Web Services (AWS) is a cloud services platform, offering to compute power, database storage, content delivery, and other cloud-related functionalities.Availability: AWS is an on-demand cloud computing platform where we are charged on as you go basis.Learn more about AWS here.AWS LambdaLambda is a serverless computing platform from Amazon Web Services (AWS). It is a service that manages the computing resources and runs code in response to events.Availability: We are charged only on the computing time.Learn more about AWS Lambda here.AzureMicrosoft Azure is an enterprise-grade cloud computing service that helps in managing applications through Microsoft-managed data centers. It provides software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS).Click here to learn more about Azure.Google Cloud PlatformGoogle Cloud Platform, offered by Google, is a suite of cloud computing services. Platform as service, Infrastructure as a service, and serverless computing are provided by GCP.Click here to learn more about Google Cloud Platform.IBM cloudIBM Cloud is a suite of cloud computing services from IBM. It also provides infrastructure as a service (IaaS) and platform as a service (PaaS).Availability: Lite version of IBM Cloud is free and allows one instance per plan.Click here to learn more about IBM Cloud.OpenStackOpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service, whereby all virtual servers and other resources are made available to customers. It’s written in python.Learn more about OpenStack here.Cloud FoundryCloud Foundry is an open source cloud platform that helps to develop cloud applications. It’s from Pivotal.Learn more about Cloud Foundry here.HerokuHeroku is a  platform as a service cloud environment. 
Thus help developers to work entirely on cloud.Availability: Free version is available with limited features.Learn more about Heroku here.OpenWhiskApache OpenWhisk is an open source, distributed Serverless platform. OpenWhisk manages the infrastructure, servers and scaling using Docker containers.Click here for more details about OpenWhisk.SL NoTool NameProsConsAvailability1.AWSEnterprise-ready providerComplex cost structurePay as you go2.AWS LambdaServerless computingReduced operational costsLimit on concurrent executions after which causing Denial Of ServiceCharged for computing time3.AzureIntegrates better with many Microsoft toolsServices provided still needs to be improvedPay for resources used4.Google Cloud PlatformScalableBetter load balancingServerless computingCurrently, GCP has fewer services and features compared to AWS or AzurePay as you go5.IBM CloudEasy setupConsistent performanceDifficulty in scalingFree Lite version6.OpenStackMassive scalabilityEasy implementationComplex configurationsFreemium7.Cloud FoundrySupports on-premises and multi-cloud deploymentGreat privacy and securityLess feature set compared to AWS or AzurePaid8.HerokuAdvanced Continuous Integration PlatformHighly scalableLess reconfigurabilityFreemium9.OpenWhiskOpen event provider systemServerless computingNot efficient for long running applicationsPaid as per computing timeSource control managementSource control management as practice stores and tracks the application and infrastructure code. Even delivery pipelines for an application is nowadays stored in source code repositories. Some of the tools are GitHub, Bit Bucket, Subversion, Mercurial, Rational ClearCase.GitHubGitHub is a popular repository hosting service using Git. Git is a  free and open source system. It’s of the ease with which it performs branching and merge operations. It’s a distributed version control system that adds to its preferences.Click here for more details.  MercurialMercurial is a free distributed version control system. It is very easy to learn compared to Git but the branching feature of Git is more widely loved. Big and small projects could be handled in Mercurial.Click here for more details.BitbucketBitbucket is a repository hosting service from Atlassian. It could be used to store source code using  Mercurial or Git revision control systems. It’s free for teams with a maximum of 5 users. The paid version is available for bigger teams.Learn more about Bit Bucket here.Rational ClearCaseRational ClearCase is a source control management tool from IBM. It helps in the parallel development of software. Software artefacts whether it be source code or design documents etc could be managed by ClearCase. Enterprise version is available for ClearCase.Learn more about ClearCase here.SubversionSubversion is a version control system from Apache. It’s a free tool and open source.It helps to track down all the changes done to files and directories.Click here to learn more about Subversion.  JFrog ArtifactoryArtifactory is an artefact repository management tool from JFrog. It’s a paid tool. 
It primarily stores binary files which are typically the product of our build process.Click here to learn more about Artifactory.SL NoTool NameProsConsAvailability1.GitHubEasy to navigate user interfaceDifficult to learnFree2.MercurialCannot rewrite commit historySlower network operationsFree3.BitbucketSupports Git and MercurialDifficult integration with other toolsFreemium4Rational ClearCaseIntegrates with Microsoft Visual StudioNot suitable for projects with a big code baseDifficult to work withPaid5.SubversionEasy to learn even for non-technical usersSlower because of centralised version control systemFree6.ArtifactorySupports many languages and toolsEasy to useExpensivePaidPackage managersPackage managers build or package code with all metadatas like software’s name, purpose, version and all dependencies needed by the software to function correctly. It lessens the burden of manual installs especially in big enterprises where we need to install big software. Some of the tools available areMavenMaven is an open-source build automation tool from Apache used mainly for Java applications. Main features are it provides easy and uniform builds. It also keeps aside a parallel space for test code.Learn more details about Maven here.  GradleGradle is also an open source build tool from Apache. It is built on Groovy domain-specific language. It is more like a combination of Ant and Maven.Learn more about Gradle here.  MSBuildMicrosoft Build automation tool is a free and open source mainly for  C++ and .NET applications. Visual Studio makes use of MSBuild to build its applications.Learn more about MSBuild here.  SL NoTool NameProsConsAvailability1.MavenAll dependencies are downloaded automaticallyBetter suited for java projectsComplex to work withLarge learning curveFree2.GradleCan write build script ourselvesPoor integration with eclipseFree3.MSBuildGreat community supportMainly for .NET applications onlyFreeContinuous IntegrationIn continuous integration, a code is checked into the source code repository whenever a developer finishes a requirement or user story. Continuous Integration tools enable teams to build software application automatically in a decided time. Thus reducing the time elapsed in a manual build. Some of the popular tools available are   GitLab CIGitLab CI is an integrated part of GitLab, GitLab offers a continuous integration service.Availability: A free version is available with limited features.Learn more about GitLab CI here.  SemaphoreSemaphore is the fastest hosted continuous integration and delivery solution as claimed by its developers.Availability: Open source projects can use Semaphore for free in its full capacity, free use for private projects is limited to 100 builds per month.Learn more about Semaphore here.Circle CICircle CI's continuous integration and delivery platform make it easy for teams of all sizes to rapidly build and release quality software at scale. It is built for Linux servers and automates build, test and deployment processes.Availability: Circle CI has a free version available for a single container.Click here for more details about Circle CI.JenkinsJenkins is an open-source continuous integration tool written in Java. Jenkins is a fork by the core developers of Hudson after a dispute with Oracle. Jenkins is the most widely used CI tool. 
Availability: Both free and enterprise versions are available.Click here for more details about Jenkins.HudsonHudson is a continuous integration tool written in Java that runs in a servlet container such as Apache Tomcat or GlassFish.Click here for more details about Hudson.   CruiseControlCruiseControl is an open source continuous integration tool and extensible framework for facilitating a continuous build process. Distributed under a BSD-style license.Learn more about CruiseControl here.  BambooBamboo is a continuous integration (CI) server produced by Atlassian. Bamboo ties automated builds, tests, and releases together in a single workflow.Availability: Licensed version is available at a starting price of $10.Learn more about Bamboo here.Team Foundation BuildTeam Foundation Build (TFB) is part of the Team Foundation system and provides the functionality of a public build lab. With TFB, build managers can synchronize sources and compile.Click here to know more about Team Foundation BuildGumpApache Gump is an open-source continuous integration tool, designed with the overarching aim of ensuring that projects are compatible at both the API level and regarding.Learn more about Gump here.Travis CITravis CI is an open-source distributed continuous integration (CI) service used to build and test projects hosted on GitHub. Open source projects can freely avail Travis CI.Availability: Travis CI is free for first 100 builds but after which it is priced.Learn more about Travis CI here.TeamCityTeamCity is an open-source CI platform from Jet Brains. It’s known for easy user interface and support for Microsoft stack.Availability: Free version of TeamCity is available with the limited feature set.Click here to know more about TeamCity.  Puppet PipelinesPuppet Pipelines makes software delivery easy and unites silos of automation across Dev and Ops teams. It automates your application builds and deployments.The community edition of Puppet Pipelines is available free of cost for up to three users.Click here for more details about Puppet Pipelines.  
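Before comparing these CI servers, it helps to see what they all do at their core. The following is a deliberately simplified, hypothetical Python sketch of a single CI cycle: fetch the latest code, install/build it, run the test suite, and report a pass or fail. Tools such as Jenkins, GitLab CI or Bamboo add triggers, build agents, plugins and reporting on top of this same loop; the workspace path, the pip/pytest commands and the project layout below are assumptions for illustration only.

import subprocess
import sys

REPO_DIR = "/srv/ci/workspace/my-app"   # assumed location of an existing git checkout

def run(cmd):
    """Run one pipeline step inside the workspace; return True on success."""
    result = subprocess.run(cmd, cwd=REPO_DIR)
    return result.returncode == 0

def ci_cycle():
    steps = [
        (["git", "pull", "--ff-only"], "update sources"),
        ([sys.executable, "-m", "pip", "install", "-e", "."], "build/install"),
        ([sys.executable, "-m", "pytest", "-q"], "run tests"),   # assumes a pytest suite
    ]
    for cmd, label in steps:
        print(f"--- {label} ---")
        if not run(cmd):
            print(f"BUILD FAILED at step: {label}")
            return 1
    print("BUILD PASSED")
    return 0

if __name__ == "__main__":
    sys.exit(ci_cycle())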
SL NoTool NameProsConsAvailability1.GitLab CIEasy to configureSource control and continuous integration in one placeNeed GitLab integrationFreemium2.Semaphore CISimple and to the pointLess user base and community supportFreemium3.CircleCIEasy to useLess known in the community4.JenkinsUses plugin model to integrate with several DevOps toolsGreater community supportCumbersome groovy syntaxesFreemium5.HudsonJenkins forked from Hudson so has all basic features of JenkinsNo much development of new features taking placeLess community supportOpen source6.CruiseControlGoes well with .NET applicationsDifficult setupOpen source7.BambooA lot of tasks available as a built-in option and not as pluginsGoes well with Atlassian products like Bitbucket and JIRAOnly paid option availablePaid8.Team Foundation BuildWorks smoothly with .NET applicationsIntuitive easy to installInteroperability with other stacks is a challengePaid9.GumpIntegrates well with Apache tools like MavenLess plugin supportOpen source10.Travis CIEasy to set up and configureSupports most technological stacks using Node, Ruby, etcDoesn’t  support BitbucketFreemium11.TeamCityGreat user interfaceEasy to learnCommunity support is good but not greatFreemium12.Puppet PipelinesEasy setup and installationPlugin availabilityFreemiumContinuous Delivery and Deployment toolsContinuous deployment tools automate the delivery pipeline of application development, thus reducing the wastage of time caused by transfer between different teams like development and release teams. Few of the most popular deployment tools used by DevOps teams areChefChef is a tool used to manage and develop infrastructure. It could be used for application deployment also. It is an open-source tool but with an enterprise version available. Chef uses a  domain-specific language based on ruby to define and configure infrastructure. Chef allows high flexibility and typically preferred by developers. It has a higher learning curve compared to other tools in this space. Chef is known to be the most preferred tool for large scale, complex enterprise systems.Availability: Chef is free for up to a limited number of nodes which is five nodes now after which it’s priced.Learn more about Chef here.PuppetPuppet is another configuration management tool to define infrastructure as code. Puppet is an enterprise-grade tool. Puppet uses a more declarative language and hence makes it easier to work with. It’s preferred by operations teams as it doesn’t require programming skills.Learn more about Puppet here.Octopus DeployOctopus Deploy is a release management server from XebiaLabs. It’s used mainly for .NET applications and windows services. It’s a paid deployment as a service.Click here for more about Octopus Deploy.SpinnakerSpinnaker is an open source free release platform that increases the number of good-quality releases. This platform helps in deployment across multi-cloud providers like AWS EC2, Google Kubernetes Engine etc.Learn more about Spinnaker here.  GoCDGoCD is a free and open source server that helps in continuous delivery. It helps in creating a continuous delivery pipeline in cloud environments like Docker, AWS etc.Learn more about GoCD here.  UrbanCode DeployUrbanCode Deploy or uDeploy is a tool used to automate application deployment from IBM. It’s a licensed version and available as hosted services also.Click here to know more about UrbanCode Deploy.  XebiaLabs XL DeployXL Deploy is a release automation tool for any environment. 
It is a licensed version by XebiaLabs.Click here to know more about XL Deploy.AnsibleAnsible is an open source configuration management tool and application deployment tool. In comparison with Chef, Ansible works with a decentralised agentless architecture and hence it’s easy to get started with Ansible.Availability: CLI based Ansible is free for no limit on nodes.Learn more about Ansible here.  SaltStackSaltStack is an open-source configuration management software written in Python. It enables teams to craft  "Infrastructure as Code".SaltStack in comparison with Ansible is quickly scalable but enforces teams to learn python.Learn more about SaltStack here.  SL NoTool NameProsConsAvailability1.ChefGreat documentation availableHard to learnNeed programming skillsFreemium2.PuppetProgramming skills are not a mustNot much suitable for applications where updates are frequentPaid3.Octopus DeployEasy configurationIntegrates smoothly with TeamCityA quick and flexible deployment pipelineLess community support especially for non-Microsoft applicationsPaid4.SpinnakerGreatly preferred for cloud-based deploymentsLess community supportOpen source5.GoCDBetter suitable for end-to-end Continuous delivery pipeline where great visualisation needed.Less cost efficientA Steep learning curve with a confusing user interfaceOpen source6.UrbanCode DeploySimple and easy to useSlower deploymentsPaid7.XL DeployA large number of plugins availableLesser visibility for the deployment processPaid8.AnsibleSimpler installationEasy to useGUI is not that greatNo support for windowsOpen source9.SaltStackQuickly scalableUnderdeveloped GUIOpen sourceTesting automationTesting automation tools are used in close proximity to continuous integration and deployment tools. It helps in performing repetitive tasks unable to perform by manual tests. The automated test gives a more clear picture of the health of the software product without any bias.unit testing:Unit testing tools help to test a single unit or component of the software. Thus detecting the errors earlier and fixingUnit testing helps in smooth integration.Integration testingIntegration testing tools help validate every integration that happens in the integration phase. Only successful build of the code move to the next stage.End-to-end testingIn end-to-end testing, the entire system or application is checked from start to finish. The tools generate the reports which can be used to verify whether the new change is causing  any unexpected behavior from the entire systemPerformance testingPerformance testing tools analyse the system in an expected workload. The tools measure the responsiveness of the system, scalability, and stability. Tools also provide details on where the system is failing and where the system needs improvement.Infrastructure testing and auditingInfrastructure testing plays a very important part as an error in the infrastructure code can even alter the production environment creating unseen repercussions. Ensuring the compliance of an organisation is an integral part of such tools keeping security in mind.Some of the popular tools used areSeleniumSelenium is a free and open source testing framework for web applications. It’s a suite of four tools Selenium WebDriver, Selenium RC or Remote   Selenium IDEAnd Selenium-Grid.Learn more about Selenium here.CucumberCucumber is an open source testing tool. It’s the best choice for behavior driven development popularly known as BDD as it tests business readable requirements. 
Free and enterprise version of Cucumber is available.Click here for more details about Cucumber.InSpecInSpec is a free and open source testing framework from Chef. InSpec tests infrastructure. It’s also a compliance framework.Click here to know more about InSpec.  KarmaKarma is a free test runner created for testing, applications made with Angular CLI. It’s from the AngularJS team.Learn more about Karma here.Jasmine  Jasmine is an open source testing framework. It is used mainly for JavaScript applications. It’s used for behavior driven development also.Click here to learn more about Jasmine.UFTUFT or Unified Functional Testing is a test automation tool for web, desktop, mobile   Micro Focus. There is a 60-day free trial version available but after which it’s not a free tool.Learn more about UFT here.  SoapUISoapUI is an open-source testing tool for web applications. It’s the market leader in API testing. It’s a licensed tool.Click here to know more about Soap UI.  JMeterJMeter is an open source load testing tool from Apache. It helps in analyzing the performance of services mainly for web applications. It’s a free test suite.Learn more about JMeter here.  SL NoTool NameProsConsAvailability1.SeleniumWide range of languages supportedEasy integration with Jenkins, MavenDifficult to useNo support officiallyOpen source2.CucumberGreat documentationSupports Behavior-driven developmentSlow compared to other testing toolsOpen source3.InSpecHighly flexible and can be used cross any Infra As code framework.Need to know the scripting languageOpen source4.KarmaEasy debuggingLesser user baseOpen source5.JasmineDifficult to debugEasy to set up and useOpen source6.UFTEasily integrated with continuous integration DevOps toolsLess compatibility with different operating systems ExpensivePaid7.SoapUIUser-friendlyPlugin availability is lessOpen source8.JMeterEasy installation Great user-friendly interfaceHigher learning curve Doesn’t support javascriptOpen sourceRelease orchestrationRelease orchestration tools are used to achieve automation of the application release process. Some of the popular tools are Xebialabs XL release, Plutora Release, AWS Codepipeline, CACD Director, OpenMake, Spinnaker, HashiCorp Vault, SonarQube, BlackDuck, Signal Sciences, Checkmarx SAST.ContainerologyContainerology tools help to run an application on the virtual environment as a package with all dependencies. It avoids the situation “it doesn’t work in my system”.Some of the tools are:DockerDocker is a platform for working with containers, from Docker, Inc.Docker is an open source and available as free and enterprise version. Containers help to develop applications and package it with its dependencies and libraries, thus ensuring the application runs in any environment. Docker containers are like virtual machines but share the same OS resources like file system etc.It has less overhead unlike VM 's. The building block of a container is an image which is the executable package including libraries, dependencies, environment variables etc needed to run the application. Running instance of an image is a container.Learn more about Docker here.  KubernetesKubernetes is an open source production grade container orchestration tool. It helps in managing multiple containers in an application. Kubernetes is the market leader in this category. It is often compared with Docker Swarm which is the native clustering method for docker.  Click here to know more about Kubernetes.  
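As a small illustration of the image/container relationship described above, the following Python sketch drives the Docker command line to build an image from a Dockerfile and start a container from it. It assumes Docker is installed and a Dockerfile is present in the current directory; the image name, container name and published port are made up for the example.

import subprocess

IMAGE = "my-app:latest"   # hypothetical image tag

def sh(args):
    """Run a Docker CLI command and fail loudly if it errors."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

if __name__ == "__main__":
    # Build an image: the executable package with code, libraries and dependencies.
    sh(["docker", "build", "-t", IMAGE, "."])
    # Run a container: a running instance of that image, with port 8000 published.
    sh(["docker", "run", "--rm", "-d", "-p", "8000:8000", "--name", "my-app", IMAGE])
    # List running containers to confirm it is up.
    sh(["docker", "ps", "--filter", "name=my-app"])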
OpenShiftOpenShift from Red Hat is a group of containerization software. OpenShift Container Platform is the major software in the group that provides a platform as a service built around Docker containers. These docker containers are managed for experimenting by an individual a free version is available for one project.Learn more about OpenShift here.  SL NoTool NameProsConsAvailability1.DockerContainers are lightweight compared to virtual machinesSecurity is a concernOpen source2.KubernetesHighly scalableWork better with CI/CD pipelinesLess user-friendlyOpen source3.OpenShiftGreat community supportOnly supports Red Hat Enterprise LinuxFreemiumMonitoring ToolsMonitoring tools help to pinpoint and track issues and verify the health of the system. This enables fast recovery of the system with minimum or no human interventions.Popular tools arePrometheusPrometheus is an open source monitoring tool from SoundCloud.Its mainly used with systems using microservices as it has a multi-dimensional data collection feature. It uses a flexible query language PromQL.All Prometheus server is standalone and doesn’t depend on network storage helping us to understand the defect especially during outages. Collected data through the multi-dimensional collection feature may not be too detailed and having complete information. So it is not suitable for systems where 100% accuracy is required.Click here for more details about Prometheus.SplunkSplunk as a monitoring tool is used across application management, security, compliance, web analytics etc. Splunk tools listen and store data, index the same and correlate the captured real-time data in a searchable repository from which it can generate useful graphs, reports, alerts, and various other visualizations. One can create and configure relevant dashboards based on various visualizations/ graphs.  Learn more about Splunk here.NagiosNagios is an open source and free tool to monitor services, applications and infrastructure. It’s known for its auto-discovery feature. Its user interface is a bit difficult for beginnersLearn more about Nagios here.  ZabbixSimilar to Nagios, Zabbix is an enterprise open source monitoring solution. Compared to Nagios, Zabbix is user-friendly and is comparatively easy to configure. The main disadvantage is that it doesn’t support plugins.Learn more about Zabbix here.ZenossZenoss is a free, open-source tool used for services and network monitoring. It is written in Python language.Click here to know more about Zenoss.ELK Stack"ELK" is the acronym for three free open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine based on Java. 
Logstash is a server‑side data processing platform with the ability to clean, transform data and send it to Elasticsearch.Kibana is a virtualization tool that helps to visualize data with charts and graphs in Elasticsearch.The Elastic Stack is the next evolution of the ELK Stack.Learn more about ELK Stack here.SL NoTool NameProsConsAvailability1.PrometheusEasy to useEasy integration with other DevOps toolsBad user interfaceOpen source2.NagiosNumber of plugins available in the marketDifficult configurations needed to make the system stableOpen source3.ZabbixEasy configuration based on a web-based user interfaceNon-availability of pluginsOpen source4.ZenossGreat community supportLimitation on the number of devices monitoredOpen source5.ELK StackEasy to installHighly customisableDifficult to configureOpen source6SplunkEasy to installUser-friendlyEasy to configure  simple graphsFor complex configurations, the learning curve is a bit steepPaidAnalyticsAnalytics tools give a clear picture of what is happening in the team, be it code development or team interaction, code coverage and efficiency etc. Some tools used are XebiaLabs XL Impact,New Relic,Dynatrace,Datadog,AppDynamics, ElasticSearch .How to choose the right DevOps tools?Today the DevOps market is overcrowded with tools across different stages of software development life cycle. As enterprises, it's extremely crucial to select the right tools in order to get maximum benefit. Saying so choosing the right tool is an extremely difficult and time-consuming process given the spectrum of tools available today. Hence enterprises should have a five-point strategy towards deciding the right tools. Five point strategy would include dimensions likeAbility to integrateScalabilitySecurityTechnical know howReliabilityAbility to integrateThe ability to integrate is extremely crucial and is one of the fundamental requirements while checking out tools. Certain tools integrate smoothly with a particular technology stack in comparison with others. Hence it is vital for the DevOps architect to compare different tools on the basis of integration ability and ensure that tool that is been selected seamlessly integrates with the team’s technology stack. Another aspect that needs to be considered is how a particular tool integrates with other tools that are selected in the ecosystem. For eg., you would want your continuous integration system to constantly talk to the reporting system and alert prediction system in a smooth way. Hence integration between tools also becomes a very important factor while choosing tools.ScalabilityScalability is the second most important factor in choosing the right tools. Based on the need for scalability an enterprise might choose an enterprise version over a community version. Scalability also is a key factor why certain companies go for SaaS-based products. SaaS-based products are easily scalable and hence without any overhead, it can be adopted across large enterprises.SecurityThese days a lot of enterprises are emphasising on the need for security in the DevOps tooling space. Hence enterprise versions by various tooling companies have taken special care towards addressing these security-related issues. Thus enterprise versions are comparatively more preferred in comparison with that of open source solutions. Saying so this doesn't mean that all open source DevOps tools have security vulnerabilities. 
Certain open source DevOps tools fair much better than available enterprise versionsTechnical know howThis people dimension is one of the factors that is typically overlooked by enterprises. Knowing the skill levels and capability of team members is a key towards choosing the right tool. Often the tools available in the market wouldn't work out of the box and would need a substantial level of customisation to smoothly integrate with existing systems and workflows. Also, certain tools require a certain specific skill set towards configuration and customisation. Typical eg. is Chef, which is chosen by developers who are comfortable in ruby language whereas Puppet is preferred by system admins as it does not require much of programming skills.ReliabilityLast but not the least, reliability is extremely crucial for any successful tool adoption. Most of the tools available in the market, both enterprise and open source needs to be checked using this quality wheel. Tools should be reliable even during large scale and complex operational conditions. ConclusionIn this paper, we discussed the what, why, and how of DevOps.We also deep dived into various tool categories and tools available across the spectrum in today's DevOps market. Tools are definitely the key ingredients in successful DevOps adoption but saying so a lot of companies only invest in tool part without focusing on cultural and people dimensions. In order for tools to bear fruit its vital that the people operating and analysing the tools/data understand and realise the true spirit of DevOps. To conclude, would like to resonate with the wise words “yes, we need all the tools that can help us, but just tools will not help us get there!”.
What is DevOps

You landed up here which means that you are willing to know more about DevOps and hey, you can admit it! And of course, the business community has been taking this trend to the next level not because it looks fancy but of course, this process has proven the commitment. The growth and adoption of this disruptive model are increasing the productivity of the company.  So here, we will get an idea of how this model works and how we can enable this across the organization. According to DevOps.com, during the period of the year 2015 to 2016, the adoption rate of DevOps increased significantly, with a rise of 9 per cent in its usage rate. You may have a look at DevOps Foundation training course to know more about the benefits of learning DevOps.1. What is DevOpsDevOps is a practice culture having a union of people, process, and tools which enables faster delivery. This culture helps to automate the process and narrow down the gaps between development and IT. DevOps is a kind of abstract impression that focuses on key principles like Speed, Rapid Delivery, Scalability, Security, Collaboration & Monitoring etc.A definition mentioned in Gartner says:“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology— especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”2. History of DevOpsLet's talk about a bit of history in cloud computing. The word “cloud compute” got coined in early 1996 by Compaq computer, but it took a while to make this platform easily accessible even though they claimed its a $2 billion market per year.  In August 2006 Amazon had introduced cloud infra which was easily accessible, and then it became a trend, and post this, in April 2008 Google, early 2010 Microsoft and then April 2011 IBM has placed his foot in this vertical. This showed a trend that all Giants are highly believed in this revolution and found potential in this technology.And in the same era DevOps process got pitched in Toronto Conference in 2008 by Patrick Debois and Andrew Shafer. He proposed that there is a better approach can be adopted to resolve the conflicts we have with dev and operation time so far. This again took a boom when 2 Flickr employee delivered a  seminar that how they are able to place 10+ deployment in a day. They came up with a proven model which can resolve the conflicts of Dev and operation having component build, test and deploy this should be an integrated development and operations process.3. Why market has adopted so aggressivelyLouis Columbus has mentioned in  Forbes that by the end of 2020 the 83% of the workload will be on the cloud where major market contributor will be AWS & Google. The new era is working more on AI, ML, Crypto, Big Data, etc. which is playing a key role in cloud computing adoption today but at the same time, IT professionals say that the security is their biggest concern in adopting a cloud computing. 
Moreover, the cloud helped many startups grow in their initial stages and later become leaders in their marketplaces, which has given confidence to fresh ideas.This entire cloud enablement has given teams more confidence to adopt a DevOps culture, as cloud expansion makes it possible to experiment more with less risk.4. Why is DevOps used?To reduce day-to-day manual work done by the IT teamTo avoid manual errors while designing infrastructureTo enable a smooth platform for technology or cloud migrationTo give amazing version control capabilitiesTo handle resources better, whether cloud infrastructure or manpowerTo give an opportunity to pitch customers a realistic feature rollout commitmentTo adopt a better infrastructure scaling process even when you receive 4X trafficTo enable the opportunity to build a stable infrastructure5. Business need and value of DevOpsLet's understand this through the story of the leading online video streaming platform Netflix. Blockbuster LLC got an opportunity in 2000 to buy Netflix for $50 million. Netflix was previously working only on a DVD-by-mail service, but in 2016 Netflix made a business of $8.83 billion, which was tremendous growth in this vertical. Any idea how this happened? It started with an incident at the Netflix office where, due to a database corruption, DVD shipping was disrupted for 3 days, which forced management to move to the cloud from the relational systems in their data centers, because the incident made a massive impact on core values. The shift happened from vertical to horizontal scaling, with AWS later providing the cloud service; reportedly, in the early stages Netflix worked together with the AWS team to scale the infrastructure. Today Netflix serves roughly 86,000,000 members across 180 countries with around 150,000,000 hours of video content.6. Goals Of DevOpsControl quality and increase the frequency of deploymentsAllow risk-free experimentsImprove mean time to recovery and backupHandle release failures without losing live dataAvoid unplanned work and technical failureAchieve a compliance process and control over the audit trailAlert and monitor system failures at an early stageMaintain SLAs in a uniform fashionEnable control for the business team7. How does DevOps work?The DevOps model usually keeps the Development and Operations teams tightly coupled, or sometimes they are merged, and together they roll out the entire release cycle. Sometimes development, operations and the security & network teams are all involved, and this has slowly been coined DevSecOps. The integration of these teams makes sure that they are able to handle development, testing, deployment, infrastructure provisioning, monitoring, network firewalling, and infrastructure accessibility and accountability. This helps them build a clean application development lifecycle to deliver a quality product.8. DevOps workflow/LifecycleDevOps Workflow (Process)The DevOps workflow ensures that we are spending time on the right thing, that is, that the right time is invested in building the product/infrastructure. How it enables this can be analyzed in the diagram below. Looking at the diagram, the DevOps process seems an extended version of agile methodologies, but that doesn’t mean it cannot fit other SDLC methodologies; there is enough scope in other SDLC processes as well. Once we merge the process and tools workflow diagrams, they showcase a general DevOps environment. 
The team keeps pushing releases, and at the same time, by enabling automation and tools, it tries to maintain both quality and speed.

DevOps Workflow (Process)
DevOps Workflow (Tool)

9. DevOps values
I would like to split DevOps values into two groups: 1) business values and 2) organizational values.

Business values are mostly customer-centric:
a) How fast can we recover from a failure?
b) How can we pitch the exact MRR to a customer and acquire more customers?
c) How fast can we deliver the product to customers?
d) How quickly can we roll out beta access when there is an on-demand requirement?

Organizational values:
a) Building culture
b) Enabling communication and collaboration
c) Optimizing and automating the whole system
d) Enabling feedback loops
e) Decreasing silos
f) Metrics and measurement

10. Principles of DevOps
Automation: Automate as much as you can, in a linear and agile manner, so that you can build an effective end-to-end automated pipeline for the software development life cycle, covering quality, rework, manual work, and cost. It is not only about the delivery cycle; it also covers migration from one technology to another, from one cloud to another, and so on.

Collaboration: The goal of this culture is to keep a hold on both development and operations, watching for and fixing gaps so that things keep moving in an agile way, which needs a good amount of communication and coordination. By encouraging a collaborative environment, an organization gathers plenty of ideas that help resolve issues far faster. The beauty of collaboration is that it deals with unplanned and manual work at an early stage, which results in a quality build and process.

Customer-centric approach: A DevOps team always reacts like a startup and must keep a finger on the pulse of customer demand. The metrics it generates give the business team insight into usage trends and burn rate. Of course, to find the signal in the noise, you should stay focused and collect only the metrics that really matter.

Performance orientation: Performance is a principle and a discipline that helps the team understand the implications of bad performance. Having metrics and reports at hand before moving to production gives confidence to both technology and business. It also provides an opportunity to plan how to scale the infrastructure, how to handle a huge spike or sustained high usage, and how well the infrastructure is utilized.

Quality indicators per application: Another predefined principle is to set measurable quality indicators. Assigning a quality gate to indicators with predefined targets, covering fitness for purpose and security, makes it possible to deliver a complete, quality application.

11. DevOps key practices
i) CI/CD
When we say "continuous", it does not translate to "always running", but rather "always ready to run". Continuous integration is the development philosophy and set of practices that drive teams to check code in to the version control system as often as possible. To keep the build clean and QA-ready, each developer's changes need to be validated by running automated tests against the build, for example with JUnit or iTest.
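To make the practice concrete, below is a minimal, hypothetical pipeline sketch. It is written in GitHub Actions syntax purely for illustration (the article does not prescribe a particular CI server; Jenkins, discussed later, can express the same steps), and it assumes a Java project built with Maven whose JUnit tests run on every push:

```yaml
# Hypothetical CI pipeline: build and run the JUnit test suite on every change.
name: ci

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      # Check out the commit that triggered the run.
      - uses: actions/checkout@v4
      # Provide a JDK for the Maven build.
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # Compile and run the tests; a failing test fails the build,
      # so integration problems surface within minutes of the commit.
      - run: mvn --batch-mode verify
```

The pipeline itself is deliberately small; the point is that every commit triggers the same build and test steps, so the build stays clean and QA-ready as described above.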
The goal of CI is to provide a consistent, automated way to build and test applications, which results in better collaboration between teams and, eventually, a better-quality product. Continuous delivery is an extension of CI that makes sure we can release new changes to customers quickly and in a sustainable way.

A typical CD pipeline involves the steps below:
- Pull code from a version control system such as Bitbucket and execute the build.
- Execute any required infrastructure steps (command line or script) to stand up or tear down cloud infrastructure.
- Move the build to the right compute environment.
- Handle the whole configuration generation process.
- Push application components to their appropriate services, such as web servers, API services, and database services.
- Execute any steps required to restart services or call the service endpoints needed for new code pushes.
- Execute continuous tests and roll back the environment if tests fail.
- Provide log data and alerts on the state of the delivery.

The comparison below helps clarify the effort we need to put in and what we gain once CI/CD is in place:

Continuous integration
Effort required:
a) Your team will need to write automated tests for each new feature, improvement, or bug fix.
b) You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed.
c) Developers need to merge their changes as often as possible, at least once a day.
Gain:
a) Control over regressions, which are captured at an early stage by automated testing.
b) Less context switching, as developers are alerted as soon as they break the build and can fix it before moving on to another task.
c) Building the release is easy, as all integration issues have been solved early.
d) Testing costs are reduced drastically; your CI server can run hundreds of tests in a matter of seconds.
e) Your QA team spends less time testing and can focus on significant improvements to the quality culture.

Continuous delivery
Effort required:
a) You need a strong foundation in continuous integration, and your test suite needs to cover enough of your codebase.
b) Deployments need to be automated. The trigger is still manual, but once a deployment is started there should be no need for human intervention.
c) Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production.
Gain:
a) The complexity of deploying software is taken away; your team no longer has to spend days preparing for a release.
b) You can release more often, thus accelerating the feedback loop with your customers.
c) There is much less pressure on decisions for small changes, which encourages iterating faster.

ii) Maintaining infrastructure in a secure and compliant way
Keeping infrastructure secure and compliant is also a DevOps responsibility, and some organizations now pitch it as SecOps. General, traditional security methodologies and rules tend to fail when a multi-layer or multi-cloud system is running behind your product, and they fail even harder once you move to a continuous delivery process. The job is therefore to ensure that the team has clear visibility of the risks and vulnerabilities that could cause a big problem.
The basic rules below help avoid small loopholes (specific to GCP):

Level 1:
- Define your resource hierarchy.
- Create an Organization node and define the project structure.
- Automate project creation, which helps achieve uniformity and testability.

Level 2:
- Manage your Google identities.
- Synchronize your existing identity platform.
- Have single sign-on (SSO).
- Do not add multiple users directly; instead, grant resource-level permissions.
- Be specific when granting resource access, including the action type (read, write, or admin).
- Always use service accounts to access metadata, because they authenticate with keys instead of passwords, and GCP rotates the service account keys for code running on GCP.

Level 3:
- Add a VPC for the network definition and create firewall rules to manage traffic.
- Define specific rules to control external access and avoid leaving unwanted ports open.
- Create a centralized network-control template that can be applied across projects.

Level 4:
- Enable centralized logging and monitoring (Stackdriver preferred).
- Enable audit logs, which collect Admin, System Event, and Data Access activity in terms of who did what, where, and when.

Level 5:
- Enable Coldline storage if a copy needs to be kept for disaster management.
- For a comparable security standard on AWS, see the article I posted a few months back.

12. DevOps myths, or what DevOps is not
Before listing the myths, let me clear up the biggest one, which every early-stage learner carries: "DevOps practice can be rolled out in a day and the output will be available from day one." That conclusion comes far too early; by definition DevOps is a culture and a process, and those cannot be built in a day. What you do get is the opportunity to catch and correct your mistakes at an early stage. Let's discuss a few more myths:
- It is only about the tools (tools are just one component of the whole DevOps practice).
- Dev and Ops teams must use the same set of tools (the real point is to push them to integrate their toolchains).
- Only startups can follow this practice (Azure has published an article on DevOps best practices which says it can be applied anywhere).
- Attending DevOps or DevOps-tool conferences and collecting fancy stickers makes you DevOps (it is good that you join, but do not pretend that you now carry the DevOps tag).
- Pushing a build to production every 5 minutes (this is not what continuous delivery means).
- DevOps does not fit the existing system (you may simply need to find the right approach to attempt it).

13. Benefits of DevOps
Business benefits
a) Horizontal and vertical growth: When I use "horizontal and vertical growth", I am plotting customer satisfaction and business against time. The question is how DevOps drives growth on both axes, and my answer is the quick turnaround time for minor and major issues. Once we adopt DevOps, we scale and build in such a fashion that the graph shows a rapid jump in less time.

b) Improving the ROI of data: Having DevOps in an organization ensures that we can get a decent ROI from data more quickly, at an early stage. Even a rough survey shows that today's software industry runs on data, so a team should have end-to-end control over it. DevOps helps the team crunch data in various ways by automating small jobs.
Through automation we can segregate and qualify data, and then surface it either in a dashboard or in offline reports presented to the customer.

Technical benefits
c) Scalability and quality: When a business starts reaching more users, we look at increasing infrastructure and bandwidth. That raises two questions: are we scaling our infrastructure the right way, and, with many people pushing changes (code commits and builds), is quality staying the same or getting better than before? Both questions are now owned by the DevOps team. If the business pitches that we may soon hit 2,000+ clients generating billions of requests and we must be ready to handle it, DevOps takes that responsibility and says the infrastructure can be scaled at any point in time. If at the same time the internal release team says it wants to deliver ten features independently in the next ten days, DevOps says quality can still be maintained.

Culture benefits
d) Agility and velocity: A key reason for adopting DevOps is to improve the velocity of product development. DevOps enables agility, and when both are in sync the gain in velocity is easy to observe. End-user expectations are always high, and at the same time delivery timelines are short. To meet them, we have to be able to roll out new features to customers at a much higher frequency; otherwise competitors may win the market.

e) Enabling transparency: A practice of total transparency is a key effect of the DevOps culture. Sharing knowledge across the team lets you work faster and stay aligned with the goal. Transparency encourages an increasingly well-rounded team with a deeper shared understanding.

14. How to adopt a DevOps model
The ideal way is to pick a small part of the project or product, although in practice adoption often starts only when we hit a bottleneck. Wherever you start, a few things need to be taken care of: the goal should be clear and the team in sync to chase it; the loopholes that turn into downtime should be identified; testing (stress, performance, load) should be used to avoid production glitches; and an automated deployment process should be enabled at the same time. All of this can start from a basic plan and be elaborated in detail as you move forward. While adopting a DevOps model, make sure the team is always looking at metrics, so it can justify the numbers and test its assumptions against the goal. If you want a roadmap for DevOps adoption, you really need to find the gaps up front, along with the typical everyday problems that hold up your release process or waste your team's time.

15. DevOps automation tools
Jenkins: Jenkins is an open-source automation server used to automate building, delivering, and deploying software. It can be installed through native system packages or Docker, or run standalone on any machine with a Java Runtime Environment (JRE) installed. In short, Jenkins enables continuous integration, which helps accelerate development. There are plenty of plugins available that integrate it with the various DevOps stages, for example Git, Maven 2 project, Amazon EC2, and HTML Publisher.
More in-depth information can be found in our training material on Jenkins, and if you are curious about the sort of Jenkins questions asked in job interviews, feel free to view our set of 22 Jenkins interview questions.

Ansible: An open-source IT automation engine that takes the drudgery out of day-to-day DevOps work. Ansible usually helps with three everyday tasks: provisioning, configuration management, and application deployment. The beauty of it is that it can automate traditional servers, virtualization platforms, or the cloud. It is built around playbooks, which can be applied to a wide variety of systems for deploying your application. To know more, you may have a look at our Ansible training material or go through our set of 20 interview questions on Ansible.

Chef: An open-source configuration management system that works on a master-client model. It has a transparent design and works from instructions that need to be defined properly. Before you plan to use this tool, make sure you have a proper Git practice in place and some familiarity with Ruby, as Chef is built entirely on Ruby. The industry view is that it is a good fit for development-focused environments and mature enterprise architectures. Our comprehensively detailed training course on Chef will give you more insight into this tool.

Puppet: Puppet also works as a master-client setup and is model-driven. It is built on Ruby, but its configuration language is a declarative one of its own, somewhat close to JSON. Puppet gives you control over full-fledged configuration management and helps admins (as part of DevOps) add stability and maturity to configuration management. A more detailed explanation of Puppet and its functionality can be found in our training material.

Docker: A tool designed to give developers and administrators the flexibility to reduce the number of systems they need, since Docker does not create a complete virtual operating system; instead, applications share the same Linux kernel as the host. In short, we use Docker to create, deploy, and run applications in containers. According to statistics published by Docker, over 3.5 million applications have been placed in containers using Docker, and 37 billion containerized applications have been downloaded. In CI/CD specifically, Docker makes it possible to reproduce a live server exactly and to run multiple development environments from the same host with different configurations and operating systems. You may visit our training course on Docker for more information.

Kubernetes: A platform developed to manage containerized applications, providing high availability and scalability. Depending on usage, we can scale the system up or down, perform rolling updates, roll back releases, and switch traffic between two different versions of the application. Multiple instances with Kubernetes installed can be operated as a Kubernetes cluster; after that you get the cluster's API endpoint, configure kubectl, and you are ready to serve.
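As a minimal sketch of what the scaling and rolling-update behaviour above looks like in practice, the hypothetical manifest below (the application name, image, and replica count are illustrative assumptions, not taken from the article) asks Kubernetes to keep three replicas of a containerized web application running and to replace them gradually whenever a new version is rolled out:

```yaml
# Hypothetical Deployment: keep 3 replicas of a web app and
# replace them gradually when the image version changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # scale up or down by changing this number
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one replica is down during an update
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example/web-app:1.0   # assumed image name, for illustration only
          ports:
            - containerPort: 8080
```

Applying the manifest with kubectl apply -f and later changing the image tag is what triggers the rolling update; kubectl rollout undo brings back the previous version if something goes wrong.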
Read our all-inclusive course on Kubernetes to gather more information. Docker and Kubernetes, although both widely used as DevOps automation tools, have notable differences in their setup, installation, and attributes, which are clearly explained in our blog on the differences between Docker and Kubernetes.

Alerting:
Pingdom: Pingdom is a monitoring platform that checks the availability, performance, transactions (website hyperlinks), and incidents of your websites, servers, or web applications. The beauty of it is that if you use a collaboration tool such as Slack or Flock, you can integrate it through a webhook (much simpler, no code required) and easily get notified at any time. Pingdom also provides an API, so you can build your own customized dashboard (recently introduced), and the documentation is detailed and self-explanatory.

Nagios: An open-source monitoring tool for computer networks. We can monitor servers, applications, incidents, and more, and configure email, SMS, and Slack notifications, and even phone calls. Nagios is licensed under the GNU GPLv2. Some of the major components that can be monitored with Nagios:
- Once Nagios is installed, we get a dashboard to monitor network services such as SMTP, HTTP, SNMP, FTP, SSH, and POP, and can view the current network status, problem history, log files, notifications triggered by the system, and so on.
- We can monitor server resources such as disk drives, memory, processor, server load, and system logs.

Image copyright Stackdriver - D-4

Stackdriver: Stackdriver is another monitoring tool that gives visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver Monitoring collects events and metadata from Google Cloud Platform and Amazon Web Services (AWS), consumes the data, and generates insights via dashboards, charts, and alerts. For alerting, it integrates with collaboration tools such as Slack, PagerDuty, HipChat, Campfire, and more.

Image copyright Stackdriver - D-2

Here is a sample (redacted) log entry showing the parameters Stackdriver collects, grouped so it is easier to see what is actually logged: log information; user details and authorization info; request type and caller IP; resource and operation details; timestamp and status details.

    {
      insertId:
      logName:
      operation: {
        first:
        id:
        producer:
      }
      protoPayload: {
        @type:
        authenticationInfo: {
          principalEmail:
        }
        authorizationInfo: [
          0: {
            granted:
            permission:
          }
        ]
        methodName:
        request: {
          @type:
        }
        requestMetadata: {
          callerIp:
          callerSuppliedUserAgent:
        }
        resourceName:
        response: {
          @type:
          id:
          insertTime:
          name:
          operationType:
          progress:
          selfLink:
          status:
          targetId:
          targetLink:
          user:
          zone:
        }
        serviceName:
      }
      receiveTimestamp:
      resource: {
        labels: {
          instance_id:
          project_id:
          zone:
        }
        type:
      }
      severity:
      timestamp:
    }

Monitoring:
Grafana: An open-source visualization tool that can be used on top of different data stores such as InfluxDB, Elasticsearch, and Logz.io. We can create comprehensive charts with smart axis formats (such as lines and points) thanks to Grafana's fast, client-side rendering, even over long time ranges, which uses Flot as the default option. There are three levels of access (Viewer, Editor, and Admin), and Google authentication can be enabled for good access control.
A detailed information guide can be found here.

Image copyright Stackdriver - D-5

Elasticsearch: An open-source, real-time, distributed, RESTful search and analytics engine. It ingests unstructured data and stores it in a curated format that is optimized for language-based search. The beauty of Elasticsearch is that it is scalable, fast, document-oriented, and schema-free. It scales horizontally to handle huge volumes of events per second, while automatically managing how indices and queries are distributed across the cluster for smooth operation.

Cost optimization:
reOptimize.io: Once we run plenty of servers, we usually end up burning a good amount of money, not intentionally but simply because we lack clear visibility. reOptimize helps by providing detailed insight into cloud expenses. The integration can be done in 3-4 simple steps, but before that you may need to look at the prerequisites, which can be accessed here. Just a heads-up: it only needs read access for all of this, and the permission docs can be found here.

Image copyright reOptimize - D-6

16. DevOps vs Agile
- DevOps is a culture enabled in the software industry to deliver reliable builds, whereas Agile is a generic culture that can be deployed in any department.
- The key focus of DevOps is involvement across the end-to-end process, whereas Agile helps the management team push frequent releases.
- DevOps enables quality builds with rapid delivery, whereas Agile keeps the team aware of frequent changes to any release or feature.
- Agile sprints work within the immediate future, with a sprint life cycle of 7 to 30 days, whereas DevOps has no such fixed schedule and instead works to avoid unscheduled disruptions.
- Team size also differs: an Agile team can be as small as a single person, whereas DevOps relies on collaboration and bridges a much larger set of teams.

17. Future of DevOps
The industry is moving further onto the cloud, which adds a few more responsibilities for DevOps. The immediate hot topic is likely to be DevSecOps, because more automation leads to more connectivity, which means more exposure. AI and ML are data-centric and learning-based, which gives DevOps an opportunity to help train ML models and to run analyses correlating code, test results, user behavior, and production quality and performance. There is also an opportunity to break the stereotype that DevOps can only be adopted by startups; within the next 2-3 years it will surely become general practice in the enterprise.

18. Importance of DevOps training and certification
Certifications work like an add-on, and an add-on always gives some crisp, cumulative results; adding a professional certificate to a resume adds value in the same way. During a certification, professionals help you understand the details, and the deep dive into DevOps culture helps the individual get a clear picture. When choosing a certification, you may want to look into the vendor's reputation, the academic body giving accreditation, the transparency, the session hours, and several other factors.

19. Conclusion
I have been working closely with and observing a DevOps team for a year or so, and I find that we learn something new every day. The deeper we dive, the more we see that a lot can be done and achieved in a couple of different ways. As the industry grows, the responsibilities of DevOps seem to increase, which creates opportunities for professionals but keeps setting a new bar for quality. Now that you know all about DevOps, feel free to read our blog on how you can become a DevOps Engineer.