Docker Vs Virtual Machines (VMs)

Let’s have a quick warm-up on resource management before we dive into the discussion on virtualization and Docker.

In today’s multi-technology environments, working on different software and hardware platforms simultaneously has become inevitable.

The need to run many different machine platforms (desktops, laptops, handhelds, and servers), each with customized hardware and software requirements, has given rise to a new world of virtualization in the IT industry.

What does a machine need?

Each computing environment (machine) needs its own set of hardware and software resources.

As more and more machines are needed, building and administering many such stand-alone machines is not only cumbersome and time-consuming but also adds to cost and energy consumption.

A better idea is to consolidate all the hardware and software requirements into one place: run a single customized, high-powered, scalable server and have it distribute resources to many machines over a network.

That saves us time, resources, energy and revenue.

A Server with many hardware components installed in a datacenter

These gigantic servers are housed in a dedicated facility called a datacenter.

Diagram (2) below shows a single server serving and sharing resources and data among multiple client machines.

Single server sharing data with many machines

Does this look simplified enough? Yes of course!

So this setup looks feasible: we have a high-power, high-storage server that provides resources to many smaller machines over a network.

How to manage huge data - Servers

With the Internet of Things booming, information systems are overflowing with data; handling it needs more system resources, which means more dedicated servers are needed.

Many servers for different computing needs

Challenges of the many-servers approach:

Running several dedicated servers for specific services such as web, application, or database services, as indicated in Diagram (3), is difficult to administer, consumes more energy, resources, and manpower, and is highly expensive.

In addition, server resource utilization is very poor, resulting in resource wastage.

This is where simulating different environments and running them all on a single server is a smarter choice than running multiple physically distinct servers.

This is how Diagram (3) would change after consolidating different servers into one as shown in Diagram (4).


Servers after virtualization

Virtualization

What is Virtualization?

The single-server implementation above can be defined by the following term.

Virtualization is a technique that makes a single infrastructure resource (hardware and software) act as many, providing multiple functionalities or services, without the need to physically build, install, and configure each one.

In other words:

Running multiple simulated environments on a single machine, without physically installing and configuring each of them, is called virtualization.

Technically speaking:

Virtualization is an abstraction layer that shares the infrastructure resources among various simulated virtual machines without the need to physically set up these environments.

A single machine running multiple operating systems

Diagram (5) shows different virtual operating systems running on the same machine and using the hardware of the underlying machine.

What is a Virtual machine?

The simulated virtualized environments are called virtual machines (VMs).

A virtual machine is a replica/simulation of an actual physical machine.

A VM acts like a real physical machine and uses the physical resources of the underlying host OS.

A VM is a running instance of a real physical machine.

Need for virtualization

Now that we have an overview of virtualization, let us examine when we should virtualize and what the benefits of virtualization are.

  1. Better resource management and cost-effectiveness: as indicated in Diagrams (6) and (7), hardware resources are distributed on a need basis to different environments; all the virtual machines share the same resources, which reduces resource wastage.
  2. Ease of quick administration and maintenance: it is easier to build, install, and configure one server than multiple servers, and applying a patch to various machines from a single virtualized server is much more feasible.
  3. Disaster recovery: since all the virtualized machines reside on the same server and are treated as mounted volumes of data files, it is easier to back them up. In case of a disaster (power failure, network outage, cyber-attack, failed test code, etc.), VM snapshots are used to recover the running state of the machine, and the whole setup can be rebuilt within minutes.
  4. Isolated, independent, and secure test environments: virtualization provides an isolated, independent virtual environment in which to test legacy code, a vendor-specific product, a beta release, or even suspect code without affecting the main hardware and software platform. (There is a caveat here; we will discuss it further under types of virtualization.)
    Test environments such as dev, UAT, preprod, and prod can be easily created, tested, and discarded.
  5. Easily scalable and upgradable: building more simulated environments simply means spinning up more virtual machines, and upgrading VMs is as simple as running a patch across all of them.
  6. Portable: virtual machines are lightweight compared to the actual physical machines they replace; in addition, a VM that bundles its own OS, drivers, and other installation files can be moved to any machine, and its data can be accessed virtually from any location.


The screenshot of activity monitor below compares the CPU load:

Percentage of CPU resources without and with OS virtualization

Implementation 

a) What is hypervisor and its types?

As discussed in the previous section, virtualization is achieved by means of a virtualization layer on top of a hardware or software resource.

This abstraction layer is called a hypervisor.

A hypervisor is also known as a virtual machine monitor (VMM).

There are two types of hypervisors, shown in Diagram (8):

  1. Type-1 or bare-metal hypervisor
  2. Type-2 or hosted hypervisor

Type-1 or bare-metal hypervisor is installed directly on the system hardware, thus abstracting and sharing the hardware components with the VMs.

Type-2 or hosted hypervisor is installed on top of the system’s bootable OS, called the host OS; this hypervisor abstracts the system resources visible to the host OS and distributes them among the VMs.

Both have their own role to play in virtualization.

b) Comparing hypervisor types

Type-1 (bare-metal) hypervisor vs Type-2 (hosted) hypervisor:

  • Installation: Type-1 is installed directly on the infrastructure, is OS independent, and is more secure against software issues. Type-2 is installed on top of the host OS and is more prone to software failures.
  • Resource access: Type-1 offers better resource flexibility; it has direct access to the hardware infrastructure (hard-drive partitions, RAM, embedded cards such as NICs), provides more flexibility and scalability to the VMs, and assigns resources on a need basis. Type-2 has limited resource allocation; it can access just the resources exposed by the host OS, so its VMs have limited access to the hardware.
  • Failure impact: Type-1 is a single point of failure; a compromised VM may affect the kernel, so extra security layers are needed. With Type-2, a compromised VM may affect only the host OS; the kernel still remains unreachable.
  • Latency: Type-1 has low latency due to its direct link to the infrastructure. Type-2 has higher latency, as all the VMs have to pass through the OS layer to access the system resources.
  • Typical use: Type-1 is generally used on servers; Type-2 is generally used on small client machines.
  • Cost: Type-1 is expensive; Type-2 is less expensive.

Type-1 Hypervisors in market:

VMWare ESX/ESXi

Hyperkit (OSX)

Microsoft Hyper-V (Windows)
KVM (Linux)

Oracle VM Server

Type-2 Hypervisors in market:

Oracle VM VirtualBox

VMWare Workstation

Parallels Desktop for Mac

Type-1 and type-2 hypervisor

Types of virtualization

Based on what resource is virtualized, there are different classifications of virtualization.

Commonly virtualized resources include servers, storage devices, operating systems, and networks.

Desktop virtualization: the entire desktop environment is simulated and served from a single server to many users at once. Desktop virtualization allows administrators to manage, install, and configure similar setups on many machines, and upgrading all the machines with a single patch or security update becomes easier and faster.

Server virtualization: Many dedicated servers can be virtualized into a single server that provides multi-server functionality.

Example: many virtual machines can be built up sharing the same underlying system resources (storage, RAM, disks, CPU).

Operating system virtualization: this happens at the kernel level rather than through a hypervisor on the hardware; a single machine can boot and run multiple operating systems, such as Windows and Linux, side by side.

Application virtualization: apps are packaged and stored in a virtual environment and are distributed across different VMs. Examples: Microsoft applications such as Excel, Word, and PowerPoint, and Citrix applications.

Network functions virtualization: physical network components such as NICs, switches, routers, servers, hubs, and cables are consolidated in a single server and used virtually by multiple machines, without the overhead of installing them on every machine.

Virtualization is one of the building blocks and driving force behind cloud computing.

Cloud computing provides virtualized, need-based services, which has given the concept of virtualization a further boost.

The main cloud computing models/services are listed below:

SaaS – Software as a Service – end-user applications are maintained and run by service providers and are easily distributed to and used by end users without having to install them.

Top SaaS providers: Microsoft (Office suite, CRM, SQL Server databases), AWS, Adobe, Oracle (ERP, CRM, SCM), Cisco’s Webex, GitHub (Git hosting web service)

PaaS – Platform as a Service – the computing infrastructure (hardware/software) is maintained and updated by the service provider, and the user just has to run their product on top of this platform.

Top PaaS providers: AWS Elastic Beanstalk, Oracle Cloud Platform (OCP), Google App Engine

IaaS – Infrastructure as a Service – provides infrastructure such as servers, physical storage, networking, and memory devices. Users build their own platform with a customized operating system and applications.

Key IaaS providers: Amazon Web Services, Microsoft Azure, Google Compute Engine, Citrix

Conclusion:

We now have a fair understanding of types of virtualization and how they are implemented.

Containerization

Though virtualization has its pros, it also has certain downsides:

  • Not all systems can always be virtualized.
  • A corrupt VM is sometimes contagious and may affect other VMs, or even the kernel in the case of a Type-1 (bare-metal) hypervisor.
  • Virtual disk latency increases as a higher number of VMs puts a greater load on CPU resources.
  • Unstable performance

An alternative approach that overcomes these flaws of virtualization is to containerize the application and its run-time environment together.

What is containerization?

Containerization is OS-level virtualization, wherein the entire build of an application along with its run-time environment is encapsulated or bundled into a package.

These packages are called containers.

Containers are lightweight virtualized environments that are independent of the underlying hardware and software infrastructure.

The run-time environment includes the operating system, binaries, libraries, configuration files and other applications as shown in Diagram (9).

Packaged code

What is Docker?

Docker provides an excellent framework for containerization and lets you build, ship, and run distributed applications across multiple platforms.

The Docker framework is set up by installing the Docker engine on the host OS; a Docker daemon (background process) is then started that manages the containers.

Docker architecture

Refer to Diagram (10), which shows a Docker engine with 3 containers residing on the host OS (macOS).

An instruction file called a Dockerfile is written with a set of commands that change the filesystem: add, copy, or delete files, run commands, install utilities, make system calls, and so on.

This Dockerfile is built, and the application is packaged along with its run-time environment into an executable artifact called a Docker image.
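As an illustration, a minimal Dockerfile might look like the sketch below (the base image, paths, and tag are illustrative, not taken from this article; myApp.sh is the script used later in the walkthrough):

# Dockerfile: bundle a script together with its run-time environment
FROM ubuntu:14.04                      # base image layer
COPY myApp.sh /usr/src/app/myApp.sh    # add application files (creates a new layer)
RUN chmod +x /usr/src/app/myApp.sh     # run a build-time command (creates a new layer)
CMD ["/usr/src/app/myApp.sh"]          # default command when a container starts

Build it into an image with:

docker build -t myrepo/myapp:v1 .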

Docker daemon services run these images to create docker containers.

A Docker container is a run-time instance of an image.

It is fair to say that many image layers (one per instruction) make up a container.

Docker containers have a compact packaging and each container is well isolated.

We can run, start, stop, attach to, move, or delete containers, as these run as processes on the host OS.

Each image is made up of different layers, each one built on top of the previous layer with the command changes that we make.

Every time we change the filesystem, the change is encapsulated in a new filesystem layer and stacked on top of the parent image.

Only the changed layers are rebuilt; the rest of the unchanged image layers are reused.

Certain Dockerfile instructions (ADD, RUN, and COPY) create a new layer with a non-zero byte size; the remaining instructions simply add a new layer of zero bytes.

These layers are reused to build new images, which keeps builds fast and images lightweight.

This layered approach, where every change to the image is stored as a new layer, also makes it possible to version control Docker images.
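Since every change is stored as a new layer, the layer stack of an image can be inspected directly, which is one way to view its history. A quick sketch using standard Docker commands (the image name is the one pulled later in this walkthrough):

docker history divyabhushan/learn_docker:myApp_ubuntu_14.04   # list each layer, its size, and the instruction that created it
docker images                                                  # list local images and their tags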

Here is a terminal recording that shows the Docker engine process and how images and containers are created.

Docker documentation - to create containers.


Code -> package -> build images -> registry hub -> download/pull image -> run container
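Expressed as commands, that flow looks roughly like this (a sketch; the repository and tag names are illustrative):

docker build -t myrepo/myapp:v1 .        # code + Dockerfile -> image
docker push myrepo/myapp:v1              # image -> registry hub (Docker Hub)
docker pull myrepo/myapp:v1              # download/pull the image on another machine
docker run --name myapp myrepo/myapp:v1  # run the image to create a container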

Docker architecture


Let’s consider the Docker image divyabhushan/learn_docker hosted on Docker Hub.

Latest tagged image: centOS_release1.2

What is the container environment?

Base OS: Centos:7

Utilities: vim, yum, git

Apps/files: Dockerfile, myApp.sh, runtests.sh, data and other supporting files.

Git source code: dockerImages

Download as: git clone https://github.com/divyabhushan/DockerImages_Ubuntu.git

What does the container do?
The container launches “myApp.sh” in an Ubuntu:14.04 environment, runs some scripts along with a set of post-test suites inside the container, and saves the output log file.

How to modify and build your own app

Step 1: pull 

1.1: Pull the docker image



1.2: Run image to create a container and exit


Step 2: modify

2.1: Start the container

2.2: Attach to the container and make some changes

Step 3: commit

3.1: Examine the history logs and changes in the container

3.2: Commit the changes in container

Step 4: push

4.1: Push new image to docker hub

Let us see the steps in action:

Step 1: pull the Docker image on your machine

1.1: Pull the docker image

Command:

docker pull divyabhushan/learn_docker:myApp_ubuntu_14.04

View the image on the system:

docker images


1.2: Run the image to create a container and exit

Command:

docker run -it --name ubuntu14.04 0a6f949131a6

Run a command in the Ubuntu container and exit; the container is stopped on exiting.

View the stopped container with the ‘docker ps -a’ command.

Step 2: modify

2.1: Start the container

Command:

docker start <container_id>

Now the container is listed as a running process.

2.2: Attach to the container and make some changes

Command:

docker attach 7d0d0225778c

Edit the ‘git configuration’ file and the ‘myApp.sh’ script.

The container is modified and stopped.

Step 3: commit

Examine the history logs and changes in the container


The changes done inside the container filesystem can be viewed using the ‘docker diff’ command as:

Command: 

docker diff 7d0d0225778c


Commit the changes in container

Docker commit:

Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

docker commit -m 'new Ubuntu image' 7d0d0225778c divyabhushan/learn_docker:ubuntu14.04_v2

A new image is created and listed.

Step 4: push

Push new image to docker hub

Command:

docker push divyabhushan/learn_docker:ubuntu14.04_v2


Point to note: only the latest commit’s change layer ‘50a5ce553bba’ has been pushed; the other layers were reused.

Image available on docker hub:


The latest tagged image can now be pulled from other machines and run to create the same container environment.

Conclusion: an image was pulled and run to create a container that replicates the environment. The container was modified, and the changes were committed to form a new image. The new image was pushed back to Docker Hub and is now available as a new tag, ready to be pulled by other machines.

Difference between Dockers and Virtual machines

The differences on various parameters are summarized below.

Architecture
VMs: Hardware-level virtualization. Each VM has its own copy of an OS.
Docker: Software-level virtualization. Containers have no OS of their own; they run on the host OS.

Isolation
VMs: Fully isolated.
Docker: Process- or application-level isolation.

Installation
VMs: The hypervisor can run directly on the hardware resources or on the host OS.
Docker: The Docker engine is installed on top of the host OS and a Docker daemon process is initiated; there is no separate OS for every container.

CPU processing and performance
VMs: Slower. A VM contains the entire run-time environment, which has to be loaded every time; it uses more CPU cycles and gives unstable performance.
Docker: Faster. Images are pre-built and share host resources, so running an image as a container is lightweight, consumes fewer CPU cycles, and gives stable performance.

Hardware storage
VMs: More storage space, as each VM is an independent machine (OS). Example: 3 VMs of 800 MB each take 2.4 GB of space.
Docker: Containers are lightweight since they do not need to load an OS and drivers; they run on the host OS as processes.

Portability
VMs: Dependency on the host OS and hardware makes a VM less portable. Importing a VM still requires manual setup of storage, RAM, and network.
Docker: Highly portable, since containers are lightweight and have no dependency on hardware.

Scalability and code reusability
VMs: Spinning up more VMs still needs administrative tasks such as distributing resources to each VM. Running a new machine puts extra load on system resources, and re-managing earlier VMs becomes a task. Every VM keeps its own copy of resources, so code reusability is poor.
Docker: Spinning up new containers simply means running pre-built images as processes on the host OS. Containers can also be configured on the fly by passing parameters at run time. A single image can be used to create many containers, which encourages reuse.

Resource utilization
VMs: Static allocation results in resource wastage when VMs are idle or when a VM’s resource requirement increases.
Docker: Resources are dynamically allocated and de-allocated on a need basis by the Docker engine.

System prune or garbage collection
VMs: Virtual machines do not have a built-in prune mechanism; they have to be administered manually.
Docker: Images and containers can be pruned, which frees up a sensible amount of storage, memory, and CPU cycles.

New environment
VMs: Creating a new VM from scratch is a tedious, repetitive task. It involves installing a new OS, loading kernel drivers, and setting up other tools and configuration.
Docker: Package the code and dependency files, build them into an image, and run the image to create a new container. Use an existing or base image (for example, scratch on Docker Hub) to create more containers on the go.

Web-hosted hub
VMs: No web-hosted hub for VMs.
Docker: Docker Hub provides an open, reliable, trusted source of pre-built images that can be downloaded to run new containers.

Version control (backup, restore, track history)
VMs: Snapshots of VMs are not very user-friendly and consume more space.
Docker: Images are version controlled. Every delta in a container can easily be viewed (demo: docker diff <container_id>). Any change in the image is stored as a different layered version, and references to older layers save build time and space.

Auto-build
VMs: Automating the creation of VMs is not very feasible.
Docker: Images can be auto-built from every source-code check-in to GitHub (automated builds on Docker Hub).

Disaster recovery
VMs: Tedious to recover from VM backup files.
Docker: Easier to restore images, just like version-controlled source files; backup images only have to be run to create containers.

Update
VMs: All the VMs have to be updated with the release patch.
Docker: A single image is updated, rebuilt, and distributed across multiple platforms.

Memory usage and speed
VMs: Slower. An entire snapshot of the machine and its OS is loaded into memory.
Docker: Real-time and fast, thanks to pre-built images. Only the instance, i.e. a container, has to be run as a process, and it uses memory like an executable.

Data integrity
VMs: Behavior may change if a dependency lies beyond the VM boundary (for example, an app that depends on production host network settings).
Docker: Apps behave the same in any environment.

Security
VMs: More secure. A failure inside a VM may reach its guest OS but not the host OS or other virtual machines (a Type-2 hypervisor, though, carries a risk of kernel attack).
Docker: Less secure. If a container is compromised, the underlying OS, and hence all containers, may be affected, since they share the same host kernel; the OS kernel may also be at risk.

Key providers
VMs: Red Hat KVM, VMware, Oracle VM VirtualBox, Microsoft Hyper-V, Citrix XenServer.
Docker: Docker, Google Kubernetes Engine, AWS Elastic Container Service.

Data authentication
VMs: Rely on numerous software licenses.
Docker: Docker maintains inbuilt content trust to verify published images.
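The prune mechanism mentioned above can be invoked with standard Docker CLI commands, for example:

docker container prune   # remove all stopped containers
docker image prune       # remove dangling images
docker system prune      # remove stopped containers, unused networks, dangling images, and build cache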

Architecture comparison

When to use a VM or Docker

When the need is an isolated OS, go for VMs.

For a hardware- and software-independent, isolated application that needs fast distribution across multiple environments, use Docker.

  • Docker use-case:

Example: A database application along with its database

Consider the docker image - Oracle WebLogic Server on Docker Hub.

This image is a pre-built Oracle WebLogic Server runtime environment, including Oracle Linux 7 and Oracle JDK 8, for deploying Java EE applications.

To create a server configuration on any machine, just download this image and run it to create and start a container.

There is no need to install and configure JDK, Linux or other run-time environment.
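As a rough sketch, assuming the image is published on Docker Hub under a repository named oracle/weblogic (the exact repository name and tag may differ; check the image’s Docker Hub page):

docker pull oracle/weblogic                      # download the pre-built WebLogic runtime image (repository name assumed)
docker run -d --name weblogic oracle/weblogic    # start a container with the server runtime; no local JDK or Linux setup needed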

  • Do not use Docker use-case:

The application depends on a utility outside the Docker container.

For example: code is developed on a dev machine whose base OS is macOS, but it needs certain firewall settings that exist on, say, an Ubuntu OS.

How can the code be tested against the production Ubuntu firewall while running from a Docker container on macOS?

Solution: install virtualization software on the macOS host and create a VM whose guest OS is Ubuntu (the same as the production environment).

Configure the desired firewall settings on the Ubuntu VM, import the test code into it, and test.

  • Use a VM:

For Embedded systems programming, a VM is installed that connects to the system device drivers, controllers and kernel.

  • Virtualization used along with docker:

An extension of the previous scenario: suppose you also want to test your Python application in the Ubuntu VM without having to set up the Python executable and its libraries and binaries.

All you have to do is install the Docker engine for Ubuntu and pull the Python image from Docker Hub:

docker pull python:<tag>   [ <tag> is the Python version; choose the appropriate version ]

docker pull python:2.7

Refer: Python image

Either write a Dockerfile to copy the entire source code into the Python environment (see the sketch after the command options below), or directly run the image, passing the script path as shown below:

Command:

$ docker run -it --name my-python-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2.7 python my-application.py

Command options:

-v: bind-mount a volume (here, the present working directory is mounted onto /usr/src/myapp inside the container)

-w: the working directory inside the container
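The Dockerfile route mentioned above would look roughly like this (a sketch; the file and image names are illustrative, with my-application.py being the script from the command above):

# Dockerfile: copy the source code into the Python image and run the app
FROM python:2.7
WORKDIR /usr/src/myapp
COPY . /usr/src/myapp
CMD ["python", "my-application.py"]

Then build and run it:

docker build -t my-python-app .
docker run --rm my-python-app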

Moreover, you can also test your Python code against more than one Python version by downloading different Python images, running them to create different containers, and running your app in each container.

What’s exciting here is that once the code has been tested in each Python environment, you can quickly act on the test results and drop the containers, deploying the code to production only after it has been tested against the various Python versions.
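A minimal sketch of that multi-version test, reusing the bind-mount approach from above (the version tags are examples):

for v in 2.7 3.6; do
  docker pull python:$v
  docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:$v python my-application.py
done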

Final thoughts

VMs and Docker are compatible with each other; Docker is not here to replace virtual machines.

Both serve the same purpose of virtualizing the computing and infrastructure resources for optimized utilization.

Using virtual machines and Docker together can yield even better results in virtualization.

When you need a fast, lightweight, portable, and highly scalable hardware-independent environment to isolate multiple applications, and security is not the major concern, Docker is the best choice.

Use a VM for embedded systems that are integrated with hardware, such as device driver or kernel coding.

For a scenario simulating an infrastructure setup with tight resource control and dependency on system resources, VMs are the better choice.

Use of Docker inside a VM

CI/CD pipelines scenario:

Virtualization enables a smooth CI/CD process flow by letting users concentrate on developing code on a working system that is set up for automated continuous integration and deployment, without having to duplicate the entire setup each time.

A virtualized environment is set up, using either a VM or a Docker image, that takes care of the automatic code check-ins, builds, regression testing, and deployments on the server.
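For illustration, a Docker-inside-VM CI step might look roughly like the sketch below (the registry URL, image name, and build variable are placeholders; runtests.sh is the test script mentioned earlier in this article):

# executed on a build VM that has the Docker engine installed
docker build -t registry.example.com/myapp:$BUILD_ID .                  # build the image from the checked-in code
docker run --rm registry.example.com/myapp:$BUILD_ID ./runtests.sh      # run regression tests inside a throwaway container
docker push registry.example.com/myapp:$BUILD_ID                        # publish the image for deployment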


Divya Bhushan

Content developer/Corporate Trainer

  • Content Developer and Corporate Trainer with a 10-year background in Database administration, Linux/Unix scripting, SQL/PL-SQL coding, Git VCS. New skills acquired-DevOps and Dockers.
  • A skilled and dedicated trainer with comprehensive abilities in the areas of assessment, requirement understanding, design, development, and deployment of courseware via blended environments for the workplace.

  • Excellent communication, demonstration, and interpersonal skills.

Website : https://www.knowledgehut.com/tutorials/git-tutorial

Join the Discussion

Your email address will not be published. Required fields are marked *

3 comments

saurabh 13 May 2019 1 likes

Well Written, thanks

Navneet 14 May 2019 1 likes

Excellent article ... concept of dockers is well articulated and explained.

Hugo 21 Jun 2019

I absolutely love your blog and find the majority of your post's to be exactly what I'm looking for

Suggested Blogs

Best Practices For Hiring DevOps Engineer

DevOps, as the name suggests, has originated from the expressions ‘software DEVelopment’ and ‘information technology OPerationS’. It describes the methodology required for continuous collaboration between the software developers and IT operations professionals. According to the market research firm Technavio,  Worldwide DevOps market will expand at a compound annual growth rate of 19 percent through 2020. The success methodologies in DevOps create intense competitions in the hunt for DevOps talent. To begin with, let’s understand the best practices while hiring a DevOps professionals. Articulating your own DevOps vision The meaning of DevOps is different for different people. It becomes really essential to know the vision of DevOps within the organization you work. DevOps goals are basically to automate its processes in order to speed up the deployment process. This can commence only after getting the clarity about how you want DevOps to be applied to your company. You should actively start searching for DevOps engineer. If the goal of the selected DevOps engineer does not align with organizational goals, then there are chances that their employment with you will be short-lived.| Getting the right skills Defining the ideal skills for a DevOps engineer is really difficult. For technical DevOps skills, knowledge on administration, virtualization experience, coding skills, and a strong IT background are needed. On the soft-skills side, a DevOps engineer should be interactive, communicative and should be service oriented, bringing maximum value to the project. Finding what favors DevOps Like any other IT engineer, the DevOps professionals can also join groups and forums where they can share their ideas and develop networks benefiting their career. The social media platform Linkedin is one of the best places where you can find a DevOps engineer. You can surely take help of your own IT staff for finding DevOps engineer. They could be spotted in DevOps conferences, groups and on social media outlets. Take help from professional IT employment agency The process of finding a DevOps engineer is sometimes a bit difficult. Using an IT employment agency is worthwhile, since they will help you find one for a specific period of time. In due course of time, you will create new channels in the DevOps professional market. Do not limit yourself geographically Many larger companies will research both nationwide and internationally for the perfect talent to fill in the key positions in their organization. But the only thing that is discouraged is their relocation. If your organization is ready to hire an expensive DevOps engineer, then you should start preparing to cast them geographically and making the company ready for the funding is also necessary. Recruiting the correct candidate is never an easy job. But successfully implementing it would be vital for the company growth. A qualified candidate is always a hot commodity for the market. Follow these basic ethical practices, and you will find the DevOps engineer who is a right fit for your organization.
Best Practices For Hiring DevOps Engineer

DevOps, as the name suggests, has originated from ... Read More

5 Benefits Of Getting A Devops Foundation

DevOps is a popular term in the IT world today and is a combination of development and operations. Engineers from both the fields work together to offer clients what they’re looking for—from design and development to implementation and support. An Insight Into The World of DevOps DevOps was created out of ‘Agile’ software development where software solutions and requirements evolve from the collaboration of different cross-functional teams. Agile software development promotes evolutionary development, adaptive planning, continuous improvement, and early delivery and also promotes flexible and fast response to changes. DevOps and Agile are somewhat similar but differ in various aspects. Agile is nothing but a set of principles followed while developing software. DevOps takes into account into account the operational and functional aspect of development as well. Many developers consider DevOps to be an extension or subset of Agile. DevOps Training & Certification The popularity of DevOps has increased significantly since its inception and there is a huge demand for DevOps certified professionals in the industry. Number of institutes have come up that offer DevOps certification and solid training. There are three certifications related to DevOps—Foundation, Certified Agile Service Manager, and Certified Agile Process Owner. Here, we’ll look at the Foundation certification in detail and the benefits associated with it. Learning Objectives of DevOps Foundation The learning objectives include getting an understanding of the following: ● DevOps objectives and vocabulary related to it ● How DevOps benefits a business? ● Concept and practices in DevOps and its relation to Agile, ITSM (IT Service Management), and Lean ● Performance measures and results ● Culture, collaboration, and communication ● Key performance indicators and critical success factors ● Real-life results and examples Foundation is the introductory DevOps Certification and certified individuals are able to implement the practices and concepts of DevOps and improve workflow and communication in the organisation. DevOps Foundation certification is a great option for individuals involved in areas of service and product lifecycle. It is also useful for managers and employees working in IT firms and bolsters the designing and development processes. DevOps Foundation certification can also help developmental consultants and external and internal suppliers. Getting this certification renders a solid understanding of everything related to DevOps and makes you more desirable in the job market. Candidates need to have sound IT and service management knowledge and are required to complete a 16 hour instructor-led course from an institution offering DevOps courses. Let’s look at the benefits of earning this certification. Benefits of DevOps Certification Benefits of DevOps Foundation certification are immense and some of them include: 1. Better Job Opportunities DevOps is more or less a very new concept in the industry and more and more companies are deploying DevOps practices. There is a dearth of certified professionals who can bring in their DevOps expertise to organisations. A DevOps certification will expand your horizon as an IT professional and better job opportunities will come your way. 2. Improved Skills & Knowledge The DevOps ideology encourages a complete new way of thinking and decision-making. The business and technical benefits of DevOps are many and you learn how to implement them in your organization. 
You learn to work in a team of cross-functional members: QA, developers, operations engineers, and business analysts.

3. Increased Salary
According to a recent survey, DevOps-certified professionals are among the highest paid in the IT industry. Market demand is increasing rapidly with wider implementation worldwide, and this trend is not going to change any time soon.

4. Increased Productivity & Effectiveness
With a DevOps certification, your productivity as an IT professional will increase. In a typical IT environment, a lot of time is wasted waiting for other people and other software. Everyone likes to be productive at work, and the time spent waiting is sure to cause frustration. With DevOps, you can get rid of this unsatisfying part of your job and spend the time adding more value to your company and your colleagues.

5. Benefit Your Organisation
By earning a DevOps certification, you can offer your organisation many measurable benefits. The DevOps ideology promotes increased collaboration and communication between the operations and development teams. Code is released into production more frequently thanks to a shorter development cycle; what took 3 to 6 months before can take only a few hours with a DevOps implementation. Defect detection also becomes easier.

So, be among the first in your peer group to get a DevOps Foundation certification and climb the career ladder rapidly. Make sure to choose an accredited training organisation.

Best Practices For Successful Implementation Of DevOps

What is DevOps?

DevOps is a combination of processes and philosophies built on four basic components: culture, collaboration, tools, and practices. Together these produce an automated system and infrastructure that help an organisation deliver quality, reliable builds. The beauty of this culture is that it enables organisations to serve their customers better and compete more effectively in the market, while adding promised benefits such as confidence and trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.

"DevOps is not a goal, but a never-ending process of continual improvement." – Jez Humble

Here are the key DevOps best practices that can help you implement DevOps successfully.

1. Understand your infrastructure needs: Before building the infrastructure, spend time understanding the application, then align your goals with the infrastructure design; the implementation of DevOps should be business-driven. While assessing the infrastructure, make sure you capture the components below (a minimal infrastructure-as-code sketch follows this list).

Cycle time: Your software cycle needs to be defined in a generic way, so that you know its limitations and capabilities, and any downtime is recorded precisely.

Versioning environments: While planning DevOps, always be ready with an alternative solution; versioning your environments lets you roll your plan forward or back. If you have multiple, tightly coupled modules, you need a clean plan for identifying each patch and release.

Infra as code: Both needs above, minimizing cycle time and versioning environments, can be addressed by capturing and managing your infrastructure as code. What you build should be scalable for the long run.
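To make the "infra as code" idea concrete, here is a minimal sketch in Python. It only illustrates the principle of describing environments as versioned data rather than configuring them by hand; the version names and fields are hypothetical, and a real setup would use a dedicated IaC tool (Terraform, Pulumi, CloudFormation, and so on) rather than print statements.

    # Minimal sketch: environments described as version-controlled data.
    # Version names and fields are hypothetical illustrations only.
    ENVIRONMENTS = {
        "v1": {"web_servers": 2, "instance_type": "small", "db_replicas": 1},
        "v2": {"web_servers": 4, "instance_type": "medium", "db_replicas": 2},
    }

    def apply_environment(version: str) -> None:
        """Show the 'plan' for a given environment version.

        A real IaC tool would create or update the actual cloud resources here;
        this sketch only shows that rolling back is simply re-applying an
        earlier, known-good version of the definition.
        """
        env = ENVIRONMENTS[version]
        print(f"Applying {version}: {env['web_servers']} x {env['instance_type']} "
              f"web servers, {env['db_replicas']} DB replica(s)")

    if __name__ == "__main__":
        apply_environment("v2")   # roll forward to the new definition
        apply_environment("v1")   # roll back to a known-good version

Because the definitions live in version control alongside the application code, every patch and release can be tied to the exact environment it was validated against.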
2. Don't jump-start: There is no need to automate the complete cycle in one shot; take a small piece, apply your philosophy, and get it validated. Once you feel the proof of concept is justified, start scaling up, create a complete pipeline, and define a process so that at any time you can go back and check what needs to improve and where. These small successes build confidence within your team and trust with stakeholders and customers.

"DevOps isn't magic, and transformations never happen overnight."

3. Continuous Integration and Continuous Deployment: If your team is not planning to implement continuous integration and continuous delivery, it is not really doing DevOps. The beauty of DevOps is how frequently your team can deliver without disruption and how much of the process is automated. Consider a use case: you and your team members work in an Agile team; in fact, there are multiple teams and multiple tightly coupled modules in which you are involved. Every day you work on your stories, and at the end of the day you push a 'private build' of your work to verify that it builds and 'deliver' it to a team build server; the same applies to every other individual. In other words, you all 'integrate' your work in a common build area and do an 'Integration Build'. Doing these integrations and builds, and verifying them on a regular (preferably daily) basis, is what is known as Continuous Integration (a minimal build-script sketch follows below).

Continuous Deployment doesn't mean every change is deployed to production as soon as possible; it means every change is proven to be deployable at any time. It takes all the validated features and builds from CI and deploys them into the production environment. Here we can follow some of these practices: a) Maintain a staging environment that emulates production. b) Always deploy to staging first, then move to production. c) Automate testing of features and non-functional requirements. d) Automatically fetch version-controlled development artifacts.
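As a rough illustration of the daily integration build described above, the Python sketch below runs a sequence of build steps (pull the latest changes, run the tests, package an artifact) and stops at the first failure. The specific commands are placeholders; substitute whatever build and test tooling your team actually uses.

    # Minimal sketch of a daily 'integration build': each step must pass
    # before the build is considered deliverable to the team build server.
    # The commands are placeholders; substitute your own tooling.
    import subprocess
    import sys

    STEPS = [
        ["git", "pull", "--ff-only"],      # integrate the latest team changes
        ["python", "-m", "pytest", "-q"],  # run the automated test suite
        ["python", "-m", "build"],         # package a build artifact (placeholder)
    ]

    def run_integration_build() -> int:
        for step in STEPS:
            print("running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                print("Integration build FAILED at:", " ".join(step))
                return 1
        print("Integration build passed; artifact is ready to deliver.")
        return 0

    if __name__ == "__main__":
        sys.exit(run_integration_build())

In a real pipeline the same steps would typically be triggered automatically by the CI server on every push, so a broken build is caught within minutes rather than at the end of the iteration.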
4. Define performance and do benchmarking: Always do performance testing and get a collective benchmarking report for the latest build shared by your team, because this is what justifies both the quality of your build and the infrastructure it requires.

For example, we ran a performance test a few days back and got good results. We did some benchmarking for our CFM machines because we have a global footprint, latency matters to us, and we need a CFM in the nearest region. We verified how many requests our current build could handle and found we were firing more than 200 RPS (requests per second). We then planned to test the build's capacity, fired a much larger number of requests, noted the point at which the build crashed and the RPS at that point, and then set up autoscaling for the CFM machines. We could simply have upgraded the CFM, but we chose autoscaling because the request volume was an assumption: we did not want to pay for capacity we might not use, yet we still wanted to be able to absorb the experimental traffic. We then found that only 2 of the 7 CFM machines were handling close to the expected configuration and load (181 to 191 RPS), so we shared a report with the business team asking them to focus on other regions where we had very little traffic but were paying the same amount. The outcome: verifying the build gave our dev team good confidence, the report helped the business team plan their marketing strategies, and meanwhile we completed the autoscaling work.

5. Communicate and collaborate: Collaboration and communication are the X-factors that help an organisation grow and assess its readiness for DevOps. Collaboration with the business and development teams helps the DevOps team understand how to design and define the culture. This speeds up development, operations, and even other teams like marketing or sales, allowing all parts of the organisation to align more closely with goals and projects.

6. Start documenting: Document everything you do across the process and infrastructure, especially reports, RCAs (root cause analyses), and change management. This lets you go back and see whether the issues you faced can be automated away in the next cycle, or handled in other ways without interrupting your production environment.

7. Keep your eyes on cost burn: Experience shows that if you don't keep an eye on cloud bills, they keep increasing roughly in proportion to the growth of your business until you look for optimisation. Audit your cloud usage every two months and evaluate your cloud computation for optimisation opportunities. Experiment with the infrastructure: even if you depend on it completely, you should not be spending more than 5 to 10% of your costs on cloud infrastructure. Tools you can try: Reoptimize, Cloudyn, Orbitera, etc.

"If you are DevOps, you should account for the numbers."

8. Secure your infra: If your team follows the relevant compliance standards from day 1, there is far less chance of your data being compromised, and this is easily enabled by providing a setup in which you can check for vulnerabilities. Before handing your build to the production team, follow the standards at an early stage of development using configured tools such as SonarQube, Veracode, Codacy, CodeClimate, etc.

9. Tool selection: Always select tools that are compatible with the rest of the toolchain you plan to use; you need to be careful here because you have to capture each and every request. Once the tools are selected, draft the metrics you want to capture or that will help you debug. Start logging and monitoring them, and have clear definitions for those logs so you can determine that your processes are working as expected (a small structured-logging sketch follows below). Tools you can look at: Nagios, Grafana, Pingdom, Monit, OpsGenie, Observium, Logstash, etc.

Tool chain for the DevOps process: (diagram)

"If you are not monitoring, you are not in production."
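To illustrate the "clear definition for those logs" point, here is a small Python sketch that emits one structured (JSON) log record per event using only the standard library. The service name and fields are hypothetical; the idea is simply that every event carries the same well-defined fields, so whichever log pipeline and monitoring tools you choose can parse and alert on them reliably.

    # Minimal sketch: structured logging with a fixed set of fields,
    # so the log/monitoring toolchain can parse every record the same way.
    import json
    import logging
    import time

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("checkout-service")  # hypothetical service name

    def log_event(event: str, **fields) -> None:
        """Emit one JSON record per event with a consistent base schema."""
        record = {"ts": time.time(), "service": "checkout-service", "event": event, **fields}
        log.info(json.dumps(record))

    log_event("request_handled", path="/api/checkout", status=200, latency_ms=42)
    log_event("request_handled", path="/api/checkout", status=500, latency_ms=1310)

With records like these, it is straightforward to define dashboards and alerts (error rates, latency percentiles) in whichever monitoring stack you standardise on.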
Conclusion: An organisation that follows all of the above best practices creates the right culture, which ultimately gets the ending it deserves: a DevOps organisation. "A good DevOps organization will free up developers to focus on doing what they do best: write software," says Rob Steward, Vice President of Product Development at Progress Software. "DevOps should take away the work and worry involved in deploying, securing and running the software once it is written."
