
Docker Vs Virtual Machines(VMs)


Let’s have a quick warm-up on resource management before we dive into the discussion of virtualization and Docker.

In today’s multi-technology environments, working on different software and hardware platforms simultaneously has become inevitable.

The need to run many different machine platforms (desktops, laptops, handhelds, and servers) with customized hardware and software requirements has given rise to a whole new world of virtualization in the IT industry.

What does a machine need?

Each computing environment (machine) needs its own set of hardware and software resources.

As more and more machines are needed, building and administering many such stand-alone machines is not only cumbersome and time consuming but also adds cost and energy.

A better idea is to consolidate all the hardware and software requirements in one place: run a single customized, high-powered, scalable server and have it distribute resources to many machines over a network.

That saves time, resources, energy, and money.

A Server with many hardware components installed in a datacenter

These gigantic servers are housed in a facility called a datacenter.

Diagram (2) below shows a single server serving and sharing resources and data among multiple client machines.

Single server sharing data with many machines

Does this look simple enough? Yes, of course!

So this setup looks feasible: we have a high-powered, high-storage server that provides resources to many smaller machines over a network.

How to manage huge data - Servers

With the Internet of Things booming, information systems are overflowing with huge amounts of data; handling it needs more system resources, which means more dedicated servers are needed.

Many servers for different computing needs

The challenge with the many-servers approach:

Running several dedicated servers for specific services, such as a web, application, or database service as indicated in Diagram (3), is difficult to administer, consumes more energy, resources, and manpower, and is highly expensive.

In addition, resource utilization of these servers is very poor, resulting in resource wastage.

This is where simulating different environments and running them all on a single server is a smart choice, rather than having to run multiple physically distinct servers.

This is how Diagram (3) would change after consolidating different servers into one as shown in Diagram (4).


Servers after virtualization

Virtualization

What is Virtualization

The single-server implementation above can be defined by the following term.

Virtualization is a technique that makes a single infrastructure resource (hardware and software) act as many, providing multiple functionalities or services, without the need to physically build, install, and configure separate machines.

In other words:

Running multiple simulated environments on a single machine without separately installing and configuring each of them is called virtualization.

Technically speaking:

Virtualization is an abstract layer that shares the infrastructure resources among various simulated virtual machines without the need to physically set up these environments.

A single machine running multiple operating systems

Diagram (5) shows different virtual operating systems running on the same machine, all using the hardware of the underlying machine.

What is a Virtual machine

These simulated, virtualized environments are called virtual machines, or VMs.

A virtual machine is a replication/simulation of an actual physical machine.

A VM acts like a real physical machine and uses the physical resources of the underlying host OS.

A VM is a running instance of a real physical machine.

Need for virtualization

Now that we have an overview of virtualization, let us examine when we should virtualize and what the benefits of virtualization are.

  1. Better resource management and cost-effectiveness: as indicated in Diagrams (6) and (7), hardware resources are distributed wisely, on a need basis, to different environments; all the virtual machines share the same resources, reducing resource wastage.
  2. Ease of quick administration and maintenance: It is easier to build, install, and configure one server than multiple servers. Updating a patch on various machines from a single virtualized server is much more feasible.
  3. Disaster recovery: Since all the virtualized machines reside on the same server and are treated as mounted volumes of data files, it is easier to back up these machines. In case of a disaster (power failure, network outage, cyber-attack, failed test code, etc.), VM snapshots are used to recover the running state of the machine, and the whole setup can be rebuilt within minutes.
  4. Isolated and independent secure test environment: virtualization provides an isolated, independent virtual test environment to test legacy code, a vendor-specific product, a beta release, or even corrupt code without affecting the main hardware and software platform. (This is a contradictory statement, though; we will discuss it more under types of virtualization.)
    Test environments such as dev, UAT, preprod, and prod can be easily created, tested, and discarded.
  5. Easily scalable and upgradable: Building up more simulated environments simply means spinning up more virtual machines. Upgrading VMs is as simple as running a patch on all of them.
  6. Portable: Virtual machines are lightweight compared to actual physical machines; in addition, a VM that includes its own OS, drivers, and other installation files is portable to any machine. One can access the data virtually from any location.


The screenshot of activity monitor below compares the CPU load:

Percentage of CPU resources without and with OS virtualization

Implementation 

a) What is a hypervisor and what are its types?

As discussed in the previous section, virtualization is achieved by means of a virtualization layer on top of a hardware or software resource.

This abstract layer is called a hypervisor.

A hypervisor is also known as a virtual machine monitor (VMM).

There are two types of hypervisors, as shown in Diagram (8):

  1. Type-1 or bare-metal hypervisor
  2. Type-2 or hosted hypervisor

Type-1 or bare-metal hypervisor is installed directly on the system hardware, thus abstracting and sharing the hardware components with the VMs.

Type-2 or hosted hypervisor is installed on top of the system bootable OS, called the host OS; this hypervisor abstracts the system resources visible to the host OS and distributes them among the VMs.

Both have their own role to play in virtualization.

b) Comparing hypervisor types

Type-1 (bare-metal) hypervisor vs Type-2 (hosted) hypervisor:

  • Installation: A Type-1 hypervisor is installed directly on the infrastructure, is OS independent, and is more secure against software issues. A Type-2 hypervisor is installed on top of the host OS and is more prone to software failures.
  • Resource access: Type-1 has direct access to the hardware infrastructure (hard-drive partitions, RAM, embedded cards such as the NIC), providing more flexibility and scalability to the VMs and assigning resources on a need basis. Type-2 has access only to the resources exposed by the host OS, so its VMs have limited access to the hardware allocated and exposed by the host OS.
  • Failure impact: With Type-1 there is a single point of failure: a compromised VM may affect the kernel, so extra security layers are needed. With Type-2, a compromised VM may affect only the host OS; the kernel still remains unreachable.
  • Latency: Type-1 has low latency due to its direct link to the infrastructure. Type-2 has higher latency, as all the VMs have to pass through the OS layer to access system resources.
  • Typical use: Type-1 is generally used on servers; Type-2 is generally used on small client machines.
  • Cost: Type-1 is expensive; Type-2 is less expensive.

Type-1 hypervisors in the market:

  • VMware ESX/ESXi
  • HyperKit (macOS)
  • Microsoft Hyper-V (Windows)
  • KVM (Linux)
  • Oracle VM Server

Type-2 hypervisors in the market:

  • Oracle VM VirtualBox
  • VMware Workstation
  • Parallels Desktop for Mac

Type-1 and type-2 hypervisor

Types of virtualization

Based on what resource is virtualized, there are different classifications of virtualization.

Commonly virtualized resources include servers, storage devices, operating systems, desktops, applications, and networks.

Desktop virtualization: The entire desktop environment is simulated and distributed from a single server to run on many machines at once. Desktop virtualization allows administrators to manage, install, and configure similar setups on many machines; upgrading all the machines with a single patch update or security check becomes easier and faster.

Server virtualization: Many dedicated servers can be virtualized into a single server that provides multi-server functionality.

Example: many virtual machines can be built up sharing the same underlying system resources (storage, RAM, disks, CPU).

Operating system virtualization: This happens at the kernel level; one machine can boot up multiple operating systems, such as Windows and Linux, side by side.

Application virtualization: Apps are packaged and stored in a virtual environment and distributed across different VMs. Examples: Microsoft applications such as Excel, Word, and PowerPoint; Citrix applications.

Network functions virtualization: Physical network components such as NIC cards, switches, routers, servers, hubs, and cables are all assembled in a single server and used virtually by multiple machines without having the load of installing them on every machine.

Virtualization is one of the building blocks and driving force behind cloud computing.

Cloud computing provides virtualized, need-based services, which has given the concept of virtualization a further boost.

The main cloud computing models/services are listed below:

SaaS – Software as a Service – end-user applications are maintained and run by service providers and easily distributed to and used by end users without having to install them.

Top SaaS providers: Microsoft (Office suite, CRM, SQL Server databases), AWS, Adobe, Oracle (ERP, CRM, SCM), Cisco’s Webex, GitHub (Git hosting web service)

PaaS – Platform as a Service – the computing infrastructure (hardware/software) is maintained and updated by the service provider, and the user just has to run their product on top of this platform.

Top PaaS providers: AWS Elastic Beanstalk, Oracle Cloud Platform (OCP), Google App Engine

IaaS – Infrastructure as a Service – provides infrastructure such as servers, physical storage, networking, memory devices, etc. Users can build their own platform with a customized operating system and applications.

Key IaaS providers: Amazon Web Services, Microsoft Azure, Google Compute Engine, Citrix

Conclusion:

We now have a fair understanding of types of virtualization and how they are implemented.

Containerization

Though virtualization has its pros, there are certain downsides, such as:

  • Not all systems can be virtualized always.
  • A corrupt VM is sometimes contagious and may affect other VMs or the kernel in the case of a Type-1 or bare-metal hypervisor.
  • Latency of virtual disks increases due to the higher load on CPU resources as the number of VMs grows.
  • Unstable performance

An alternative approach that overcomes these flaws of virtualization is to containerize the applications together with their run-time environment.

What is containerization  

Containerization is OS-level virtualization, wherein the entire build of an application, along with its run-time environment, is encapsulated or bundled up into a package.

These packages are called containers.

Containers are lightweight virtualized environments. They are independent of the underlying infrastructure, both hardware and software.

The run-time environment includes the operating system, binaries, libraries, configuration files and other applications as shown in Diagram (9).

Packaged code

What is Docker

Docker provides an excellent framework for containerization and allows you to build, ship, and run distributed applications on multiple platforms.

The Docker framework is set up by installing the Docker Engine on the host OS; a Docker daemon (background) process is started that manages the containers.

Docker architecture

Refer to Diagram (10), which shows a Docker Engine with three containers residing on the host OS (macOS).
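If Docker is already installed, a quick sanity check that the engine and daemon are up can be done with the standard CLI (not part of the original walkthrough, just a convenience):

Command:

docker version   # prints client and server (daemon) versions; the server section appears only if the daemon is reachable
docker info      # shows system-wide information, including the number of images and containers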

An instruction file called a Dockerfile is written with a set of commands that change the filesystem: add, copy, or delete files, run commands, install utilities, make system calls, and so on.

This Dockerfile is built and packaged along with its run-time environment into an executable artifact called a Docker image.
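As a minimal sketch of such a file (the base image, packages, and script name below are illustrative assumptions drawn from the example later in this article, not the article's exact Dockerfile):

# start from a base image; this becomes the parent layer
FROM centos:7
# install utilities inside the image (RUN creates a new layer)
RUN yum install -y git vim
# copy the application script into the image (COPY also creates a new layer)
COPY myApp.sh /usr/src/app/myApp.sh
WORKDIR /usr/src/app
# default command executed when a container starts from this image
CMD ["bash", "myApp.sh"]

Building it with 'docker build -t myapp:1.0 .' produces an image; 'docker run myapp:1.0' then starts a container from that image.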

The Docker daemon runs these images to create Docker containers.

A Docker container is a run-time instance of an image.

It is fair to say that many image layers (built from instruction files) make up a container.

Docker containers have a compact packaging and each container is well isolated.

We can run, start, stop, attach to, move, or delete containers, since they run as processes on the host OS.

Each image is made up of different layers, each one built on top of the previous with the customized command changes that we make.

Every time we make a change to the filesystem, the change is encapsulated in a new filesystem layer and stacked on top of the parent image.

Only the changed layers are rebuilt; the unchanged image layers are reused.

Certain Dockerfile instructions (ADD, RUN, and COPY) create a new layer with a non-zero size; the other instructions simply add a new layer of zero bytes.

These layers are reused when building a new image, which makes builds faster and images lightweight.
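The layers of any local image, the instruction that created each one, and their sizes can be inspected with the docker history command (the tag below is just the illustrative one from the sketch above):

Command:

docker history myapp:1.0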

Docker images are therefore version controlled: since every change to an image is stored as a new layer, the history of an image can be tracked and earlier versions reused.

Here is a terminal recording that shows the Docker engine process and how images and containers are created.

Docker documentation - to create containers.

The overall workflow:

Code -> package -> build image -> push to registry hub -> pull image -> run container
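A rough mapping of that workflow onto Docker CLI commands (the user, repository, and tag names are placeholders):

docker build -t <user>/<repo>:<tag> .      # build an image from the Dockerfile in the current directory
docker push <user>/<repo>:<tag>            # push the image to a registry (Docker Hub by default)
docker pull <user>/<repo>:<tag>            # on another machine, pull the image
docker run -it <user>/<repo>:<tag>         # run the image as a container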

Docker architecture


Let’s consider the Docker image divyabhushan/learn_docker hosted on Docker Hub.

Latest tagged image: centOS_release1.2

What is the container environment?
Base OS: Centos:7

Utilities: vim, yum, git

Apps/files: Dockerfile, myApp.sh, runtests.sh, data and other supporting files.

Git source code: dockerImages

Download as: git clone https://github.com/divyabhushan/DockerImages_Ubuntu.git

What does the container do?
The container launches “myApp.sh” in an Ubuntu:14.04 environment, runs some scripts along with a set of post-test suites inside the container (Ubuntu:14.04), and saves the output log file.

How to modify and build your own app

Step 1: pull 

1.1: Pull the docker image

1.2: Run image to create a container and exit

Step 2: modify

2.1: Start the container

2.2: Attach to the container and make some changes

Step 3: commit

3.1: Examine the history logs and changes in the container

3.2: Commit the changes in container

Step 4: push

4.1: Push new image to docker hub

Let us see the steps in action:

Step 1: pull the docker image on your machine

1.1: Pull the docker image

Command:

docker pull divyabhushan/learn_docker:myApp_ubuntu_14.04

View the image on the system:

docker images


1.2: Run the image to create a container and exit

Command:

docker run -it --name ubuntu14.04 0a6f949131a6

Run commands in the Ubuntu container and exit; the container is stopped on exiting.


View the stopped container with the ‘docker ps -a’ command.


Step 2: modify

Start the container

Command:

docker start <container_id>


Now the container is listed as a running process

Attach to the container and make some changes

Command:

docker attach 7d0d0225778c

Edit the ‘git configuration’ file and the ‘myApp.sh’ script.


The container is modified and stopped.

Step 3: commit

Examine the history logs and changes in the container


The changes done inside the container filesystem can be viewed using the ‘docker diff’ command as:

Command: 

docker diff 7d0d0225778c


Commit the changes in container

Docker commit:

Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

docker commit -m 'new Ubuntu image' 7d0d0225778c divyabhushan/learn_docker:ubuntu14.04_v2


The new image is created and listed.


Step 4: push

Push new image to docker hub

Command:

docker push divyabhushan/learn_docker:ubuntu14.04_v2


Point to note: just the latest commit change layer ‘50a5ce553bba’ has been pushed, while the other layers were reused.

The image is now available on Docker Hub:


The latest tagged image can now be pulled from other machines and run to create the same container environment.

Conclusion: An image was pulled and run to create a container, replicating the environment. The container was modified, and the changes were committed to form a new image. The new image was pushed back to Docker Hub and is now available under a new tag, ready to be pulled by other machines.
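For reference, here is the full round trip from the steps above, collected in one place (substitute your own container IDs, image IDs, and tags):

docker pull divyabhushan/learn_docker:myApp_ubuntu_14.04
docker run -it --name ubuntu14.04 <image_id>
docker start <container_id>
docker attach <container_id>
docker diff <container_id>
docker commit -m 'new Ubuntu image' <container_id> divyabhushan/learn_docker:ubuntu14.04_v2
docker push divyabhushan/learn_docker:ubuntu14.04_v2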

Difference between Docker and virtual machines

Tabular differences on various parameters:

  • Architecture: VMs use hardware-level virtualization; each VM has its own copy of an OS. Docker uses software-level virtualization; containers have no OS of their own and run on the host OS.
  • Isolation: VMs are fully isolated. Docker provides process- or application-level isolation.
  • Installation: A hypervisor can run directly on the hardware or on the host OS. The Docker Engine is installed on top of the host OS and a Docker daemon process is initiated; there is no separate OS for every container.
  • CPU processing and performance: VMs are slower: a VM contains the entire run-time environment, which has to be loaded every time, uses more CPU cycles, and gives unstable performance. Docker is faster: images are pre-built and share host resources, so running an image as a container is lightweight, consumes fewer CPU cycles, and gives stable performance.
  • Hardware storage: VMs need more storage space, as each VM is an independent machine (OS). Example: 3 VMs of 800 MB each take 2.4 GB of space. Docker containers are lightweight, since they do not load an OS and drivers and run on the host OS as processes.
  • Portability: Dependency on the host OS and hardware makes a VM less portable; importing a VM still requires manual setup of storage, RAM, and network. Docker containers are highly portable, since they are lightweight and have no dependency on hardware.
  • Scalability and code reusability: Spinning up more VMs still needs administrative tasks such as distributing resources; running a new machine puts extra load on system resources, and re-managing existing VMs becomes a task; every VM keeps its own copy of resources, so code reusability is poor. Spinning up new Docker containers simply means running pre-built images as processes inside the host OS; containers can also be configured on the fly by passing parameters at run time; a single image can be used to create many containers, which encourages code reusability.
  • Resource utilization: With VMs, static allocation results in resource wastage when VMs are idle or when a VM's resource requirements increase. With Docker, resources are dynamically allocated and de-allocated on a need basis by the Docker Engine.
  • Pruning or garbage collection: Virtual machines do not have a built-in prune mechanism; they have to be administered manually. Docker images and containers can be pruned, which frees up a sensible amount of storage, memory, and CPU cycles.
  • New environments: Creating a new VM from scratch is a tedious, repetitive task; it involves installing a new OS, loading kernel drivers, and other tools and configuration. With Docker, you package the code and dependency files, build them into an image, and run the image to create a new container; an existing or base image (such as scratch on Docker Hub) can be used to create more containers on the go.
  • Web-hosted hub: There is no web-hosted hub for VMs. Docker Hub provides an open, reliable, trusted source of pre-built images that can be downloaded to run new containers.
  • Version control (backup, restore, track history): Snapshots of VMs are not very user-friendly and consume more space. Docker images are version controlled; every delta in a container can easily be viewed (docker diff <container_id>), any change in an image is stored as a different layered version, and references to older layers save build time and space.
  • Auto-build: Automating the creation of VMs is not very feasible. Docker images can be auto-built from every source-code check-in to GitHub (automated builds on Docker Hub).
  • Disaster recovery: It is tedious to recover from VM backup files. It is easier to restore Docker images, much like Git source files, when the images are version controlled; backup images only have to be run to create containers.
  • Update: All VMs have to be updated with a release patch. With Docker, a single image is updated, rebuilt, and distributed across multiple platforms.
  • Memory usage and speed: VMs are slower: the entire snapshot of a machine and its OS is loaded into memory. Docker is real-time and fast: images are pre-built, and only an instance, i.e. a container, has to be run as a process; it uses memory like an executable.
  • Data integrity: VM behaviour may change if a dependency lies beyond the VM boundary (for example, an app that depends on the production host's network settings). With Docker, apps behave the same in any environment.
  • Security: VMs are more secure: a failure inside a VM may reach its guest OS but not the host OS or other virtual machines; a Type-2 hypervisor, though, carries the risk of a kernel attack. Docker is less secure: if a container is compromised, the underlying OS, and hence all the containers, may be affected, since they share the same host kernel.
  • Key providers: VMs: Red Hat KVM, VMware, Oracle VM VirtualBox, Microsoft Hyper-V, Citrix XenServer. Containers: Docker, Google Kubernetes Engine, AWS Elastic Container Service.
  • Data authentication: VMs involve a lot of software licenses. Docker maintains built-in content trust to verify published images.

Architecture comparison

When to use a VM or Docker

When the need is an isolated OS, go for VMs.

For a hardware- and software-independent, isolated application that needs fast distribution across multiple environments, use Docker.

  • Docker use-case:

Example: A database application along with its database

Consider the Docker image Oracle WebLogic Server on Docker Hub.

This image is a pre-built Oracle WebLogic Server runtime environment, including Oracle Linux 7 and Oracle JDK 8, for deploying Java EE applications.

To create server configurations on any machine, just download this image and run it to create and start a container.

There is no need to install and configure the JDK, Linux, or any other run-time environment.
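As a sketch of what that looks like (the repository name and tag are placeholders; check Docker Hub or the Oracle Container Registry for the exact image name and any license-acceptance steps):

# pull the pre-built WebLogic image
docker pull <weblogic-image>:<tag>

# run it; the admin server typically listens on port 7001, published here to the host
docker run -d --name weblogic-server -p 7001:7001 <weblogic-image>:<tag>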

  • When not to use Docker:

The application depends on a utility outside the Docker container.

Example: code is developed on a dev machine whose base OS is macOS but needs certain firewall settings on, say, Ubuntu.

How can the code be tested against the production Ubuntu firewall while running from a Docker container on macOS?

Solution: Install virtualization software on the macOS host and create a VM with Ubuntu as the guest OS (the same as the production environment).

Configure the desired firewall settings in the Ubuntu VM, import the test code into Ubuntu, and test.

  • Use a VM:

For embedded systems programming, a VM is installed that connects to the system's device drivers, controllers, and kernel.

  • Virtualization used along with Docker:

An extension of the previous scenario: suppose you also want to test your Python application in the Ubuntu VM without having to set up the Python executable, libraries, and binaries.

All you have to do is install the Docker Engine for Ubuntu and pull the Python image from Docker Hub:

docker pull python:tag   [tag is the Python version; choose the appropriate version]

docker pull python:2.7

Refer: Python image

Either write a Dockerfile to copy the entire source code into the Python environment, or directly run the image, passing the script path as below:

Command:

$ docker run -it --name my-python-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2.7 python my-application.py

Command options:

-v: bind mount a volume (here, mount the present working directory onto /usr/src/myapp inside the container)

-w: set the working directory inside the container

Moreover, you can test your Python code against more than one version by downloading different Python images, running them to create different containers, and running your app in each container.

What's exciting here is that once the code is tested in each Python environment, you can quickly review the test results and drop the containers, and deploy the code to production only after it has been tested against the various Python versions.
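A small sketch of that idea (the version list and script name are assumptions; adjust them to your project):

# run the same script against several Python versions, one throwaway container per version
for ver in 2.7 3.5 3.6; do
  docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:$ver python my-application.py
done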

Final thoughts

VMs and Docker are compatible with each other; Docker is not here to replace virtual machines.

Both serve the same purpose of virtualizing computing and infrastructure resources for optimized utilization.

Using virtual machines and Docker together can yield better results in virtualization.

When you want a fast, lightweight, portable, and highly scalable hardware-independent environment for isolating multiple applications, and security is not the major concern, Docker is the best choice.

Use a VM for embedded systems that are integrated with hardware, such as device driver or kernel coding.

In a scenario that simulates an infrastructure setup with tight resource control and heavy dependency on system resources, VMs are the better choice.

Use of Docker inside a VM

CI/CD pipelines scenario:

Virtualization enables a smooth CI/CD process flow by letting users concentrate only on developing code on a working system that is set up for automated continuous integration and deployment, without having to duplicate the entire setup each time.

A virtualized environment is set up, either using a VM or a Docker image, that takes care of the automatic code check-ins, builds, regression testing, and deployments on the server.
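As an illustration of how Docker slots into such a pipeline (a generic sketch; the registry name and test script are assumptions, not tied to any specific CI tool):

# typical commands a CI job might run on each check-in
docker build -t <registry>/myapp:ci .            # build the image from the checked-in Dockerfile
docker run --rm <registry>/myapp:ci ./runtests.sh  # run the regression tests inside a throwaway container
docker push <registry>/myapp:ci                  # publish the image for deployment once tests pass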


Join the Discussion

Your email address will not be published. Required fields are marked *

3 comments

saurabh 13 May 2019 1 likes

Well Written, thanks

Navneet 14 May 2019 1 likes

Excellent article ... concept of dockers is well articulated and explained.

Hugo 21 Jun 2019

I absolutely love your blog and find the majority of your post's to be exactly what I'm looking for

Suggested Blogs

12 DevOps Skills That A DevOps Engineer Should Master

Are you an engineer looking out to excel in DevOps skills? Is your team looking to adopt DevOps? You have come to the right place. In this article, we will discuss key DevOps engineering skills that make you an expert in this space. DevOps is all about breaking down the traditional silos and creating a culture of collaboration between business, operations and development teams. Along with the culture aspect, DevOps also emphasizes the key aspect of automating any repetitive and error-prone tasks using a spectrum of modern engineering tools. This article will help you gain insights on 12 specific skill set one needs to master in this space.One thing to keep in mind when you talk about a “DevOps engineer“ is that it is not a role but a skill set that needs to be mastered by every software developer and not just operation folks.DevOps Skills"DevOps, everyone is doing it, few have mastered it " - Mirco Hering, Author of “DevOps for the Modern Enterprise”. He explicitly quotes that nowadays all are adopting and working in DevOps way without understanding much about the key concepts and skills needed. Only a few are doing it right. What started as a great idea would end up in becoming a mere buzz word if we don’t understand the 12 Devops engineering skills.The 12 DevOps engineering skills are:1. Linux fundamentals and scriptingLinux is an open-source operating system created by Linus Torvalds in 1991. Since then there has been no looking back. Linux is now the most preferred operating system in the world. It’s more secure, compared to other operating systems like windows. Most of the companies have their environment setup in Linux based systems.Many DevOps tools in the configuration management space like Chef, Ansible, Puppet, etc have their architecture based on Linux master nodes. These tools help in provisioning and managing infrastructure automatically with the help of any scripting language like Ruby, Python, etc.Linux fundamentals and scripting know-how is a must to get you started with infrastructure automation which is a key concept in DevOps.2. Knowledge of various DevOps tools and technologiesDevOps is implemented with the help of tools but in most of the cases, DevOps is often misunderstood as tools. We have to always remember the great quote from Scott Hanselman “The most powerful tool we have as developers are automation.”The main aim of DevOps is to add value to the customer at an increased pace. Tools are chosen to incorporate this purpose and never to be used for the sake of using it. Technical knowledge of the tools is an added advantage for you to embrace DevOps.DevOps tools are categorized broadly into 10 categories:Collaboration toolsApplication Life management and Issue Tracking toolsCloud/Iaas/Paas/Serverless toolsSource control management toolsPackage ManagersContinuous Integration and continuous delivery toolsContinuous Testing toolsRelease orchestration toolsMonitoring toolsAnalytics tools.In each of these categories, we have more than 10 tools. A right tool must be chosen in each of these categories based on client requirements and the project environment. The main point to remember is that a tool should add value to the customer either by reducing delivery time or increasing the quality of the deliverables.3. 
Continuous Integration And Continuous DeliveryA better understanding of the continuous integration and continuous delivery approaches helps to deliver a high-quality product at a faster pace to the clients.Continuous integration is one of the best practices in DevOps Community where whenever a developer finishes a functionality or a user story(in terms of scrum) he/she integrates the new code with the existing code base continuously. This helps to save a lot of time spent during the integration phase of the project. Continuous integration helps to detect integration issues in the early stages itself thus making the life of the developer easier.Continuous delivery comes as an extension to continuous integration where the newly integrated code is made ready for deployment automatically without or minimum human intervention. Often in the case of the waterfall model, the development team has to release the new code to the testing team and then the testing team takes it forward. This usually takes a couple of days. These delays could be avoided by automating the transfer and testing process, making the code ready for deployment quickly.Continuous deployment is the next step in automating the delivery pipeline of an application. This is where the new code is automatically deployed in the production environment. Some of the software companies do not consider continuous deployment as a best practice as they foresee it as a place where a lot of defects can creep into.4. Infrastructure as Code (IAC)Infrastructure as Code is the latest best practice in the DevOps community. This helps to provision and manage infrastructure by abstracting to a high-level programming language. Thus all the features of the source code could be applied to the infrastructure of the application like version control, tracking, storing in repositories, etc. With the emergence of IAC, days of manually configured infrastructure and infrastructure shell scripts are gone. A person who knows to develop infrastructure as code creates less error-prone, consistent and reliable infrastructure.5. DevOps Key ConceptsDevOps is a culture where business, development, and operations teams collaborate breaking the traditional silos. The key value is to create a cross-functional team that knows what each team member does and where any team member can take up the work of the other, thus providing a better collaboration within team members and delivering a high-quality product to the customer. Since we don’t have silos anymore, unwanted time spent on transfer of the code between various teams like the testing team, the operation team is reduced, increasing the pace of delivery.Another key concept is automating everything. This is done to generate a high-quality product for the customers by reducing human defects.6. Soft SkillsDevOps emphasizes culture and people more than tools and practices. Hence people skills are a must-have when we are trying to adopt DevOps. The next important key value is trust among the team members. Trust is enabled by active and effective communication between team members creating positive vibes among team members. This, in turn, gets reflected on the quality of the deliverables and finishing off the work on time.7. Customer-first mindsetDevOps emphasizes on a customer-first mindset. All people who adopt DevOps should take decisions keeping this in mind. No activity should be performed that does not add value to the customer.8. Security skillsDevOps is all about speed, automation, and quality. 
As we increase speed, vulnerabilities tend to get introduced into the code at a faster pace. DevOps practitioners should be able to write code that is protected from various attacks. This has led to DevSecOps thinking, where security features are incorporated from the beginning rather than stitched on at the end.

9. Flexibility
According to Heraclitus, "The only thing that is constant is change." A team that embraces DevOps must be equipped to adopt change. All team members should be able to accept a requirement change or a role change. He/she must be comfortable working in integration, testing, release, deployment, etc., and should have the technical know-how. He/she must be aware of modern engineering tools and should be equipped to work on different tools based on requirements. Anant Agarwal, CEO of edX, summarises this flexibility as follows: "It's hard to learn something that seems to evolve as quickly as the lessons are taught. Self-learners are the perfect candidates for embracing and pursuing DevOps adoption, as it requires a roll-up-your-sleeves, trial-and-error, do-it-yourself, continuous learning approach."

10. Collaboration
Collaboration is one of the most important values in DevOps. A team that adopts DevOps is a cross-functional team where members from business, operations and development co-exist. Active collaboration is a key skill required of the team members, and there should be transparency among them: everyone should know what is happening in the team and who is responsible for a particular task.

11. Decision-making
Decisiveness, or decision-making, is one of the key qualities employers look for in their employees. The ever-changing nature of the code in a DevOps team should be handled by a person who is quick at taking decisions, enabling quick delivery and deployment of new code. Faster deployments give faster returns to the customer and provide immediate feedback from the end-users, which often leads to customer satisfaction.

12. Agile engineering
DevOps was introduced in 2008 by Patrick Debois and Andrew Clay Shafer after a discussion about agile infrastructure, so DevOps is heavily rooted in agile principles and values. There are 4 agile values and 12 principles according to the Agile Manifesto, and every DevOps practitioner needs an in-depth understanding of these agile philosophies. Practical knowledge of agile practices like test-driven development and behavior-driven development helps make a great DevOps practitioner.

Conclusion
DevOps is all about breaking down silos so that development, operations and business teams collaborate to deliver a high-quality product quickly. Every team member in an organization that adopts DevOps should have all 12 DevOps engineering skills and should focus on customer satisfaction rather than local optimizations. To summarise, he/she should be a great team player, technically strong, with good knowledge of DevOps tools, and able to adapt to change. This subtle but important combination of attributes is what makes a professional a DevOps engineer, because, at the end of the day, customer satisfaction is the key to running a successful business enterprise.
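As promised in skill 4 above, here is a minimal, hedged sketch of an Infrastructure as Code workflow using Ansible, one of the configuration management tools named in skill 1. The playbook name site.yml and the inventory file are hypothetical placeholders, not taken from the article:

# Keep the infrastructure definition under version control like any other source file
git add site.yml inventory
git commit -m "Describe web server configuration as code"

# Dry-run the playbook to preview what would change on the target hosts
ansible-playbook -i inventory site.yml --check

# Apply the same definition to provision and configure the hosts consistently
ansible-playbook -i inventory site.yml

The point of the sketch is that the infrastructure definition is reviewed, versioned and applied exactly like application source code, rather than configured by hand on each server.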
How to Install Docker on Windows, Mac, & Linux: A Step-By-Step Guide

Docker is intended to benefit both developers and system administrators, which makes it a component of many DevOps (development + operations) toolchains. Developers can concentrate on writing code without worrying about the system it will eventually run on, and they can take advantage of the thousands of programs already designed to run inside a Docker container as part of their application. For the operations team, Docker offers flexibility and potentially reduces the number of systems needed, thanks to its smaller footprint and lower overhead. Let's now dive into the installation steps for Docker on different platforms.

Install Docker on Windows
The community version of Docker for Microsoft Windows is Docker Desktop for Windows. Download it from Docker Hub.

System requirements
The software and hardware requirements needed to run Client Hyper-V on Windows 10 effectively are:

Software requirements:
Windows 10 64-bit: Pro, Enterprise or Education
The Hyper-V and Containers Windows features must be enabled

Hardware requirements:
A 64-bit processor with Second Level Address Translation (SLAT); hardware-level virtualization support for Client Hyper-V must be enabled in the BIOS settings
Minimum 4 GB RAM

To run Docker Desktop, Microsoft Hyper-V is needed. The Docker Desktop Windows installer enables Hyper-V and restarts your computer if required. VirtualBox no longer works when Hyper-V is enabled; all VirtualBox VM images are, however, retained. Docker Toolbox VMs (including the default one generated during the Toolbox installation) no longer start, and Docker Desktop cannot use these VirtualBox VMs side by side. You can still manage remote VMs using Docker Machine.

What is included in the installation?
The Docker Desktop installation includes the Docker Engine, Docker CLI, Docker Compose, Docker Machine, and Kitematic. Docker Desktop containers and images are shared among all user accounts on the machine where it is installed; all Windows accounts build and run containers using the same VM. Nested virtualization scenarios, such as running Docker Desktop inside VMware or Parallels, might work. See "Running Docker Desktop in nested virtualization scenarios" for more details.

Installation steps
To run the installer, double-click Docker Desktop Installer.exe to install Docker Desktop on Windows. The installer can be obtained from Docker Hub if you have not previously downloaded Docker Desktop Installer.exe; it typically lands in your Downloads directory, or it can be run from the recent-downloads bar at the bottom of your web browser. Follow the installation wizard to accept the license, authorize the installer and proceed with the installation. If prompted, provide your system password during installation; privileged access is needed to install networking components, links to the Docker applications, and to manage Hyper-V VMs. Click Finish in the setup window and launch the Docker Desktop application.

Start Docker Desktop
After installation, Docker Desktop does not start automatically.
Search for Docker and select Docker Desktop in the search results. If the whale icon in the status bar remains steady, Docker Desktop is up and running and can be accessed from any terminal window. After the Docker Desktop app is installed, you also get a pop-up message with the next steps and a link to this documentation. When initialization is done, click on the whale icon in the notifications area and pick About Docker to check that you have the latest version.

Install Docker on Mac
The very first step is to download Docker Desktop for Mac. Get it from the download link: Download from Docker Hub

System requirements
Docker Desktop for Mac starts only when all of these requirements are met:
Mac hardware must be a 2010 model or newer, with Intel hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check whether your machine has this support by running the following command: sysctl kern.hv_support
macOS Sierra 10.12 and newer versions of macOS are supported. Upgrading to the newest version of macOS is recommended.
VirtualBox versions older than 4.3.30 must not be installed, as they are incompatible with Docker Desktop on Mac. It is fine if you have a newer VirtualBox version installed.

Installation steps
Double-click Docker.dmg to open the installer and drag the whale (Moby) to the Applications folder. In the Applications folder, double-click Docker.app to launch Docker (in the example below, the Applications folder is shown in grid view). You are prompted to authorize Docker.app with your system password after starting it; privileged access is needed to install Docker app links and networking components. The whale in the top status bar shows that Docker is running and is accessible from a terminal. You will also get a success message with the next steps and a link to this documentation if you have just installed the app. To dismiss this pop-up, click on the whale in the status bar. To get Preferences and other options, click on the whale (whale menu). To check that you have the latest version, select About Docker.

Notes:
Getting started provides an overview of Docker Desktop for Mac, basic Docker command examples, how to get help or give feedback, and links to all topics in the Docker Desktop for Mac guide.
Troubleshooting describes common problems, workarounds, how to run and submit diagnostics, and how to submit issues.

Install Docker on Linux
Let's use Ubuntu as an example to walk through installing Docker. If you don't already have one, you can use Oracle VirtualBox to set up a virtual Linux instance. A plain Ubuntu server installed on Oracle VirtualBox is shown in the following screenshot, with an OS user called demo defined with full root access to the system.

Step 1 − Before installing Docker, we must first make sure the correct version of the Linux kernel is running. Docker on Linux requires kernel version 3.8 or greater. We can check this with the command below.

uname: This command returns system information for the Linux system.
It returns the kernel name, kernel release and kernel version of the Linux system.

uname -a

The -a option ensures that the full system information is returned.

Step 2 − Next, update the package index so that the latest packages can be installed from the internet onto the Linux system.

sudo apt-get update

sudo − runs the command with root privileges.
update − ensures that the package lists on the Linux system are refreshed.

Step 3 − The next step is to install the certificates needed to work with the Docker site and later download the required Docker packages over HTTPS. The following command can be used:

sudo apt-get install apt-transport-https ca-certificates

Step 4 − Adding a fresh GPG key is the next step. This key guarantees that the Docker packages you download are properly signed. The command downloads the key from hkp://ha.pool.sks-keyservers.net:80 and adds it to the apt keychain using the key ID 58118E89F3A912897C070ADBF76221572C52609D. Note that this specific key is needed to download the required Docker packages.

Step 5 − Next, add the appropriate entry to the apt package manager's docker.list, depending on the version of Ubuntu you are running, so that it can detect and download the Docker packages from the Docker site.

Precise 12.04 (LTS) − deb https://apt.dockerproject.org/repo ubuntu-precise main
Trusty 14.04 (LTS) − deb https://apt.dockerproject.org/repo ubuntu-trusty main
Wily 15.10 − deb https://apt.dockerproject.org/repo ubuntu-wily main
Xenial 16.04 (LTS) − deb https://apt.dockerproject.org/repo ubuntu-xenial main

echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list

Step 6 − Update the packages on the Ubuntu system again with the apt-get update command.

Step 7 − To make sure the package manager points to the correct repository, issue the apt-cache command:

apt-cache policy docker-engine

Step 8 − Run the apt-get update command once more to guarantee that all local package lists are up to date.

Step 9 − The linux-image-extra-* kernel packages, which allow the use of the aufs storage driver, are required for Ubuntu Trusty, Wily and Xenial; newer versions of Docker use this storage driver. The following command can be used:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Step 10 − Installing Docker is the final step, and this can be done with the following command:

sudo apt-get install -y docker-engine

Here, apt-get uses the install option to download and install Docker from the Docker repository; docker-engine is the official package for Ubuntu-based systems from Docker, Inc. The running Docker version can be checked with the command below:

docker version
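For convenience, here is a consolidated, hedged sketch of the Ubuntu steps above as a single shell session, assuming Ubuntu 14.04 (Trusty) as in the article's example; adjust the repository line for your release:

# Step 1: confirm the kernel is version 3.8 or newer
uname -r

# Steps 2-3: refresh the package index and install HTTPS transport support
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates

# Step 4: add the GPG key that signs the Docker packages
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 \
    --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

# Step 5: point apt at the repository matching the Ubuntu release (Trusty here)
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" \
    | sudo tee /etc/apt/sources.list.d/docker.list

# Steps 6-8: refresh again and confirm apt resolves docker-engine from the new repo
sudo apt-get update
apt-cache policy docker-engine

# Step 9: kernel extras needed for the aufs storage driver
sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual

# Step 10: install the Docker engine and verify the installation
sudo apt-get install -y docker-engine
docker version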
Top 10 DevOps Programming Languages That You Must Know

The DevOps movement tries to eliminate the gap between software development and IT operations. Programming languages are among the most important tools in DevOps, and to be successful in DevOps and achieve Continuous Integration/Continuous Delivery (CI/CD), making the right choice of programming language is essential. Discussed below are the top 10 DevOps programming languages that you can opt for to become a successful DevOps engineer.

Ruby: Even though it is considered a higher-level programming language, Ruby is much easier to pick up when learning to code. The top use case of Ruby is infrastructure management, and it is very similar to Python in this respect. It provides a flexible approach to programming, as developers can alter parts of the language to fit their requirements, and it allows you to manipulate frameworks and controllers; hence, it is a very powerful language. Many infrastructure projects utilise Ruby; ManageIQ, for example, is a Ruby on Rails app.

C/C++: Even though different programming trends have come and gone and continue to do so, C has remained one of the most popular programming languages for more than half a century. C/C++ offers multiple benefits, such as fast, high performance, and acts as a foundation for modern computing. Moreover, C is a language that most programmers already know to some extent. However, C/C++ has drawbacks in a DevOps environment: its codebases are larger than those of languages like Go or Ruby, so compilation times are also longer, and the application binaries produced by C are not portable. Hence, C/C++ is a less friendly choice for DevOps.

Python: Python is a scripting language that is very useful for managing infrastructure. It has a wide range of uses: it is used to build cloud infrastructure projects like OpenStack, and it supports web applications through frameworks like Django. It is even considered the most popular language for machine learning. When it comes to DevOps, Python helps reduce maintenance problems with the help of monitoring and deployment tools like Salt, Ansible, etc.

JavaScript: JavaScript is a lightweight, interpreted programming language that allows you to build interactive websites. Nowadays, it is also used in mobile app development, desktop app development and game development. There are many popular frameworks and libraries written in JavaScript, such as React, Node, etc. JavaScript might get a bit complicated for DevOps, but that doesn't mean they don't go well together; on the brighter side, it offers immediate feedback and better interfaces compared to other languages.

Go: Because Go made its debut in 2009, around the time DevOps was hitting the market, DevOps and Go have tended to grow together, side by side, in various respects. Built on the foundation of C, it emphasises a lean, network-efficient runtime, which is an advantage for DevOps. Go is an excellent choice of programming language for DevOps, as it offers excellent performance, handles concurrency well and is highly portable. Also, compiled Go applications do not need external dependencies and can be built quickly.

SQL: Structured Query Language (SQL) is a computer language that stores, manipulates and queries data in relational databases. SQL is used in DevOps due to its container support.
SQL Server 2017 supports Linux, and its containers run on Windows, Linux and macOS (a brief container sketch follows at the end of this article).

Bash: Bash is one of the most frequently used Unix shells and has broad support. Bash's shell and scripting language powers thousands of Linux systems around the world, and it is available for Windows and Mac too.

Perl: Perl is a stable, cross-platform programming language that belongs to a family of high-level, general-purpose, dynamic programming languages. It is used for simple as well as complex tasks, for small and major projects, and provides quick solutions for web apps, text processing, GUI development, etc. Since DevOps is a combination of software development and IT operations, Perl is used more on the application development side.

Java: Java is an object-oriented, class-based, general-purpose programming language. It is ideal for jobs that are concurrent in nature and require implementation dependencies to be kept to a minimum, and it is widely used due to its versatility and power.

PHP: PHP is a general-purpose programming language used all over the world. PHP covers the internal system right from its early stages up to its implementation, and it is a good fit for your DevOps team if you want a programming language for general-purpose coding.

To conclude:
It is never too late for a developer to learn new and different programming languages. With the advancement of automation and operations technology, you can always opt to pick up and sharpen new skills and programming languages.
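As a quick, hedged illustration of the SQL Server 2017 container support mentioned above (the image tag and the placeholder SA password are assumptions for the sketch, not taken from the article):

# Pull and run the SQL Server 2017 Linux container image (placeholder password shown)
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
    -p 1433:1433 --name sql2017 -d mcr.microsoft.com/mssql/server:2017-latest

# Confirm the container is running
docker ps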