Best Practices For Successful Implementation Of DevOps

What is DevOps?

DevOps is a combination of processes and philosophies built on four basic components: culture, collaboration, tools, and practices. Together, these produce an automated system and infrastructure that help an organisation deliver quality, reliable builds. The beauty of this culture is that it enables organisations to serve their customers better and compete more effectively in the market, while adding promised benefits such as confidence and trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.

                                       “DevOps is not a goal, but a never-ending process of continual improvement.”
                                                                           Jez Humble




Here are the key DevOps best practices that can help you implement DevOps successfully.

1. Understand Your Infrastructure Needs: Before building the infrastructure, spend time understanding the application, then align your goals with the infrastructure design; the implementation of DevOps should be business-driven. While assessing the infrastructure, make sure you capture the following components:


  • Cycle time: Define your software cycle in a generic way so that you know its limitations and capabilities, and if there is any downtime, note exactly how long it lasts.
  • Versioning environments: While planning DevOps, always be ready with an alternative solution; versioning your environments helps you roll your plan out or back. If you have multiple tightly coupled modules, you need a clean and neat plan to identify each and every patch and release.
  • Infra as code: Both of the needs above, minimizing cycle time and versioning environments, can be addressed by capturing and managing your infrastructure as code. Whatever you build should scale over the long run (see the sketch after this list).
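
To make "infrastructure as code" concrete, here is a minimal, illustrative sketch in Python; the environment fields, version labels, and the plan() helper are hypothetical and not tied to any specific provisioning tool, but they show the core idea: the environment is described declaratively, kept in version control, and changes are computed as diffs that can be applied or rolled back.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    """Declarative, version-controlled description of one environment."""
    name: str
    region: str
    instance_type: str
    instance_count: int
    version: str  # bump on every change so rolling back means re-applying the previous version

# Both definitions live in source control alongside the application code.
STAGING_V41 = Environment("staging", "eu-west-1", "t3.medium", 2, "v41")
STAGING_V42 = Environment("staging", "eu-west-1", "t3.large", 4, "v42")

def plan(current: Environment, desired: Environment) -> dict:
    """Compute which fields must change to move from the current to the desired state."""
    return {
        name: (getattr(current, name), getattr(desired, name))
        for name in current.__dataclass_fields__
        if getattr(current, name) != getattr(desired, name)
    }

if __name__ == "__main__":
    # In a real pipeline this plan would be handed to a provisioning tool to apply.
    print(plan(STAGING_V41, STAGING_V42))
```

Because the environment definitions are plain code, every change is reviewable, versioned, and reversible in exactly the same way as an application change.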

2. Don't Jump-Start: There is no need to automate the complete cycle in one shot; take a small piece, apply your philosophy, and get it validated. Once you feel the POC is justified, start scaling up: create a complete pipeline and define a process so that at any time you can go back and check what needs to improve and where. All these small successes will help you gain confidence within your team and build trust with stakeholders and your customers.

                                                        “DevOps isn’t magic, and transformations never happen overnight”

3. Continuous Integration and Continuous Deployment: If your team is not planning to implement continuous integration and continuous delivery, it is not really doing DevOps. I would even say the beauty of DevOps lies in how frequently your team can deliver without disruption and how much of the process is automated. Let's take a use case: you and your team members work in an Agile team. In fact, there are multiple teams and multiple tightly coupled modules in which you are involved. Every day you work on your stories and, at the end of the day, push a 'private build' of your work to verify that it builds and 'deliver' it to a team build server; the same applies to the other individuals. In other words, you all 'integrate' your work in a common build area and run an 'integration build'. Doing these integrations and builds to verify them on a regular, preferably daily, basis is what is known as Continuous Integration.
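
As a minimal sketch of what a CI step automates (the commands and test runner below are placeholders; in practice a CI server such as Jenkins triggers this on every commit), the whole idea is: fetch the integrated code, build and test it, and fail loudly the moment something breaks.

```python
import subprocess
import sys

def step(cmd: list[str]) -> None:
    """Run one build/test command and abort the integration build on the first failure."""
    print(f"CI step: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Integration build failed at: {' '.join(cmd)}")

if __name__ == "__main__":
    # Placeholder commands: substitute your own build tool and test suite.
    step(["git", "pull", "origin", "main"])    # integrate everyone's work into the common build area
    step(["python", "-m", "pytest", "tests"])  # automated tests gate the integration build
    print("Integration build is green - ready for continuous delivery.")
```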


Continuous Deployment doesn't mean every change is deployed to production as soon as possible; it means every change is proven to be deployable at any time. It takes all the validated features and builds from CI and deploys them into the production environment. Here are some practices to follow:

  • Maintain a staging environment that emulates production.
  • Always deploy to staging first, then move to production.
  • Automate testing of features and non-functional requirements.
  • Automatically fetch version-controlled development artifacts.
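
The practices above can be wired together in a very small promotion script. This is only a sketch under assumed names: deploy_to() stands in for your real deployment tooling and https://staging.example.com for your staging URL; the point is that production only ever receives an artifact that has already passed the automated checks in staging.

```python
import urllib.request

def deploy_to(environment: str, artifact: str) -> None:
    # Hypothetical helper: in practice this calls your deployment tool or cloud API.
    print(f"Deploying version-controlled artifact {artifact} to {environment}")

def smoke_test(base_url: str) -> bool:
    """Hit a health endpoint on the freshly deployed environment."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

def release(artifact: str) -> None:
    deploy_to("staging", artifact)                     # staging emulates production
    if not smoke_test("https://staging.example.com"):  # automated feature and NFR checks
        raise RuntimeError("Staging checks failed - not promoting to production")
    deploy_to("production", artifact)                  # same version-controlled artifact, promoted

if __name__ == "__main__":
    release("myapp-1.4.2.tar.gz")
```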

4. Define Performance and Do Benchmarking: Always do performance testing and get a collective benchmarking report for the latest build shared by your team, because this is what justifies the quality of your build and the infrastructure it requires.

For example: we ran a performance test a few days back and got good results; let me explain in detail. We did some benchmarking for our CFM machines, because we have a global footprint, latency matters to us, and we need a CFM in the nearest region. We verified how many requests our current build could handle and found we were seeing more than 200 RPS (requests per second). So we decided to test the build's capacity: we fired a large number of requests, noted the RPS at which the build crashed, and then set up autoscaling for the CFM. We could have simply upgraded the CFM, but we planned for autoscaling because the request volume was an assumption and we did not want to spend money on it up front, while still being ready to absorb the experimental traffic. We then found that only 2 of the 7 CFM machines were running at or slightly below the expected configuration and request load (181 to 191 RPS). So we shared a report with the business team, suggesting they focus on the other regions where we had very little traffic but were paying the same amount.

Conclusion: verifying the build gave our dev team good confidence, and sharing the report with the business team helped them plan their marketing strategies; meanwhile, we completed the autoscaling work as well.
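
A benchmark like this can be scripted in a few lines. The sketch below is illustrative only: the target URL, worker count, and duration are assumptions, and dedicated tools such as JMeter or Locust are the usual choice for serious load testing. It fires concurrent requests for a fixed window and reports the achieved RPS and error count.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://staging.example.com/health"  # illustrative endpoint
DURATION = 30  # seconds of sustained load
WORKERS = 50   # concurrent clients

def hit(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def benchmark() -> None:
    deadline = time.monotonic() + DURATION
    ok = errors = 0
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        while time.monotonic() < deadline:
            # Fire one batch of concurrent requests and tally the results.
            for success in pool.map(hit, [TARGET] * WORKERS):
                if success:
                    ok += 1
                else:
                    errors += 1
    total = ok + errors
    print(f"{total} requests in ~{DURATION}s -> roughly {total / DURATION:.0f} RPS, {errors} errors")

if __name__ == "__main__":
    benchmark()
```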

5. Communicate and Collaborate: Collaboration and communication are the X-factors that help an organisation grow and assess its readiness for DevOps. Collaborating with the business and development teams helps the DevOps team design and define the culture. This speeds up development and operations, and even other teams like marketing or sales, allowing all parts of the organization to align more closely with goals and projects.



6. Start Documenting: Document everything you do across the process and infrastructure, especially reports, RCAs (root cause analyses), and change management. This lets you go back and see whether the issues you faced can be automated away in the next cycle, or handled in other ways, without interrupting your production environment.




7. Keep Your Eyes on Cost Burn: Experience shows that if you don't keep an eye on cloud bills, they keep increasing and tend to grow in proportion to the growth of your business until you start looking for optimizations. Audit and evaluate your cloud usage every two months to find optimizations. Experiment with the infrastructure, because you should not spend more than 5 to 10% of your costs on cloud infrastructure even if you are completely dependent on it. Tools you can try: Reoptimize, Cloudyn, Orbitera, etc.
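
Part of that audit can be automated. The sketch below assumes AWS and the Cost Explorer API via boto3 (other clouds expose similar billing APIs); the dates and the 20% alert threshold are illustrative. It pulls spend per service for two consecutive months and flags anything that grew faster than agreed.

```python
import boto3  # assumes AWS credentials are already configured

def monthly_cost_by_service(start: str, end: str) -> dict:
    """Return {service_name: cost_in_usd} for the given period via Cost Explorer."""
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs: dict = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[service] = costs.get(service, 0.0) + amount
    return costs

if __name__ == "__main__":
    previous = monthly_cost_by_service("2023-01-01", "2023-02-01")  # illustrative dates
    current = monthly_cost_by_service("2023-02-01", "2023-03-01")
    for service, cost in current.items():
        old = previous.get(service, 0.0)
        if old and cost > old * 1.2:  # flag anything that grew by more than 20%
            print(f"Cost alert: {service} went from ${old:,.2f} to ${cost:,.2f}")
```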


                                                                                 “If you are in DevOps, you should account for the numbers.”

8. Secure Your Infra: If your team follows compliance requirements from day one, there is much less chance of your data being compromised, and this can easily be enabled by providing a setup where you can check for vulnerabilities. Before moving your build towards production, follow these standards at an early stage of development using configured tools such as SonarQube, VeraCode, Codacy, CodeClimate, etc.
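
For example, a pipeline can refuse to promote any build that fails static analysis. The sketch below assumes SonarQube's quality-gate web API (/api/qualitygates/project_status); the server URL, token, and project key are placeholders, and real credentials belong in a secret store.

```python
import base64
import json
import sys
import urllib.request

SONAR_URL = "https://sonarqube.example.com"  # placeholder server
PROJECT_KEY = "my-app"                       # placeholder project key
TOKEN = "squ_xxx"                            # placeholder token

def quality_gate_status() -> str:
    """Ask SonarQube whether the project currently passes its quality gate."""
    req = urllib.request.Request(
        f"{SONAR_URL}/api/qualitygates/project_status?projectKey={PROJECT_KEY}"
    )
    auth = base64.b64encode(f"{TOKEN}:".encode()).decode()  # token sent as the basic-auth user
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
    return body["projectStatus"]["status"]  # "OK" or "ERROR"

if __name__ == "__main__":
    if quality_gate_status() != "OK":
        sys.exit("Quality gate failed - blocking the release")
    print("Quality gate passed - the build may move towards production")
```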



9. Tool Selection: Always select tools that are compatible with the rest of the toolchain you plan to use. The reason to be so careful is that you have to capture each and every request. Once the tool selection is done, draft the metrics you want to capture and that will help you debug. Start logging and monitoring them, and have clear definitions for those logs so you can determine whether your processes are working as expected. Tools you can look at: Nagios, Grafana, Pingdom, Monit, OpsGenie, Observium, Logstash, etc.
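
Whatever monitoring stack you pick, the logs feeding it are easier to define and query if they are structured from the start. Here is a minimal sketch using only Python's standard logging module; the field names and service name are illustrative conventions rather than a requirement of any particular tool.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so tools like Logstash can parse it directly."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "service": "checkout-api",  # illustrative service name
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout-api")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Every log line carries a request id, so a single request can be traced across services.
log.info("payment authorised", extra={"request_id": "req-8c21"})
```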


                                                                             “If you are not monitoring, you are not in production.”

Conclusion:

An organization that follows all the above best practices creates the right culture and ultimately gets the outcome it deserves: a DevOps organization. "A good DevOps organization will free up developers to focus on doing what they do best: write software," says Rob Steward, Vice President of Product Development at Progress Software. "DevOps should take away the work and worry involved in deploying, securing and running the software once it is written."


MD Zaid Imam

Project Manager

Md Zaid Imam is currently serving as Project Manager at Radware. With a zeal for project management and business analytics, Zaid likes to explore UI/UX, backend development, and DevOps. Playing a crucial role at his current job, Zaid has helped his team deliver world-class product features that cater to current industry requirements in the bot mitigation arena (processing 50 billion API calls per month). Zaid is a regular contributor on Hashnode.


Website : https://zaid.hashnode.dev


Suggested Blogs

Docker Vs Virtual Machines(VMs)

Let’s have a quick warm up on the resource management before we dive into the discussion on virtualization and dockers.In today’s multi-technology environments, it becomes inevitable to work on different software and hardware platforms simultaneously.The need to run multiple different machines (Desktops, Laptops, handhelds, and Servers) platforms with customized hardware and software requirements has given the rise to a new world of virtualization in IT industry.What a machine need?Each computing environment(machine) needs its own component of hardware resources and software resources.As more and more machines are needed, building up and administering many such stand-alone machines is not only cumbersome, time consuming but also adds up to the cost and energy.Apparently; to run a customized High-power Scalable Server is a better idea to consolidate all the hardware and software requirements into one place and have a single server run and distribute the resources to many machines over a network.That saves us time, resources, energy and revenue.These gigantic servers are stored in a data warehouse called a Datacenter.Below Diagram (2) indicates a single server serving and sharing resources and data among multiple client machinesDoes this look simplified enough? Yes of course!So, this setup looks feasible we have a high-power, high-storage Server that gives resources to many smaller(resources) machines over a network.How to manage huge data - ServersWith Internet Of Things in boom, Information is overflowing with a huge amount of data; handling tremendous data needs more system resources which means more Dedicated servers are needed.Many Servers approach challenge:Running several Dedicated servers for specific services such as Web service, application or database service as indicated in Diagram (3) is difficult to administer and consumes more energy, resources, manpower and is highly expensive.In addition; resource utilization of servers is very poor resulting in resource wastage.This is where simulating different environments and running them all on a single server is a smart choice; rather than having to run multiple physically distinct servers.This is how Diagram (3) would change after consolidating different servers into one as shown in Diagram (4).Sheet 2VirtualizationWhat is VirtualizationThe above single server implementation can be defined as the following term.Virtualization is a technique used to simulate and pretend a single infrastructure resource (hardware resources and software resources) to be acting as many providing multiple functionalities or services without the need to physically build, install and configure.In other words;Running multiple simulated environments in a single machine without installing and configuring them is called Virtualization.Technically speaking;Virtualization is an abstract layer that shares the infrastructure resources among various simulated virtual machines without the need to physically set up these environments.Diagram (5) displays different virtual Operating systems are running on the same machine and using the same hardware architecture of the underlying machine.What is a Virtual machineThe simulated virtualized environments are called virtual machines or VM.Virtual machine is a replication/simulation of an actual physical machine.A VM acts like a real physical machine and uses the physical resources of the underlying host OS.A VM is a running instance of a real physical machine.Need for virtualizationSo; we have an overview of virtualization, 
let us examine when should we virtualize and what are the benefits of virtualization?Better resource management and cost-effective: as indicated in Diagram (6) and Diagram (7); hardware resources are distributed wisely on need basis to different environments; all the virtual machines share the same resources and reduce resource wastage.Ease of quick administration and maintenance: It is easier to build, install, configure one server rather than multiple servers. Updating a patch on various machines from a single virtualized server is much more feasible.Disaster recovery: Since all the virtualized machines reside on the same server and are treated as mounted volumes of data files, it is easier to back up these machines. In case of a disaster failure (power failure, network down, cyber-attacks, failed test code, etc) VM screenshots are used to recover the running state of the machine and the whole setup can be built up within minutes.Isolated and independent secure test environment: virtualization provide an isolated independent virtual test environment to test the legacy code or a vendor-specific product or even a beta release or say a corrupt code without affecting the main hardware and software platform. (This is a contradictory statement though; will discuss more under types of virtualization)These test environments like dev, uat, preprod, prod etc..can be easily tested and discarded.Easily scalable and upgradable: Building up more simulated environments means spinning up more virtual machines. Also upgrading VMs is as good as to run a patch in all VMs.Portable: Virtual machines are lightweight compared to the actual running physical machines; in addition, a VM that includes its own OS, drivers, and other installation files is portable on any machine. One can access the data virtually from any location.The screenshot of activity monitor below compares the CPU load:Implementation a) What is hypervisor and its types?As discussed in the previous section; virtualization is achieved by means of a virtualized layer on top of hardware or a software resource.This abstract layer is called a hypervisor.A hypervisor is a virtual machine monitor (VMM)There are 2 types of hypervisors: Diagram (8)Type-1 or bare-metal hypervisorType-2 or hosted hypervisorType-1 or bare-metal hypervisor is installed directly on the system hardware, thus abstracting and sharing the hardware components with the VMs.Type-2 or hosted hypervisor is installed on top of the system bootable OS called host OS; this hypervisor abstracts the system resources visible to the host OS and distributes it among the VMs.Both have their own role to play in virtualization.b) Comparing hypervisor typesType-1 or bare-metal hypervisorType-2 or hosted hypervisorInstalled directly on the infrastructure-OS independent and more secure against software issues.Installed on top of the host OS-more prone to software failures.Better resource flexibility: Have direct access to the hardware infrastructure (Hard-drive partition, RAM, embedded cards such as NIC). Provide more flexibility and scalability to the VMs and assign resources on a need basis.Limited resource allocation: Have access to just the resources exposed by the host OS.VMs installed will have limited access to hardware resources allocated and exposed by the host OS.Single point of failure: A compromised VM may affect the kernel. 
Extra security layers needed.A compromised VM may affect only the host OS, kernel still remains unreachable.Low latency due to direct link to the infrastructure.High latency as all the VMs have to pass through the OS layer to access the system resources.Generally used in ServersGenerally used on small client machinesExpensiveLess expensiveType-1 Hypervisors in market:VMWare ESX/ESXiHyperkit (OSX)Microsoft Hyper-V (Windows)KVM(Linux)Oracle VM ServerType-2 Hypervisors in market:Oracle VM VirtualBoxVMWare WorkstationParallels desktop for MACTypes of virtualizationBased on what resource is virtualized, there are different classifications of virtualization.Server, Storage device, operating system, networkDesktop virtualization: Entire desktop environment is simulated and distributed to run on a single server all at once. A desktop virtualization allows administrators to manage, install, configure similar setups on many machines. Upgrading all the machines with a single patch update or security checks becomes easier and faster.Server virtualization: Many dedicated servers can be virtualized into a single server that provides multi-server functionality.Example:Many virtual machines can be built up sharing the same underlying system resources.Storage, RAM, disks, CPUOperating system virtualization: This happens at the kernel level Hypervisor on hardware type 2 bare-metal One machine: Can boot up as multiple OS like Windows or Linux side-by-sideApplication virtualization: Apps are packaged and stored in a virtual environment and are distributed across different VMs. Example Microsoft applications like excel, MS word, Powerpoint etc, Citrix applications.Network functions virtualization: Physical network components such as NIC cards, switches, routers, servers, hubs, and cables are all assembled in a single server and used virtually by multiple machines without having the load of installing them on every machine.Virtualization is one of the building blocks and driving force behind cloud computing.Cloud computing provide virtualized need-based services. This has given an uplift to the concept of virtualization.A quick mention of various cloud computing models/services are listed below:SaaS – Software as a Service– end-user applications are maintained and run by service providers and easily distributed and used by the end users without having to install them.Top SaaS providers: Microsoft (Office suite, CRM, SQL server databases), AWS, Adobe, Oracle (ERP, CRM, SCM), Cisco’s Webex, GitHub ( git hosting web service)PaaS – Platform as a Service – computing infrastructure(hardware/software) is maintained and updated by the service provider and the user just have to run the product over this platform.Top Paas providers: AWS beanstalk, Oracle Cloud Platform (OCP), Google App EngineIaaS – Infrastructure as a Service – Provide infrastructure such as servers, physical storage, networking, memory devices etc. 
Users can build their own platform with customized operating system and applications.Key IaaS providers: Amazon Web Services, Microsoft Azure, Google compute engine, CitrixConclusion:We now have a fair understanding of types of virtualization and how they are implemented.ContainerizationThough virtualization has its pros; there are certain downsides of virtualization such as:Not all systems can be virtualized always.A corrupt VM is sometimes contagious and may affect other VMs or the kernel in-case of a Type-1 or bare-metal hypervisor.Latency of virtual disks due to increased payload on the CPU resources with a higher number of VMsUnstable performanceAn alternative approach to overcome the above flaws of virtualization is to Containerize the applications and the run-time environment together.What is containerization  Containerization is an OS-level virtualization; wherein the entire build of an application along with run-time environment is encapsulated or bundled up in a package.These packages are called containers.Containers are lightweight virtualized environments. These are independent of the infrastructure both hardware and software.The run-time environment includes the operating system, binaries, libraries, configuration files and other applications as shown in Diagram (9).What is DockersDockers provide an excellent framework for containerization and allow to build, ship, and run distributed applications over multiple platforms.Docker framework is setup as a docker engine installed on host OS and a docker daemon (background process) process is started that manage the virtual containers.Refer Diagram (10) that shows a Docker engine with 3 containers residing on host OS (MAC OS).An instruction file called dockerfile is written with a set of system commands that change the filesystem such as add, copy or delete commands, run commands, install utilities, system calls etc…This dockerfile is built and packaged along with its run-time environment as an executable file called a docker image.Docker daemon services run these images to create docker containers.Docker container is a run-time instance of an imageIt is wise to say that many images (or layers of instruction files) make up a container.Docker containers have a compact packaging and each container is well isolated.We can run, start, stop, attach, move or delete containers as these runs as services on the host OS.Each image is made up of different layers; each image based on top of the other with the customized command changes that we make.Every time we make a change in the filesystem, each change related to the image is encapsulated in a new layer of filesystem and stacked up above the parent image.Only the changed layers are rebuilt, rest of the unchanged image layers are reused.Certain docker commands ADD, RUN and COPY create a new layer with increased byte size; rest of the commands simply adds up a new layer with zero-byte size.These layers are re-used to build a new image, hence faster and lightweight.Docker images are alsoThe layer approach of an image every time there is a change in the image makes it possible to Version control the docker images.Here is a terminal recording that shows docker engine process and how images and containers are created.Docker documentation - to create containers.Ppt diagram:Code -> package -> build images -> registry hub -> download/pull image -> run containerAnimation: sheet4Let’s consider the docker container: divyabhushan/learn_docker hosted on docker hub.Latest tagged image: 
centOS_release1.2What is the container environment?Base OS: Centos:7Utilities: vim, yum, gitApps/files: Dockerfile, myApp.sh, runtests.sh, data and other supporting files.Git source code: dockerImagesDownload as: git clone https://github.com/divyabhushan/DockerImages_Ubuntu.gitWhat does the container do?Container launches “myApp.sh” in Ubuntu:14.04 environment and run some scripts along with a set of post test_suites in the container (Ubuntu:14.04) and saves the output log file.How to modify and build your own appStep 1: pull 1.1: Pull the docker image1.2: Run image to create a container and exitStep 2: modify2.1: Start the container2.2: Attach to the container and make some changesStep 3: commit3.1: Examine the history logs and changes in the container3.2: Commit the changes in containerStep 4: push4.1: Push new image to docker hubLet us see the steps in action:Step 1: pull docker image on your machine1.1: Pull the docker imageCommand:docker pull divyabhushan/learn_docker:myApp_ubuntu_14.04View the image on systemdocker imagesscreenshotCommand:docker run -it --name ubuntu14.04 0a6f949131a6Run command in ubuntu container and exit, the container is stopped on exiting out.View the stopped container with the ‘ps -a’ command.Step 2: modifyStart the containerCommand:docker start Now the container is listed as a running process Attach to the container and make some changesCommand:docker attach 7d0d0225778cedit the ‘git configuration’ file and ‘myApp.sh’ scriptContainer is modified and stoppedStep 3: commitExamine the history logs and changes in the containerThe changes done inside the container filesystem can be viewed using the ‘docker diff’ command as:Command: docker diff 7d0d0225778cCommit the changes in containerDocker commit:Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]docker commit -m 'new Ubuntu image' 7d0d0225778c divyabhushan/learn_docker:ubuntu14.04_v2New image is created and listedStep 4: pushPush new image to docker hubCommand:docker push divyabhushan/learn_docker:ubuntu14.04_v2Point to note: just the latest commit change layer ‘50a5ce553bba’ has been pushed, while the other layers were re-used.Image available on docker hub:The latest tagged image can now be pulled from other machines; and run to create the same container environment.Conclusion: An image was pulled and run to create a container to replicate the environment. Container was modified, new changes were committed to form a new image. New Image pushed back on the docker hub and now available as a new tag ready to be pulled by other machines.Difference between Dockers and Virtual machinesTabular differences on various parametersParametersVMsDockersarchitectureHardware level virtualization. Each VM has its own copy of OS.Software level virtualization. Dockers have no own OS, run on host OSIsolationFully isolatedProcess or application-level isolation.  InstallationHypervisor can run directly on the hardware resources or on the host OS.Docker engine is installed on top of the host OS. A docker daemon process is initiated on the host OS. There is no separate OS for every container.CPU processing + performanceSlower: A VM contains the entire run-time environment that has to be loaded every time. Uses more CPU cycles; gives unstable performance.Faster: Docker images are pre-built and share host resources as a result running an image as a container is lightweight and consumes less CPU cycle; gives a stable performanceHardware storageMore storage space as each VM is an independent machine (OS). 
Example: 3 VMs of 800MB each will take 2.4 GB of space.Docker containers are lightweight since do not require to load OS+drivers, run on host OS as processes.PortableDependency on host OS and hardware makes VM less portable. Importing a VM still requires manual setup such storage, RAM and network.Highly portable since lightweight and zero dependency on hardware.Scalable and code-reusabilitySpinning up more VMs still need administrative tasks such as distributing resources to VM. Running a new machine puts extra load on the system resources also re-managing earlier VMs becomes a task. Every VM keeps its own copy of resources-poor code-reusability.Spinning up new docker containers simply means running pre-built images into containers as a process inside host OS. Containers are also configured on-the-fly passing parameters and run-time. Single image can be run and used to create many containers; encourage code-reusabilityResource utilizationStatic allocation results in resource wastage in case of idle VMs or if a VM’s resource requirement increases.Resources are dynamically allocated and de-allocated on the need basis by the docker engine.Docker system prune or garbage collectionVirtual machines do not have an in-built prune mechanism, these have to be administered manually.Docker image and containers can be pruned; which frees up a sensible amount of storage and memory space and CPU cycles.New environmentCreating new VM from the scratch is a tedious, repetitive tasks. It involves installing a new OS, loading kernel drivers and other tools and configurations.Package the code and dependency files, build into an image, run the image to create a new container. Use an existing or a base image (dockerhub- scratch) to run and create more containers on the go.Web-hosted HubNo web hosted hub for VMsdockerHub provides an open-source reliable trusted source of pre-built images that can be downloaded to run new containers.Version control (backup, restore,track history)(refer git)Snapshot of VMs are not very user-friendly and consume more space.Docker images are version controlled. Every delta difference in each docker container can easily be viewed (demo: docker diff ). Any change in the image is stored as a different layered version. A reference link to older images saves build time and space.Auto-buildAutomation of creating VMs is not very feasible.Docker images can also be auto-built from every source code check-in to GitHub (Automated builds on Dockerhub)Disaster recoveryTedious to recover from VM backup files.Easier to restore docker images (like files) just like git source files in case images are version controlled. Backup images only have to be run to create containers. (refer: screenshot).UpdateAll the VMs have to updated with the release patch.A single image is updated, re-built and distributed across multiple platforms.Memory usage+speedSlower: Entire snapshot of a machine and the OS is loaded into the cache memory.Real-time and fast: pre-built images. Only the instance, i.e, a container has to be run as a process and uses memory like an executableData integrityVM behavior may change if the dependency includes beyond the VM boundaries. (example: an app depends on production host network settings)Same behavior of apps in any environmentsecurityMore secure: A failure inside a VM may reach its guest OS but not the host OS or other virtual machines. 
Type-2 hypervisor though has a risk of kernel attack.Less secure: If a docker container compromised; underlying OS and hence all the containers may be affected since they share the same host kernel. OS Kernel may also be risked.Key providersRed hat KVM, VMWare, Oracle VM VirtualBox, Mircrosoft Hyper-V, Citrix XenServerDockers, Google kubernetes Engine, AWS Elastic Container serviceData authenticationLot of software licenses.Docker maintains inbuilt content trust to verify published images. When to use VM or a DockerWhen the need is an isolated OS, go for VMs.For a hardware and software independent isolated application that needs fast distribution on multiple environments, use dockers.Docker use-case:Example: A database application along with its databaseConsider the docker image - Oracle WebLogic Server on Docker Hub.This image is pre-built Oracle WebLogic Server runtime environment, including Oracle Linux 7 and Oracle JDK 8 for deploying Java EE applications.To create Server configurations on any machine, just download this image and run to create and start a container.There is no need to install and configure JDK, Linux or other run-time environment.Do not use Docker use-case:The application depends on utility outside the docker container.Code developed on dev machine with base OS as MAC; needs certain firewall setting on say Ubuntu OS.How can the code be tested on the production ubuntu OS firewall while running from MAC OS docker container?Solution:  Install a virtualization software on host OS-MAC; Create a VM (Virtual machine) with host OS as Ubuntu (same as production environment).Configure the desired firewall settings on host VM – Ubuntu; import the test code inside Ubuntu and test.Use a VM:For Embedded systems programming, a VM is installed that connects to the system device drivers, controllers and kernel.Virtualization used along with docker:An extension to the previous scenario would be if you would want to also test your python application in the host OS-Ubuntu VM without having to set up the python exe and its libraries and binaries.All you have to do is: Install Docker engine for Ubuntu OS and pull the python image from Docker hub as:docker pull python:tag [ tag is the python version-choose the appropriate version ]docker pull python:2.7Refer: Python imageEither write a Dockerfile to import/copy entire source code to python environment or directly run the image passing the script path as below:Command:$docker run -it --name my-python-script -v “$PWD”:/usr/src/myapp -w /usr/src/myapp python:2.7 python my-application.pyCommand options:-v = volume list-bind mount a volume [mount present working directory onto /usr/src/myapp inside container]-w = workdir string-working directory inside the containerMoreover; you can also test your python code in more than one version by downloading different python images, running them to create different containers and running your app in each container.What’s exciting here is that once the code tested in each python environment; you could quickly work on the test results and drop the containers. And deploy the code to production only once code tested against various python versions.Final thoughtsVMs and dockers are compatible with each other. 
Dockers are not here to replace Virtual machines.Both serve the same purpose of virtualizing the computing and infrastructure resources for optimized utilization.Using both Virtual machines and dockers together can yield better results in virtualization.When one desires a fast, lightweight, portable and highly scalable hardware-independent environment for multiple applications isolation; wherein security is not the major concern; Dockers is the best choice.Use a VM for embedded systems that are integrated with hardware; such as device driver or kernel coding.A scenario simulating an infrastructure setup with a high resource control and dependency on system resources; VMs are a better choice.Use of Dockers inside VMCI/CD pipelines scenario:Virtualization enables a smooth CI/CD process flow by promoting the users to concentrate only on developing the code on a working system that is set up for automated continuous integration and deployment without having to duplicate the entire setup each time.A virtualized environment is set up; either using a VM or a docker image that takes care of the automatic code check-ins, builds, regression testing, and deployments on the server.
Rated 4.5/5 based on 3 customer reviews
7823
Docker Vs Virtual Machines(VMs)

Let’s have a quick warm up on the resource manag... Read More

What is DevOps

You landed up here which means that you are willing to know more about DevOps and hey, you can admit it! And of course, the business community has been taking this trend to the next level not because it looks fancy but of course, this process has proven the commitment. The growth and adoption of this disruptive model are increasing the productivity of the company.  So here, we will get an idea of how this model works and how we can enable this across the organization. According to DevOps.com, during the period of the year 2015 to 2016, the adoption rate of DevOps increased significantly, with a rise of 9 per cent in its usage rate. You may have a look at DevOps Foundation training course to know more about the benefits of learning DevOps.1. What is DevOpsDevOps is a practice culture having a union of people, process, and tools which enables faster delivery. This culture helps to automate the process and narrow down the gaps between development and IT. DevOps is a kind of abstract impression that focuses on key principles like Speed, Rapid Delivery, Scalability, Security, Collaboration & Monitoring etc.A definition mentioned in Gartner says:“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology— especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”2. History of DevOpsLet's talk about a bit of history in cloud computing. The word “cloud compute” got coined in early 1996 by Compaq computer, but it took a while to make this platform easily accessible even though they claimed its a $2 billion market per year.  In August 2006 Amazon had introduced cloud infra which was easily accessible, and then it became a trend, and post this, in April 2008 Google, early 2010 Microsoft and then April 2011 IBM has placed his foot in this vertical. This showed a trend that all Giants are highly believed in this revolution and found potential in this technology.And in the same era DevOps process got pitched in Toronto Conference in 2008 by Patrick Debois and Andrew Shafer. He proposed that there is a better approach can be adopted to resolve the conflicts we have with dev and operation time so far. This again took a boom when 2 Flickr employee delivered a  seminar that how they are able to place 10+ deployment in a day. They came up with a proven model which can resolve the conflicts of Dev and operation having component build, test and deploy this should be an integrated development and operations process.3. Why market has adopted so aggressivelyLouis Columbus has mentioned in  Forbes that by the end of 2020 the 83% of the workload will be on the cloud where major market contributor will be AWS & Google. The new era is working more on AI, ML, Crypto, Big Data, etc. which is playing a key role in cloud computing adoption today but at the same time, IT professionals say that the security is their biggest concern in adopting a cloud computing. 
Moreover, Cloud helped so many startups to grow at initial stages and later they converted as a leader in there market place, which has given confidence to the fresh ideas.This entire cloud enablement has given more confidence to the team to adopt DevOps culture as cloud expansion is helping to experiment more with less risk.4. Why is Devop used?To reduce day to day manual work done by IT team To avoid manual error while designing infra Enables a smooth platform for technology or cloud migration Gives an amazing version control capabilities A better way to handle resources whether it’s cloud infra or manpower Gives an opportunity to pitch customer a realistic feature rollout commitment To adopt better infra scaling process even though you receive 4X traffic Enables opportunity to build a stable infrastructure5. Business need and value of DevOpsLet's understand this by sharing a story of, a leading  Online video streaming platform 'Netflix' which was to disclose the acquisition or we can say a company called Blockbuster LLC got an opportunity in 2000 to buy Netflix in $50 million. Netflix previously was working only on  DVD-by-mail service, but in 2016 Netflix made a business of $8.83 billion which was tremendous growth in this vertical. Any idea of how this happened? This started with an incident at Netflix office where due to a database mismatch a DVD shipping got disrupted for 3 days which Forced Management to move on the cloud from relational systems in their data centers because this incident made a massive impact on core values. The shift happened from vertical to horizontal, and AWS later provided with Cloud service even I have read that in an early stage they gathered with AWS team and worked to scale the infrastructure. And, today Netflix is serving across 180 countries with the amount of 15,00,00,000 hours of video content to less or more 8,60,00,000 members.6. Goal Of DevOpsControl of quality and increase the frequency of deploymentsAllows to enable a risk-free experimentEnables to improve mean time to recovery and backupHelps to handle release failure without losing live dataHelps to avoid unplanned work and technical failureTo achieve compliance process and control on the audit trailAlert and monitoring in an early stage on system failureHelps to maintain SLA in a uniform fashionEnabling control for the business team7. How Does DevOps workSo DevOp model usually keeps Development and operation team tightly coupled or sometimes they get merged and they roll out the entire release cycle. Sometime we may see that development, operation and security & Network team is also involved and slowly this got coined as DevSecOps. So the integration of this team makes sure that they are able to crack development, testing, deployment, provisioning infra, monitoring, network firewalling, infrastructure accessibility and accountability. This helps them to build a clean application development lifecycle to deliver a quality product.8. DevOps workflow/LifecycleDevOps Workflow (Process)DevOps workflow ensures that we are spending time on the right thing that means the right time is involved in building product/infrastructure. And how it enables we can analyze in below diagram. When we look into the below diagram, it seems DevOps process in an extended version of agile methodologies but it doesn’t mean that it can fit in other SDLC methodologies.  There is enough scope in other SDLC process as well. Once we merge process and Tools workflow diagram, it showcases a general DevOps environment. 
So the team puts an effort to keep pushing the releases and at the same time by enabling automation and tools we try maintaining the quality and speed.DevOps Workflow (Process)DevOps Workflow (Tool)9. DevOps valuesI would like to split DevOps values into  two groups: Business Values Organization Values Business values are moreover customer centricHow fast we recover if there is any failure?How we can pitch the exact MRR to a customer and acquire more customers?How fast we can deliver the product to customersHow to roll out beta access asap if there any on-demand requirement?Organizational ValuesBuilding CultureEnabling communication and collaborationOptimize and automate the whole systemEnabling Feedbacks loopsDecreasing silosMetrics and Measurement10. Principle of DevOpsAutomated: Automate as much as you can in a linear and agile manner so you can build an end to end automated pipeline for software development life cycle in an effective manner which includes quality, rework, manual work and cost. And it’s not only about the cycle it is also about the migration from one technology to another technology, one cloud to another cloud etc.Collaborative: The goal of this culture is to keep a hold on both development and operations. Keep an eye and fix the gaps to keep moving thing in an agile way, which needs a good amount of communication and coordination. By encouraging collaborative environment an organization gets ample of ideas which help to resolve issue way faster. The beauty of the collaboration is it really handles all unplanned and manual work at an early stage which ends up given a quality build and process.Customer Centric approach: DevOps team always reacts as a startup and must keep a finger on the pulse to measure customer demands. The metrics they generate give an insight to the business team to see the trend of usage and burn rate. But of course, to find a signal in the noise, you should be focused and collect only those metrics which really matters.Performance Orientation: Performance is a principle and a discipline which gives an insight to the team to understand the implication of bad performance. Before moving to production if we get metrics and reports handy it gives confidence to technology and business both. This gives an opportunity to plan how to scale infrastructure, how to handle if there is a huge spike or the high usage, and the utilization of the infrastructure.Quality Indicators per application: Another set which is a predefined principle is to set measurable quality Assigning a quality gate to indicators with predefined targets, covering fit for purpose and security gives an opportunity to deliver a complete quality application.11. DevOps Key practices:i) CI/CD When we say “continuous” it doesn’t translate that “always running” but of course “always ready to run”.Continuous integration is nothing but the development philosophy and practices that drive teams to check-in code to version control system as often as possible. So keeping your build clean and QA ready developer's changes need to be validated by running automated tests against the build that could be a Junit, iTest. 
The goal of CI is to place a consistent and automated way to build and test applications which results in better collaboration between teams, and eventually a better-quality product.Continuous Delivery is an adjunct of CI which enables a facility to make sure that we can release new changes to your customers quickly in a sustainable way.Typical CD involves the below steps:Pull code from version control system like bitbucket and execute build.Execute any required infrastructure steps command line/script to stand up or tear down cloud infrastructure.Move build to right compute environmentAble to handle all the configuration generation process.Pushing application components to their appropriate services, such as web servers, API services, and database services.Executing any steps required to restarts services or call service endpoints that are needed for new code pushes.Executing continuous tests and rollback environments if tests fail.Providing log data and alerts on the state of the delivery.Below there is a table which will help us better understand what we need to put as an effort and what we will gain if we enable CI/CD in place:Practice TypeEffort RequiredGainContinuous integrationa) Need to prepare automated your team will need to write automated tests for each new feature,improvement or bug fix.b) You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commits pushed.c) Developers need to merge their changes as often as possible, at least once a day.a Will give control on regressions which can be captured in early stage of automated testing.b) Less context switching as developers are alerted as soon as they break the build and can work on fixing it before they move to another task.c) Building the release is easy as all integration issues have been solved early.d) Testing costs are reduced drastically -your Cl server can run hundreds of tests in the matter of seconds.e) Your QA team spend less time testing and can focus on significant improvements to the quality culture.Continuous Delivery a) You need a strong foundation in continuous integration and your test suite needs to cover enough of your codebase.b) Deployments need to be automated. The trigger is still manual but once a deployment is started  there shouldn't be a need for human intervention.c) Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in productiona) The complexity of deploying software has been taken away. Your team doesnt't have to spend days preparing for a release anymore.b) You can release more often,thus accelerating the feedback loop with your  customers.c) There is much less pressure on decisions for small changes,hence encoraging iterating faster.ii) Maintaining infra in a secure and compliant wayKeeping infrastructure secure and compliant is also a responsibility of DevOps or some organization are nowadays pitching as SecOps. A general and traditional way of security methodologies or rules get fails when you have multi-layer or multi-cloud system running for your product organization and sometimes it really fail when you moving with the continuous delivery process. So the job is to ensure that the team is putting clear and neat visibility on risk and vulnerabilities which may cause a big problem. 
ii) Maintaining infrastructure in a secure and compliant way
Keeping infrastructure secure and compliant is also a DevOps responsibility; some organizations nowadays pitch this as SecOps. Generic, traditional security methodologies tend to fail when you have a multi-layer or multi-cloud system running your product, and they often break down further when you move to a continuous delivery process. So the job is to ensure the team has clear visibility of the risks and vulnerabilities that could cause a big problem.

Below are some basic rules to avoid small loopholes (specific to GCP); a hedged gcloud sketch follows the list.

Level 1:
- Define your resource hierarchy.
- Create an Organization node and define the project structure.
- Automate project creation, which helps to achieve uniformity and testability.

Level 2:
- Manage your Google identities.
- Synchronize your existing identity platform.
- Have single sign-on (SSO).
- Don't add multiple users broadly; instead, grant resource-level permissions.
- Be specific when granting resource access, including the action type (Read, Write & Admin).
- Always use service accounts to access metadata, because they authenticate with keys instead of passwords, and GCP rotates the service account keys for code running on GCP.

Level 3:
- Add a VPC for network definition and create firewall rules to manage traffic.
- Define specific rules to control external access and avoid leaving unwanted ports open.
- Create a centralized network control template which can be applied across projects.

Level 4:
- Enable centralized logging and monitoring (Stackdriver is preferred).
- Enable audit logs, which collect Admin, System Event, and Data Access activity in the form of who did what, where, and when.

Level 5:
- Enable Coldline storage if you need to keep a copy of data for disaster recovery.
- For guidance on placing security standards in AWS, see the article I posted a few months back.
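A hedged sketch of a few of the levels above using the gcloud CLI; the project ID, network, account, and role names are illustrative only and should be adapted to your own hierarchy.

# Level 3: custom VPC plus a narrowly scoped firewall rule (names are placeholders)
gcloud compute networks create prod-vpc --subnet-mode=custom
gcloud compute firewall-rules create allow-https \
  --network=prod-vpc --allow=tcp:443 --source-ranges=0.0.0.0/0

# Level 2: a service account with a specific, resource-level role instead of broad user grants
gcloud iam service-accounts create app-runtime --display-name="App runtime"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:app-runtime@my-project.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"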
12. DevOps myths, or what DevOps is not
Before listing the myths, let me clear up the biggest one, which every early-stage learner carries: "DevOps practice can be rolled out in a day and output will be available from day one." It is too early to reach that conclusion; by definition, DevOps is a culture and a process, and a culture cannot be built in a day. What you do get is the opportunity to correct your mistakes at an early stage. Let's discuss a few more myths:
- It's only about the tools. (Tooling is just one component of the whole DevOps practice.)
- Dev and Ops teams must use the same set of tools. (How to overcome this: push them to integrate both toolsets.)
- Only startups can follow this practice. (Azure has published an article on DevOps best practices which says it can be applied anywhere.)
- Joining DevOps or tool conferences and collecting fancy stickers. (Good that you join, but don't pretend that this alone earns you the DevOps tag.)
- Pushing a build to production every 5 minutes. (That is not what continuous delivery means.)
- DevOps doesn't fit the existing system. (Overcome this by finding the right approach and making an attempt.)

13. Benefits of DevOps
Business benefits
a) Horizontal and vertical growth: When I say "horizontal and vertical growth", I am plotting customer satisfaction on the X-axis, business on the secondary Y-axis (Y2), and time on the Y-axis. The question is how DevOps drives growth along both axes, and my answer is the quick turnaround time for minor and major issues. Once we adopt DevOps, we scale and build in such a fashion that the graph shows a rapid jump in less time.
b) Improving the ROI of data: Having DevOps in an organization ensures that we can get a decent ROI from data at an early stage. A rough survey of the industry shows that software today runs on data, and to stay in control a team needs end-to-end ownership of that data. DevOps helps the team crunch data in various ways by automating small jobs. Through automation we can segregate and validate data, and then surface it either in a dashboard or in offline reports for the customer.

Technical benefits
c) Scalability and quality: When a business starts reaching more users, we look at increasing infrastructure and bandwidth. On the other hand, questions start popping up: are we scaling our infrastructure in the right way, and if a lot of people are pushing changes (code commits and builds), do those changes have the same or better quality than before? Both questions now sit with the DevOps team. If the business pitches that we may hit 2000+ clients generating billions of requests, DevOps takes that responsibility and says yes, we can scale the infrastructure at any point in time. And if, at the same time, the internal release team says it wants to deliver 10 features independently in the next 10 days, DevOps says quality can still be maintained.

Culture benefits
d) Agility and velocity: The key reason for adopting DevOps is to improve the velocity of product development. DevOps enables agility, and when both are in sync, the velocity is easy to observe. End-user expectations are always high, and at the same time the delivery window is short. To keep up, we have to be able to roll out new features to customers at a much higher frequency; otherwise, competitors may win the market.
e) Enabling transparency: A practice of total transparency has a key impact on DevOps culture. Sharing knowledge across the team lets you work faster and stay aligned with the goal. Transparency encourages a well-rounded team with a deeper shared understanding.

14. How to adopt a DevOps model
The ideal way is to pick a small part of the project or product, although in practice we often start adopting DevOps only when we hit a bottleneck. Whatever you start with, a few things need to be taken care of: the goal should be clear and the team in sync to chase it; the loopholes that turn into downtime should be identified; testing (stress, performance, load) should be planned to avoid production glitches; and an automated deployment process should be enabled at the same time. All of this can start from a basic plan and be elaborated into a more detailed format as you move forward. While adopting a DevOps model, make sure the team is always looking at metrics, so they can justify the numbers and make assumptions that move them toward the goal. If you want a roadmap for DevOps adoption, you really need to find the gaps up front, along with the typical day-to-day problems that hold up your release process or waste your team's time.

15. DevOps automation tools
Jenkins: Jenkins is an open-source automation server used to automate software builds and to deliver or deploy them. It can be installed through native system packages or Docker, or run standalone on any machine with a Java Runtime Environment (JRE) installed. In short, Jenkins enables continuous integration, which helps accelerate development. There are ample plugins available that enable integration at various DevOps stages, for example Git, the Maven 2 project, Amazon EC2, HTML Publisher, and so on. (A minimal way to try it with Docker is sketched below.)
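One common way to try Jenkins, based on its official Docker image; the container name, ports, and volume name below are illustrative choices, not requirements.

# Run the Jenkins LTS image with a persistent home volume
$ docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    jenkins/jenkins:lts

# Retrieve the initial admin password generated on first start-up
$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword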
More in-depth information can be found in our training material on Jenkins, and if you are inquisitive about the sort of Jenkins questions asked in job interviews, feel free to view our set of 22 Jenkins interview questions.

Ansible: An open-source platform which automates the IT engine and takes the drudgery out of DevOps' day-to-day life. Ansible typically helps with three everyday tasks: provisioning, configuration management, and application deployment. The beauty is that it can automate traditional servers, virtualization platforms, or the cloud. It is built around playbooks, which can be applied to a wide variety of systems for deploying your app. To know more, you may have a look at our Ansible training material or go through our set of 20 interview questions on Ansible.

Chef: An open-source configuration management system that works on a master-client model. It has a transparent design and works from instructions that need to be defined properly. Before you plan to use this tool, make sure you have a proper Git practice in place and some familiarity with Ruby, as Chef is built entirely on Ruby. The industry view is that it is a good fit for development-focused environments and mature enterprise architectures. Our comprehensively detailed training course on Chef will give you more insight into this tool.

Puppet: Puppet also works on a master-client setup and is model-driven. It is built on Ruby, but you can customize it with a scripting language somewhere close to JSON. Puppet gives you full-fledged configuration management and helps admins (as part of DevOps) add stability and maturity to configuration management. A more detailed explanation of Puppet and its functionality can be found in our training material.

Docker: A tool designed to help developers and administrators reduce the number of systems they need, because Docker containers do not create a complete virtual operating system; instead, applications share the same Linux kernel as the host. So we can say we use Docker to create, deploy, and run applications using containers. As per stats published by Docker, over 3.5 million applications have been placed in containers using Docker, and 37 billion containerized applications have been downloaded. In CI/CD specifically, Docker makes it possible to reproduce an environment exactly like the live server and to run multiple dev environments from the same host with different configurations and operating systems. You may visit our training course on Docker to get more information.

Kubernetes: A platform developed to manage containerized applications, providing high availability and scalability. Based on usage we can scale the system up or down, perform rolling updates, roll back, and switch traffic between two different versions of an application. We can have multiple instances with Kubernetes installed and operate them as a Kubernetes cluster. After that, you get an API endpoint for the cluster, configure kubectl, and you are ready to serve; a small sketch of this flow follows.
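A hedged sketch of that flow with kubectl, assuming kubectl is already configured against a running cluster; the deployment name, image, and replica count are illustrative.

# Create a deployment, check it, and scale it out
$ kubectl create deployment web --image=nginx
$ kubectl get pods
$ kubectl scale deployment web --replicas=3

# Expose it so traffic can reach the pods
$ kubectl expose deployment web --port=80 --type=LoadBalancer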
Read our all-inclusive course on Kubernetes to gather more information on the same. Docker and Kubernetes, although both widely used as DevOps automation tools, have notable differences in their setup, installation, and attributes, which are clearly described in our blog on the differences between Docker and Kubernetes.

Alerting:
Pingdom: Pingdom is a platform that provides monitoring to check the availability, performance, transactions (website hyperlinks), and incidents of your websites, servers, or web applications. The beauty is that if you are using a collaboration tool like Slack or Flock, you can integrate it through a webhook (pretty simple, no code required) and get notified at any time. Pingdom also provides an API, so you can build your own customized dashboard (recently introduced), and the documentation has enough detail to be self-explanatory.

Nagios: An open-source monitoring tool for computer networks. We can monitor servers, applications, incidents, and so on, and you can configure email, SMS, and Slack notifications, and even phone calls. Nagios is licensed under GNU GPLv2. Some major components which can be monitored with Nagios:
- Once Nagios is installed, we get a dashboard to monitor network services like SMTP, HTTP, SNMP, FTP, SSH, POP, etc., and can view the current network status, problem history, log files, notifications triggered by the system, and so on.
- We can monitor server resources like disk drives, memory, processors, server load, system logs, etc.

Stackdriver: Stackdriver is again a monitoring tool that gives visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver Monitoring collects events and metadata from Google Cloud Platform and Amazon Web Services (AWS), consumes the data, and generates insights via dashboards, charts, and alerts. For alerting, we can integrate with collaboration tools like Slack, PagerDuty, HipChat, Campfire, and more.

Below is the outline of a sample log entry, grouped by the kind of information it collects:
- Log information
- User details and authorization info
- Request type and caller IP
- Resource and operation details
- Timestamp and status details

{
  "insertId": ...,
  "logName": ...,
  "operation": { "first": ..., "id": ..., "producer": ... },
  "protoPayload": {
    "@type": ...,
    "authenticationInfo": { "principalEmail": ... },
    "authorizationInfo": [ { "granted": ..., "permission": ... } ],
    "methodName": ...,
    "request": { "@type": ... },
    "requestMetadata": { "callerIp": ..., "callerSuppliedUserAgent": ... },
    "resourceName": ...,
    "response": {
      "@type": ..., "id": ..., "insertTime": ..., "name": ..., "operationType": ...,
      "progress": ..., "selfLink": ..., "status": ..., "targetId": ..., "targetLink": ...,
      "user": ..., "zone": ...
    },
    "serviceName": ...
  },
  "receiveTimestamp": ...,
  "resource": { "labels": { "instance_id": ..., "project_id": ..., "zone": ... }, "type": ... },
  "severity": ...,
  "timestamp": ...
}

Monitoring:
Grafana: An open-source visualization tool that can be used on top of different data stores such as InfluxDB, Elasticsearch, and Logz.io. We can create comprehensive charts with smart axis formats (such as lines and points) thanks to Grafana's fast, client-side rendering, even over long time ranges, which uses Flot as the default renderer. There are three access levels (Viewer, Editor, and Admin), and Google auth can be enabled for tighter access control.
A detailed information guide can be found here.

Elasticsearch: An open-source, real-time, distributed, RESTful search and analytics engine. It collects unstructured data and stores it in an indexed form that is optimized for language-based search. The beauty of Elasticsearch is its scalability, speed, document orientation, and schema-free design. It scales horizontally to handle huge volumes of events per second, while automatically managing how indices and queries are distributed across the cluster for smooth operation.

Cost optimization
reOptimize.io: Once we run a large number of servers, we usually end up burning a good amount of money, not intentionally, but simply because we lack clear visibility. reOptimize helps here by providing detailed insight into cloud expenses. The integration can be done in 3-4 simple steps, but before that you may need to look at the prerequisites, which can be accessed here. Just a heads-up: it only needs read access, and the permissions documentation can be found here.

16. DevOps vs Agile
- DevOps is a culture enabled in the software industry to deliver reliable builds; Agile is a generic culture that can be adopted by any department.
- DevOps' key focus is end-to-end involvement in the process; Agile helps the management team push frequent releases.
- DevOps enables quality builds with rapid delivery; Agile keeps the team aware of frequent changes to every release and feature.
- Agile sprints work within the immediate future, with a sprint life cycle of 7 to 30 days; DevOps has no such fixed schedule and instead works to avoid unscheduled disruptions.
- Team size also differs: an Agile team can be as small as a single person, while DevOps relies on collaboration and bridges a much larger set of teams.

17. Future of DevOps
The industry is moving further onto the cloud, which puts a few more responsibilities on DevOps. The immediate hot topic could be DevSecOps, because more automation leads to more connectivity, which means more exposure. AI and ML are data-centric and learning-based, which gives DevOps the opportunity to lend a hand in training ML models and in correlating and analysing code, test results, user behaviour, and production quality and performance. There is also an opportunity to break the stereotype that DevOps can only be adopted by startups; over the next 2-3 years it will surely become general practice in the enterprise.

18. Importance of DevOps training and certification
Certifications work like an add-on, and add-ons always give some crisp, cumulative results; adding a professional certificate to a resume gives it extra value. During the certification process, professionals help you understand DevOps culture in detail, and that deep dive helps an individual get a clear picture. While choosing a certification, you should look into the vendor's reputation, the academic body giving its approval, transparency, session hours, and many other components.

19. Conclusion
I have been working closely with and observing DevOps teams for a year or so, and I find that we learn something new every day. The deeper we dive, the more we see that a lot can be done, and often achieved in a couple of different ways. As the industry grows, the responsibility of DevOps keeps increasing, which creates opportunities for professionals but also keeps setting a new bar for quality. Now that you know what DevOps is about, feel free to read our blog on how you can become a DevOps Engineer.
Kubernetes vs Docker

What is Kubernetes?
Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open-source cluster management system initially developed by three Google employees during the summer of 2014; it grew exponentially and became the first project to be donated to the Cloud Native Computing Foundation (CNCF). It is basically an open-source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes you can manage your containerized applications more efficiently.

Kubernetes is a huge project with a lot of code and functionality. Its primary responsibility is container orchestration: making sure that all the containers that execute various workloads are scheduled to run on physical or virtual machines. The containers must be packed efficiently, following the constraints of the deployment environment and the cluster configuration. In addition, Kubernetes must keep an eye on all running containers and replace dead, unresponsive, or otherwise unhealthy ones.

Kubernetes uses Docker to run images and manage containers. Nevertheless, K8s can use other engines, for example rkt from CoreOS. The platform itself can be deployed on almost any infrastructure: a local network, a server cluster, a data center, or any kind of cloud, whether public (Google Cloud, Microsoft Azure, AWS, etc.), private, hybrid, or even a combination of these. It is noteworthy that Kubernetes supports automatic placement and replication of containers over a large number of hosts. It brings a number of capabilities and can be thought of as:
- a container platform,
- a microservices platform,
- a portable cloud platform, and a lot more.

Kubernetes covers most of the operational needs of application containers. The top 10 reasons why Kubernetes is so popular are as follows:
- Largest open-source project in the world
- Great community support
- Robust container deployment
- Effective persistent storage
- Multi-cloud support (hybrid cloud)
- Container health monitoring
- Compute resource management
- Auto-scaling feature support
- Real-world use cases available
- High availability through cluster federation

Below is the list of features which Kubernetes provides (a small manifest sketch tying several of them together follows the list):
- Service discovery and load balancing: Kubernetes assigns containers their own IP addresses and a unique DNS name, which can be used to balance the load across them.
- Planning and placement: placement of containers on nodes is a crucial feature; the decision is made based on the resources a container requires and other restrictions.
- Auto-scaling: based on CPU usage, scaling of applications can be triggered automatically or from the command line.
- Self-healing: a unique feature of Kubernetes that restarts containers automatically when they fail. If a node dies, its containers are replaced or rescheduled onto other nodes, and containers that do not respond to health checks are stopped.
- Storage orchestration: lets the user mount a network storage system as a local file system.
- Batch execution: Kubernetes manages both batch and CI workloads and replaces containers that fail.
- Deployments and automatic rollbacks: during configuration changes to an application hosted on Kubernetes, it progressively monitors health to ensure it does not terminate all instances at once, and it automatically rolls back in case of failure.
- Configuration management and Secrets: sensitive information such as keys and passwords is stored in a module called Secrets. Secrets are used in particular to configure an application without having to rebuild the image.
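A minimal, hypothetical sketch combining several of the features above: a Secret, a Deployment with a liveness probe (self-healing), replica management, and a Service for discovery. All names, images, and values are illustrative, and a kubectl-configured cluster is assumed.

# Store a credential as a Secret (value is a placeholder)
kubectl create secret generic app-credentials --from-literal=api-key=changeme

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 3                      # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        envFrom:
        - secretRef:
            name: app-credentials  # configuration injected from the Secret
        livenessProbe:             # failed probes trigger automatic restarts
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF

# A stable virtual IP and DNS name in front of the pods
kubectl expose deployment demo-web --port=80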
What is Docker?
Docker is a lightweight containerization technology that has gained widespread popularity in the cloud and application-packaging world. It is an open-source framework that automates the deployment of applications in lightweight, portable containers. It uses a number of the Linux kernel's features, such as namespaces, cgroups, and AppArmor profiles, to sandbox processes into configurable virtual environments. Though the concept of container virtualization isn't new, it has been getting attention lately, with bigwigs like Red Hat, Microsoft, VMware, SaltStack, IBM, and HP throwing their weight behind newcomer Docker. Start-ups are betting their fortunes on Docker as well: CoreOS, Drone.io, and Shippable are some of the start-ups modeled to provide services based on Docker. Red Hat has already included it as a primary supported container format for Red Hat Enterprise Linux 7.

Why is Docker popular?
The major factors driving Docker's popularity are its speed, its ease of use, and the fact that it is largely free. In performance it is even said to be comparable with KVM. A container-based approach, in which applications run in isolation without relying on a separate operating system, can save huge amounts of hardware resources. Industry experts have started looking at it as hardware multi-tenancy for applications: instead of having hundreds of VMs running per server, what if it were possible to have thousands of hardware-isolated applications?

Docker is used to run software packages called "containers". A container is a standardized unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Containers are the "fastest growing cloud-enabling technology" because they speed up the delivery of software and cut the cost of operating it. Writing software is faster. Deploying it is easier, in your data center or your preferred cloud. And running it requires less hardware and support.

Although container technology has existed for decades, Docker makes it work for the enterprise, with the core features enterprises require in a container platform and best-practice services to ensure success. And containers work for both legacy applications and new development. Existing, mission-critical applications can be "containerized", often with little or no change; the result is instant savings in infrastructure, better security, and reduced labor. New development happens faster because engineers only target a single platform instead of a variety of servers and clouds. Less code to write. Less testing. Faster delivery.
Introduction to Docker Swarm
Docker Swarm is the native clustering and scheduling tool for Docker. It allows IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. It is written in Go and was released for the first time in November 2015 by Docker, Inc.

The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. SwarmKit is a separate project which implements Docker's orchestration layer and is used directly within Docker. It is a toolkit for orchestrating distributed systems at any scale and includes primitives for node discovery, Raft-based consensus, task scheduling, and more. Its main benefits are:
- Distributed: SwarmKit uses the Raft consensus algorithm to coordinate and does not rely on a single point of failure to make decisions.
- Secure: node communication and membership within a swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization, and transport encryption, automating both certificate issuance and rotation.
- Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate.

Current versions of Docker include swarm mode for natively managing a cluster of Docker Engines, called a swarm. One can use the Docker CLI to create a swarm, deploy application services to it, and manage swarm behavior. All you need to do is initiate it to use the latest features that come with the Docker Engine.

Docker Swarm Mode architecture
Every node in swarm mode has a role, either manager or worker. A manager node is responsible for actually orchestrating the cluster: performing health checks, running the containers serving the API, and so on. A worker node just executes tasks, which are in fact containers. It cannot decide to schedule containers on a different machine, and it cannot change the desired state; workers only take work and report back its status. You can promote or demote a node with a one-liner command, as sketched below.

Managers and workers use two different communication models. Managers have a built-in Raft system that allows them to share information for new leader election. At any one time only one manager actually performs the scaling, and they use a leader-follower model to decide which one does what. No external key-value store is required, because a built-in, distributed state store is available. Workers, on the other hand, use the Gossip network protocol, which is quite fast and consistent. Whenever a new container or task is created in the cluster, gossip broadcasts it to the other containers in the specific overlay network in which the new container started. Remember that only the containers running in that specific overlay network are notified, not the whole cluster; gossip is optimized for heavy traffic.
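A hedged sketch of the manager/worker roles described above, using the swarm mode CLI; the address and node name are illustrative.

$ docker swarm init --advertise-addr 192.168.1.10   # the first node becomes a manager
$ docker swarm join-token worker                     # prints the join command for worker nodes
$ docker node ls                                     # lists nodes and their roles
$ docker node promote worker-node-1                  # one-liner promotion to manager
$ docker node demote worker-node-1                   # and demotion back to worker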
How does Docker Swarm vary from Docker?
Today the Docker platform supports three variants of Swarm:
- Docker Swarm (classic)
- SwarmKit (the foundation for Docker swarm mode)
- Docker swarm mode
Let us go through them one by one.

Docker Swarm 1.0 was introduced in the Docker Engine 1.9 release in November 2015. It was a separate GitHub repository and a separate piece of software which needed to be installed to turn a pool of Docker Engines into a single, virtual engine. It was announced as the easiest way to run Docker applications at scale on a cluster: you don't have to worry about where to put containers, or how they talk to each other; it just handles all that for you.

At DockerCon 2016, Docker Inc. announced Docker swarm mode for the first time. Swarm mode came integrated directly into the Docker Engine, which means you don't need to install it separately; all you need to do is initiate it using the `docker swarm init` command. With the optional swarm mode feature integrated into the core Docker Engine, native management of a cluster of Docker Engines, orchestration, decentralized design, service and application deployment, scaling, desired-state reconciliation, multi-host networking, service discovery, and routing mesh implementation are just a matter of a few one-liner commands.

That said, Docker swarm mode is fundamentally different from classic Swarm. The basic differences are listed below:
- Docker swarm mode comes integrated into the Docker Engine; classic Docker Swarm is a separate GitHub project and is not integrated into the Engine.
- Swarm mode comes with built-in service discovery; classic Swarm needs an external key-value store such as Consul or etcd.
- Swarm mode comes with built-in features such as scaling, rolling updates, service discovery, load balancing, routing mesh, and topological placement; classic Swarm lacks built-in load balancing, scalability, routing mesh, and so on.
- Swarm mode secures both the control plane and the data plane; in classic Swarm the control plane and data plane are insecure.

Let's talk about SwarmKit a bit more. As noted above, SwarmKit is a plumbing open-source project: a toolkit for orchestrating distributed systems at any scale, with primitives for node discovery, Raft-based consensus, task scheduling, and more, and with the same distributed, secure, and simple design goals. SwarmKit is completely built in Go and leverages a standard project structure to work well with Go tooling. If you want to learn more about SwarmKit, head over to https://github.com/docker/swarmkit/

How can Docker be used with Kubernetes?
From 30,000 feet, Docker and Kubernetes might appear to be similar technologies: both are open platforms which allow you to run applications within Linux containers. But as you dive a little deeper, you find that the technologies operate at different layers of the stack and can even be used together.

Let's talk about Docker first. Docker provides the ability to package and run an application in a loosely isolated environment called a container. At their core, containers are a way of packaging software. The unique thing about a container is that when you run it, you know exactly how it will run: it is very predictable, repeatable, and immutable. You are left with no unexpected errors when you move it to a new machine or between environments. All of your application's code, libraries, and dependencies are packed together in the container as an immutable artifact. You can think of running a container like running a virtual machine, without the overhead of spinning up an entire operating system. The Docker CLI provides the mechanism for managing the life cycle of containers: whereas the Docker image defines the build-time framework of a runtime container, CLI commands are there to start, stop, restart, and perform other lifecycle operations on these containers, as sketched below.
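A minimal sketch of those lifecycle operations; the container name and image are illustrative.

$ docker create --name web -p 80:80 nginx   # turn the image into a container definition
$ docker start web                           # run it
$ docker stop web                            # stop it
$ docker restart web                         # restart it
$ docker rm -f web                           # remove it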
Today, containers can be orchestrated and made to run on multiple hosts. The questions that need answering are: how are these containers coordinated and scheduled, and how do the applications running in these containers communicate with each other? The answer is Kubernetes.

Today, Kubernetes mostly uses Docker to package, instantiate, and run containerized applications. That said, various other container runtimes are available, but Docker is the most popular runtime binary used by Kubernetes. Together, Kubernetes and Docker build a comprehensive standard for managing containerized applications intelligently while providing powerful capabilities. Docker provides a platform for building, running, and distributing containers, and it brings its own clustering tool which can be used for orchestration. Kubernetes, however, is an orchestration platform for Docker containers which is more extensive than Docker's clustering tool and has the capacity to scale to production level. It is meant to coordinate clusters of nodes at scale in production in an efficient manner, and its plug-and-play architecture provides features like high availability among distributed nodes.

For example, today it is possible to run Kubernetes on the Docker EE 2.0 platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, the orchestrators used, and where it is deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps deliver safer applications through integrated security solutions.

Difference between Kubernetes and Docker
i) Kubernetes vs Docker

Set-up and installation (a hedged sketch of the single-node setups mentioned here follows the list):
- Kubernetes: requires a series of manual steps to set up the master and worker node components in a cluster of nodes. Docker: installing Docker is a matter of a one-liner command on Linux platforms like Debian, Ubuntu, and CentOS.
- Kubernetes: can run on various platforms, from your laptop to VMs on a cloud provider to a rack of bare-metal servers; for a single-node K8s cluster, one can use Minikube. Docker: to install a single-node Docker Swarm or Kubernetes cluster, one can deploy Docker for Mac or Docker for Windows.
- Kubernetes: support for Windows Server is in beta. Docker: has official support for Windows 10 and Windows Server 2016 and 1709.
- Kubernetes: client and server packages need to be upgraded manually on all systems. Docker: upgrading Docker Engine under Docker for Mac & Windows takes just one click.
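A hedged sketch of the two single-node setups compared above; it assumes a Linux host for the Docker convenience script and a local hypervisor or driver for Minikube.

# Docker: one-liner style install on a Linux host via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Kubernetes: a local single-node cluster with Minikube
minikube start            # starts the cluster
kubectl get nodes         # verify the node is Ready
kubectl cluster-info      # show the API server endpoint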
Working in two systems:
- Kubernetes operates at the application level rather than at the hardware level. It aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads; if an application can run in a container, it should run great on Kubernetes.
- Kubernetes can run on top of Docker, but it requires you to know the command-line interface (CLI) specifications for both in order to access your data over the API.
- There is a Kubernetes client called kubectl which talks to the Kube API running on your master node.
- Unlike the master components, which usually run on a single node (unless a high-availability setup is explicitly stated), node components run on every node:
  - kubelet: the agent running on the node to inspect container health, report to the master, and listen for new commands from the kube-apiserver.
  - kube-proxy: maintains the network rules.
  - container runtime: the software for running the containers (e.g. Docker, rkt, runc).
- The Docker platform is available in two editions: Docker Community Edition and Docker Enterprise Edition. The Community Edition comes with community-based support forums, whereas the Enterprise Edition offers enterprise-class support with defined SLAs and private support channels. Both editions come with Docker swarm mode by default; additionally, Kubernetes is supported under Docker Enterprise Edition.
- For Docker swarm mode, one can use a Docker Compose file and the Docker stack deploy CLI to deploy an application across the cluster nodes. The `docker stack` CLI deploys a new stack or updates an existing one; the client and daemon API must both be at least 1.25 to use this command, and you can use the `docker version` command on the client to check your client and daemon API versions.

Logging and monitoring:
Logging: Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. A few popular logging tools are listed below:
- Fluentd: an open-source data collector for a unified logging layer, written in Ruby with a plug-in oriented architecture. It helps to collect, route, and store different logs from different sources. While Fluentd is optimized to be easily extended through plugins, fluent-bit is designed for performance: it is compact and written in C, so it can run on minimalistic IoT devices while remaining fast enough to transfer a huge quantity of logs. Moreover, it has built-in Kubernetes support and is an especially compact tool designed to transport logs from all nodes.
- Other options such as Stackdriver Logging (provided by GCP), Logz.io, and other third-party drivers are available too.
Monitoring: various open-source tools are available for monitoring Kubernetes applications, such as:
- Heapster: installed as a pod inside Kubernetes, it gathers data and events from the containers and pods within the cluster.
- Prometheus: an open-source Cloud Native Computing Foundation (CNCF) project that offers powerful querying capabilities, visualization, and alerting.
- Grafana: used in conjunction with Heapster for visualizing data within your Kubernetes environment.
- InfluxDB: a highly available database platform that stores the data captured by all the Heapster pods.
- cAdvisor: focuses on container-level performance and resource usage. It comes embedded directly into the kubelet and automatically discovers active containers.
Logging driver plugins are available in Docker 17.05 and higher. Logging capabilities in Docker are exposed in the form of drivers, which is very handy since you get to choose how and where log messages should be shipped. Docker includes multiple logging mechanisms, called logging drivers, to help you get information from running containers and services. Each Docker daemon has a default logging driver, which every container uses unless you configure it to use a different one. In addition to the logging drivers included with Docker, you can also implement and use logging driver plugins.

To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts. The following example explicitly sets the default logging driver to syslog:

{
  "log-driver": "syslog"
}

When you start a container, you can configure it to use a different logging driver than the Docker daemon's default using the --log-driver flag. If the logging driver has configurable options, you can set them using one or more instances of the --log-opt <NAME>=<VALUE> flag. Even if the container uses the default logging driver, it can use different configurable options.

Size
- Kubernetes: as per the official Kubernetes documentation, K8s v1.12 supports clusters of up to 5,000 nodes subject to the following criteria: no more than 5,000 nodes, no more than 150,000 total pods, no more than 300,000 total containers, and no more than 100 pods per node.
- Docker: according to Docker's blog post on scaling Swarm clusters published in November 2015, Docker Swarm has been scaled and performance-tested up to 30,000 containers and 1,000 nodes.
  Specs:
  - Discovery backend: Consul
  - 1,000 nodes
  - 30 containers per node
  - Manager: AWS m4.xlarge (4 CPUs, 16 GB RAM)
  - Nodes: AWS t2.micro (1 CPU, 1 GB RAM)
  - Container image: Ubuntu 14.04
  Results:
  Percentile | API response time | Scheduling delay
  50th       | 150 ms            | 230 ms
  90th       | 200 ms            | 250 ms
  99th       | 360 ms            | 400 ms

ii) Building and deploying containers with Docker
Docker can build images automatically by reading the instructions in a text file called a Dockerfile. A Dockerfile is a simple text file that follows a specific format and instruction set and contains all the commands, in order, needed to build a given image. A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer. For example, consider this simple Dockerfile:

FROM nginx:latest
COPY wrapper.sh /
COPY html /usr/share/nginx/html
CMD ["./wrapper.sh"]

Each instruction creates one layer:
- FROM creates a layer from the nginx:latest Docker image.
- COPY adds files from your Docker client's current directory.
- CMD specifies what command to run within the container.

When you run an image and generate a container, you add a new writable layer (the "container layer") on top of the underlying layers. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.

Building a Docker image:

$ docker build -t hellowhale .

The `docker build` command shown above builds an image from a Dockerfile and a context. The build context is the set of files at a specified location, a PATH or a URL: the PATH is a directory on your local filesystem, and the URL is a Git repository location. (A small sketch for inspecting the resulting layers follows.)
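To see the layer-per-instruction behaviour described above, you can inspect the image after building it; a hedged sketch using the example image name from above:

$ docker history hellowhale      # one row per layer, newest first, with the creating instruction
$ docker image inspect hellowhale --format '{{json .RootFS.Layers}}'   # the stacked layer digests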
Running the Docker container
A running Docker image is called a Docker container. All you need to do is run the command below to expose port 80 on the host machine and get the container up and running:

$ docker run -d -p 80:80 --name hellowhale hellowhale

Tagging the image:

$ docker tag hellowhale userid/hellowhale

Pushing the Docker image to Docker Hub (you need to log in to Docker Hub first):

$ docker login
$ docker push userid/hellowhale

iii) Managing containers with Kubernetes
The Docker CLI on a standalone system is used to build, ship, and run your Docker containers. But if you want to run multiple containers across multiple machines, you need a robust orchestration tool, and Kubernetes is the most popular on that list. Kubernetes is an open-source container orchestration platform that allows large numbers of containers to work together in harmony, reducing the operational burden. It helps with things like running containers across many different machines, scaling up or down by adding or removing containers when demand changes, keeping storage consistent across multiple instances of an application, distributing load between the containers, and launching new containers on different machines if something fails.

Below is a comparison of the CLI commands used by Docker and Kubernetes to manage containers:

docker run vs kubectl run
- To run an nginx container with Docker:
  $ docker run -d --restart=always --name nginx-app -p 80:80 nginx
- To run an nginx Deployment and expose it with Kubernetes, see kubectl run:
  $ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"

docker ps vs kubectl get
- To list what is currently running with Docker:
  $ docker ps -a
- To list what is currently running in a Kubernetes cluster:
  $ kubectl get po -a

docker exec vs kubectl exec
- To execute a command in a Docker container:
  $ docker ps
  CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                NAMES
  55c103fa1296   nginx   "nginx -g 'daemon of…"   6 minutes ago   Up 6 minutes   0.0.0.0:80->80/tcp   nginx-app
  $ docker exec 55c103fa1296 cat /etc/hostname
- To execute a command in a container with Kubernetes, see kubectl exec:
  $ kubectl get po
  NAME              READY     STATUS    RESTARTS   AGE
  nginx-app-5jyvm   1/1       Running   0          10m
  $ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
  nginx-app-5jyvm

iv) Trends in Docker and Kubernetes
Docker, Inc. has around 550+ enterprise customers who use Docker in a production environment. A non-exhaustive list of companies actively using Docker includes The New York Times, PayPal, Business Insider, Cornell University (not a company, but still worth including), Splunk, The Washington Post, Swisscom, Alm Brand, Assa Abloy, Expedia, Jabil, MetLife, Societe Generale, GE, Groupon, Yandex, Uber, eBay, Shopify, Spotify, New Relic, and Yelp. Recently, the Forrester New Wave: Enterprise Container Platform Software Suites, Q4 2018 report stated that Docker leads the pack with a robust container platform well suited for the enterprise, offering a secure container supply chain from the developer's desktop to production.

Lots of organizations are already using Kubernetes in production, like the ones listed on the Kubernetes case studies page, including eBay, Buffer, Pearson, Box, and Wikimedia. But that is not a complete list; Kubernetes is even more versatile than the official case studies page suggests.
Below is a list of companies using it:

Figure: List of Kubernetes users

Microservices usage
Microservices help developers break up monolithic applications into smaller components. They can move away from all-at-once, massive package deployments and break apps up into smaller, individual units that can be deployed separately. Smaller microservices give apps more scalability and more resiliency and, most importantly, they can be updated, changed, and redeployed faster. Some of the biggest public cloud applications already run as microservices.

Containers are a packaging strategy for microservices. Think of them more as process containers than virtual machines: they run as a process inside a shared operating system. A container typically does only one small job, such as validating a login or returning a search result. Docker is a tool that describes those packages in a common format and helps launch and run them. Linux containers have been around for a while, but their popularity in the public cloud has given rise to an exciting new ecosystem of companies building tools to make them easier to use, cluster, and orchestrate, to run them in more places, and to manage their life cycles.

Over the last two years, many different types of software vendors, from operating system to IT infrastructure companies, have joined the container ecosystem. There is already an industry organization, the Open Container Initiative, guiding the market and making sure everyone plays well together. IBM, HP, Microsoft, VMware, Google, Red Hat, and CoreOS are just some of the major vendors racing to make containers as easy as possible for developers to use, share, protect, and scale.

The rising demand for multi-cloud environments
With an estimated 85% of today's enterprise IT organizations employing a multi-cloud strategy, it has become more critical that customers have a "single pane of glass" for managing their entire application portfolio. Most enterprise organizations have a hybrid and multi-cloud strategy. Containers have helped make applications portable, but let us accept the fact that even though containers are portable today, managing them is still a nightmare. The reasons are:
- Each cloud is managed under a separate operational model, duplicating effort.
- Security and access policies differ across platforms.
- Content is hard to distribute and track.
- Poor infrastructure utilization still remains.
- The emergence of cloud-hosted K8s is exacerbating the challenge of managing containerized applications across multiple clouds.

This time Docker introduced new application-management capabilities for Docker Enterprise Edition that allow organizations to federate applications across Docker EE environments deployed on-premises and in the cloud, as well as across cloud-hosted Kubernetes, including Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE). The federated application management feature automates the management and security of container applications on premises and across Kubernetes-based cloud services. It provides a single management platform so enterprises can centrally control and secure the software supply chain for all their containerized applications. With this announcement, Docker Enterprise Edition is undoubtedly the only enterprise-ready container platform that can deliver federated application management with a secure supply chain.
Not only does Docker give you your choice of Linux distribution or Windows Server, the choice of running in a virtual machine or on bare metal, and the ability to run traditional or microservices applications with either Swarm or Kubernetes orchestration, it also gives you the flexibility to choose the right cloud for your needs.

Talking about the Kubernetes platform, version 1.3 introduced cross-cluster federated services, with the ability to span workloads across clusters and, by extension, across multiple clouds. This opens up the possibility of workloads that draw resources from multiple clouds, and it also means that large jobs can be split among clouds. Not only that, the release introduced the ability to automatically scale services to match demand.

Increasing support for Docker and Kubernetes
Kubernetes has been enjoying widespread adoption among startups, platform vendors, and enterprises. Companies like Amazon, Google, IBM, Red Hat, and Microsoft offer managed Kubernetes under the Containers-as-a-Service (CaaS) model. The open-source ecosystem has dozens of players building tools covering the logging, monitoring, automation, storage, and networking aspects of Kubernetes. System integrators have dedicated practices and offerings based on Kubernetes. Global players like Uber, Bloomberg, BlackRock, BlaBlaCar, The New York Times, Lyft, eBay, Buffer, Squarespace, Ancestry, GolfNow, Goldman Sachs, and many others are using Kubernetes in production at massive scale. According to Redmonk, a developer-focused research company, 71 percent of the Fortune 100 use containers, and more than 50 percent of Fortune 100 companies use Kubernetes as their container orchestration platform.

Did you know there are 35 certified Kubernetes distributions, 22 certified Kubernetes hosting platforms, and 50 certified Kubernetes service providers available? Over the last three years, Kubernetes has been adopted by a vibrant, diverse community of providers. The Cloud Native Computing Foundation (CNCF), which sustains and integrates open-source technologies like Kubernetes, announced the availability of the Certified Kubernetes Conformance Program, which ensures that Certified Kubernetes products deliver consistency and portability, and that 35 certified Kubernetes distributions and platforms are now available. A Certified Kubernetes product guarantees that the complete Kubernetes API functions as specified, so users can rely on a seamless, stable experience.

On the other hand, Docker Enterprise Edition (EE) 2.0 represents a significant leap forward in container platform solutions, delivering the only solution that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. One of the most promising features announced with this release is Kubernetes integration as an optional orchestration solution, running side by side with Docker Swarm. Not only that, the release includes Swarm layer-7 routing improvements, registry image mirroring, Kubernetes integration with Docker Trusted Registry, and Kubernetes integration with Docker EE access controls.
With this new release, organizations are able to deploy applications with either Swarm or fully conformant Kubernetes while maintaining a consistent developer-to-IT workflow. Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested, and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers, and it is tightly integrated with the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.

V) Kubernetes vs Docker Swarm
The two are usually compared along these dimensions:
- Installation and cluster configuration
- GUI
- Scalability
- Auto-scaling
- Load balancing
- Rolling updates and rollbacks
- Data volumes
- Logging and monitoring

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was built by Google based on their experience running containers in production with an internal cluster management system called Borg (sometimes referred to as Omega). On the other hand, a Swarm cluster consists of Docker Engine deployed on multiple nodes, where manager nodes perform orchestration and cluster management and worker nodes receive and execute tasks. Below are the major differences between Docker Swarm and Kubernetes (a hedged health-check sketch follows the list):

- Deployment model. Docker Swarm: applications are deployed as services (or "microservices") in a Swarm cluster, and Docker Compose is the tool most commonly used to deploy them. Kubernetes: applications are deployed as a combination of pods, deployments, and services (or "microservices").
- Auto-scaling. Docker Swarm: auto-scaling is not available, either in classic Docker Swarm or in swarm mode. Kubernetes: auto-scaling is available; it uses a simple number-of-pods target which is defined declaratively through deployments, and a CPU-utilization-per-pod target is available.
- Rolling updates. Docker Swarm: supports rolling updates; at rollout time you can apply rolling updates to services, and the Swarm manager lets you control the delay between deployments to different sets of nodes, updating one task at a time. Kubernetes: the deployment controller supports both "rolling update" and "recreate" strategies, and rolling updates can specify a maximum number of pods unavailable or a maximum number running during the process.
- Networking. Docker Swarm: a node joining a swarm creates an overlay network for services that spans all of the hosts in the swarm, plus a host-only Docker bridge network for containers. By default, nodes in the swarm encrypt overlay control and management traffic between themselves; users can choose to encrypt container data traffic when creating an overlay network. Kubernetes: the networking model is a flat network enabling all pods to communicate with one another; network policies specify how pods communicate with each other, and the flat network is typically implemented as an overlay.
- Health checks. Docker Swarm: health checks are limited to services; if a container backing a service does not reach the running state, a new container is started. Users can embed health-check functionality into their Docker images using the HEALTHCHECK instruction. Kubernetes: health checks are of two kinds: liveness (is the app responsive) and readiness (is the app responsive, but busy preparing and not yet able to serve).
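To make that last row concrete, here is a hedged sketch of both styles; the endpoints, names, and timings are illustrative, and a kubectl-configured cluster is assumed for the second part.

# Docker: attach the same kind of check at run time (mirrors what a HEALTHCHECK
# instruction in the image would do)
docker run -d --name web \
  --health-cmd='curl -f http://localhost/ || exit 1' \
  --health-interval=30s --health-retries=3 \
  nginx
docker inspect --format '{{.State.Health.Status}}' web   # healthy / unhealthy / starting

# Kubernetes: liveness and readiness probes declared on the container spec
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:               # failure restarts the container
      httpGet: { path: /, port: 80 }
      periodSeconds: 10
    readinessProbe:              # failure keeps the pod out of Service endpoints
      httpGet: { path: /, port: 80 }
      initialDelaySeconds: 5
EOF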
Out of the box, Kubernetes also provides a basic logging mechanism to pull aggregated logs for the set of containers that make up a pod (a short sketch follows).
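A hedged sketch of that built-in log access; the pod name reuses the earlier example, and the container name and label selector are illustrative.

$ kubectl logs nginx-app-5jyvm                        # logs from a single pod
$ kubectl logs nginx-app-5jyvm -c <container-name>    # a specific container in the pod
$ kubectl logs -l run=nginx-app --all-containers      # aggregate across pods matched by label
$ kubectl logs -f nginx-app-5jyvm                     # follow (stream) the log output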