
12 DevOps Skills That A DevOps Engineer Should Master

Are you an engineer looking to excel at DevOps? Is your team planning to adopt DevOps? You have come to the right place. In this article, we will discuss the key DevOps engineering skills that make you an expert in this space. DevOps is all about breaking down traditional silos and creating a culture of collaboration between business, operations, and development teams. Along with the cultural aspect, DevOps also emphasizes automating any repetitive, error-prone task using a spectrum of modern engineering tools. This article will give you insight into the 12 specific skills one needs to master in this space.

One thing to keep in mind when you talk about a "DevOps engineer" is that it is not a role but a skill set, one that needs to be mastered by every software developer and not just operations folks.

"DevOps, everyone is doing it, few have mastered it" - Mirco Hering, author of "DevOps for the Modern Enterprise". His point is that nowadays everyone is adopting and working in a DevOps way without understanding much about the key concepts and skills needed; only a few are doing it right. What started as a great idea will end up as a mere buzzword if we don't understand the 12 DevOps engineering skills.

The 12 DevOps engineering skills are:

1. Linux fundamentals and scripting

Linux is an open-source operating system created by Linus Torvalds in 1991, and there has been no looking back since. Linux is now the most widely used operating system for servers and is generally considered more secure than alternatives such as Windows. Most companies run their environments on Linux-based systems.

Many DevOps tools in the configuration management space, such as Chef, Ansible, and Puppet, have architectures based on Linux master nodes. These tools help provision and manage infrastructure automatically with the help of a scripting language such as Ruby or Python.

Linux fundamentals and scripting know-how are a must to get you started with infrastructure automation, which is a key concept in DevOps. A small sketch of the kind of task scripting can automate follows.
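As a minimal, hypothetical example, here is the sort of repetitive operations task a few lines of shell can automate (the service name and log path are placeholders):

    #!/usr/bin/env bash
    # Watchdog sketch: restart a service if it has stopped, and log the event.
    SERVICE="nginx"                          # placeholder service name
    LOG="/var/log/${SERVICE}-watch.log"      # placeholder log path
    if ! systemctl is-active --quiet "$SERVICE"; then
        echo "$(date): $SERVICE is down, restarting" >> "$LOG"
        systemctl restart "$SERVICE"
    fi

Dropped into cron, a script like this replaces a manual, error-prone check; the same idea scales up into the configuration management tools mentioned above.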
2. Knowledge of various DevOps tools and technologies

DevOps is implemented with the help of tools, but in many cases DevOps is misunderstood as being the tools themselves. We should always remember Scott Hanselman's great quote: "The most powerful tool we have as developers is automation."

The main aim of DevOps is to add value to the customer at an increased pace. Tools are chosen to serve this purpose, never for the sake of using them. Technical knowledge of the tools is an added advantage when you embrace DevOps.

DevOps tools fall broadly into 10 categories:

- Collaboration tools
- Application lifecycle management and issue tracking tools
- Cloud/IaaS/PaaS/serverless tools
- Source control management tools
- Package managers
- Continuous integration and continuous delivery tools
- Continuous testing tools
- Release orchestration tools
- Monitoring tools
- Analytics tools

Each of these categories contains more than 10 tools. The right tool must be chosen in each category based on client requirements and the project environment. The main point to remember is that a tool should add value to the customer, either by reducing delivery time or by increasing the quality of the deliverables.

3. Continuous integration and continuous delivery

A good understanding of the continuous integration and continuous delivery approaches helps to deliver a high-quality product to clients at a faster pace.

Continuous integration is one of the best practices of the DevOps community: whenever a developer finishes a piece of functionality or a user story (in Scrum terms), he or she integrates the new code with the existing code base straight away. This saves a lot of the time traditionally spent in the integration phase of a project, and it helps detect integration issues at an early stage, making the developer's life easier.

Continuous delivery is an extension of continuous integration in which the newly integrated code is made ready for deployment automatically, with little or no human intervention. In a waterfall model, the development team releases the new code to the testing team, who then take it forward; this usually takes a couple of days. Such delays can be avoided by automating the handover and testing process, making the code ready for deployment quickly. A minimal sketch of such a pipeline follows this section.

Continuous deployment is the next step in automating the delivery pipeline of an application: the new code is deployed to the production environment automatically. Some software companies do not consider continuous deployment a best practice, as they see it as a place where defects can creep in.
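As a minimal sketch (the build and test script names are hypothetical), a CI step that runs on every push might look like this:

    #!/usr/bin/env bash
    # CI sketch: integrate, build, test, and package on every change.
    set -e                         # stop at the first failure
    git pull origin main           # bring in the latest integrated code
    ./build.sh                     # hypothetical build script
    ./run_tests.sh                 # hypothetical test suite; failures surface early
    # Package a versioned, deployable artifact from the build output.
    tar -czf "app-$(git rev-parse --short HEAD).tar.gz" dist/

In a real setup a CI server (Jenkins, GitLab CI, and so on) triggers these steps automatically; the point is that the artifact produced at the end is always ready for delivery.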
4. Infrastructure as Code (IaC)

Infrastructure as Code is one of the more recent best practices in the DevOps community. It provisions and manages infrastructure by abstracting it into a high-level description, so all the practices we apply to source code, such as version control, change tracking, and storage in repositories, can be applied to the application's infrastructure as well. With the emergence of IaC, the days of manually configured infrastructure and ad hoc infrastructure shell scripts are gone. A person who knows how to develop infrastructure as code creates less error-prone, more consistent, and more reliable infrastructure.

5. DevOps key concepts

DevOps is a culture in which business, development, and operations teams collaborate, breaking down the traditional silos. The key value is a cross-functional team in which everyone knows what each member does and any member can take up the work of another, giving better collaboration within the team and delivering a high-quality product to the customer. Since there are no silos anymore, the time previously wasted handing code between teams such as testing and operations is reduced, increasing the pace of delivery.

Another key concept is automating everything. This produces a higher-quality product for the customer by reducing human error.

6. Soft skills

DevOps emphasizes culture and people more than tools and practices, so people skills are a must-have when adopting DevOps. The next important value is trust among team members. Trust is enabled by active, effective communication, which creates a positive atmosphere in the team. This in turn is reflected in the quality of the deliverables and in finishing work on time.

7. Customer-first mindset

DevOps emphasizes a customer-first mindset. Everyone who adopts DevOps should make decisions with this in mind: no activity should be performed that does not add value to the customer.

8. Security skills

DevOps is all about speed, automation, and quality. As we increase speed, vulnerabilities can be introduced into the code at a faster pace as well. DevOps practitioners should be able to write code that is protected against various attacks. This has led to DevSecOps thinking, in which security is incorporated from the beginning rather than stitched on at the end.

9. Flexibility

According to Heraclitus, "The only thing that is constant is change." A team that embraces DevOps must be equipped to adopt change. All team members should be able to accept a requirement change or a role change; they must be comfortable working in integration, testing, release, deployment, and so on, and should have the technical know-how for each. They must be aware of modern engineering tools and be equipped to work with different tools as requirements dictate. Anant Agarwal, CEO of edX, summarises this flexibility as follows: "It's hard to learn something that seems to evolve as quickly as the lessons are taught. Self-learners are the perfect candidates for embracing and pursuing DevOps adoption, as it requires a roll-up-your-sleeves, trial-and-error, do-it-yourself, continuous learning approach."

10. Collaboration

Collaboration is one of the most important values in DevOps. A team that adopts DevOps is a cross-functional team in which members from business, operations, and development co-exist. Active collaboration is a key skill for every team member, and there should be transparency among them: everyone should know what is happening in the team and who is responsible for each task.

11. Decision-making

Decisiveness is one of the key qualities employers look for. The ever-changing nature of the code in a DevOps team should be handled by people who are quick to make decisions, enabling quick delivery and deployment of new code. Faster deployments give faster returns to the customer and provide immediate feedback from end users, which leads to customer satisfaction.

12. Agile engineering

DevOps was introduced in 2008 by Patrick Debois and Andrew Clay Shafer following a discussion about agile infrastructure, so DevOps is heavily rooted in agile principles and values. The Agile Manifesto defines 4 values and 12 principles, and every DevOps practitioner needs an in-depth understanding of these philosophies. Practical knowledge of agile practices such as test-driven development and behavior-driven development also helps to make a great DevOps practitioner.

Conclusion

DevOps is all about breaking down silos so that development, operations, and business teams collaborate to deliver a high-quality product quickly. Every member of a team that adopts DevOps should have all 12 DevOps engineering skills, focusing on customer satisfaction rather than local optimizations. To summarise, a DevOps engineer should be a great team player, technically strong, with a good knowledge of DevOps tools, and able to adapt to change. This subtle but important combination of attributes is what makes a professional a DevOps engineer, because at the end of the day customer satisfaction is the key to running a successful business.

DevOps Engineer Salary

Before we start discussing the salary of a DevOps Engineer, let's understand what DevOps is. DevOps is a software development strategy that bridges the gap between developers and IT staff. It enables organizations to release small features very quickly and to incorporate the feedback they receive just as quickly. Its benefits are fewer software failures and a shortened lead time between fixes.

As a DevOps Engineer, you gain a clear understanding of the software development lifecycle along with the various automation tools used to build digital (CI/CD) pipelines. Glassdoor ranks DevOps Engineer second among the 50 Best Jobs in America. But who can become a DevOps Engineer? The journey is fairly long, but you can become a DevOps Engineer even as a fresher, and working as a developer, operations person, or quality assurance engineer also prepares you for the role. The 2019 Tech Salary Report released by Dice shows that DevOps Engineer salaries rank in the top 5, with an average of $111,683. According to experts, the popularity of DevOps was expected to reach its peak in 2019; Google Trends data for the term "DevOps" in 2019 supports this growth hypothesis.

Salary of a DevOps Engineer

With all the skills and expertise of a DevOps Engineer, it is important to know where you stand in terms of salary. The average salaries in the major countries are as follows:

| Country | Currency | Salary of a DevOps Engineer (per annum) |
|---|---|---|
| India | INR | 6,42,692 |
| UK | Pound Sterling | 40,883 |
| USA | USD | 92,054 |
| Canada | CAD | 76,357 |

Experience also plays a pivotal role in determining your salary. The experience-wise salaries of DevOps Engineers across the globe are:

| Country | Currency | Entry-level | Mid-career | Experienced |
|---|---|---|---|---|
| India | INR | 3,52,233 | 10,47,567 | 16,12,255 |
| UK | Pound Sterling | 29,598 | 48,561 | 60,629 |
| USA | USD | 72,966 | 100,576 | 117,161 |
| Canada | CAD | 61,191 | 82,277 | 89,198 |

Further, the minimum and maximum salaries DevOps Engineers earn in these countries are:

| Country | Currency | Minimum salary (per annum) | Maximum salary (per annum) |
|---|---|---|---|
| India | INR | 3,07,000 | 20,00,000 |
| UK | Pound Sterling | 26,000 | 67,000 |
| USA | USD | 63,000 | 134,000 |
| Canada | CAD | 55,000 | 99,000 |

Company-wise salary in various countries

Having seen the averages as well as the minimum and maximum salaries in the major countries, let's look at company-wise salaries. First, the top companies hiring DevOps Engineers in India and the salaries they pay:

| Company name | Average salary (per annum in INR) |
|---|---|
| Tata Consultancy Services | 4,92,545 |
| Accenture | 6,40,043 |
| IBM | 4,35,534 |
| Cognizant Technology Solutions | 4,43,759 |
| Amazon | 11,15,053 |

The average salaries paid by the top companies hiring DevOps Engineers in the United Kingdom:

| Company name | Average salary (per annum in £) |
|---|---|
| Cloudreach | 35,222 |
| Accenture | 33,198 |
| KPMG | 58,575 |
| ClearScore | 55,827 |
| Sky | 44,642 |

The top companies hiring DevOps Engineers in the United States and the salaries offered:

| Company name | Average salary (per annum in $) |
|---|---|
| IBM | 112,432 |
| Cognizant Technology Solutions | 96,381 |
| Accenture | 107,086 |
| Amazon | 110,779 |
| Capital One | 100,344 |

The top companies hiring DevOps Engineers in Canada and the salaries offered:

| Company name | Average salary (per annum in CA$) |
|---|---|
| IBM | 104,170 |
| Cloudreach | 74,068 |
| SAP | 105,500 |
| Blackberry | 89,500 |
| Global Relay | 83,544 |

Why are DevOps Engineers paid so highly?

DevOps is one of the most talked-about and popular terms in IT.
As the charts above show, apart from being in demand, DevOps can set you on a promising career path. Let's look at a few reasons why DevOps Engineer is such a sought-after job:

1. High ROI

Have you ever wondered why companies are ready to pay DevOps Engineers high salaries? The reason is high return on investment: implementing DevOps helps an organisation increase its annual profit, so companies actively search for such engineers.

2. DevOps will lead the future

Reports indicate that DevOps is going to remain an essential part of IT projects, which further drives hiring demand: experts believe it will become even more vital for businesses in the future.

3. It's cloud-based

In an era when most applications are moving to cloud-based platforms, it is important to have DevOps experts by your side. The cloud and the infrastructure processes applied through DevOps are complementary, and organisations need proper processes in place to successfully move applications to the cloud. This is the core reason IT projects and businesses look to hire DevOps Engineers.

4. It improves efficiency

DevOps focuses on increasing efficiency. Once you apply its practices and ship updates more frequently, teams can interact with applications easily and changes are confirmed as fast as possible, which reduces reputation-threatening errors.

5. It keeps you competitive

Another reason for the demand is that DevOps is capable of pushing an organisation up the list: from IT-based companies to business application companies, DevOps keeps delivery going without fail.

On a concluding note

To conclude, a DevOps Engineer is equipped with the skills of testing, building, integrating, coding, and deployment, along with problem-solving skills. As a DevOps Engineer you need to multitask and handle challenges arising from multiple roles in order to justify the designation. It is advisable to opt for DevOps training with a reputed training provider and get DevOps certified to explore its benefits. The skills and expertise you possess as a DevOps Engineer also make you eligible for a promising career with a good salary. This blog should give you a clear idea of the earning prospects around the globe, as well as the top companies hiring DevOps Engineers. All the best for your DevOps career!

Docker vs Virtual Machines (VMs)

Let's have a quick warm-up on resource management before we dive into the discussion of virtualization and Dockers. In today's multi-technology environments, it is inevitable that we work on different software and hardware platforms simultaneously. The need to run many different machines (desktops, laptops, handhelds, and servers) with customized hardware and software requirements has given rise to a whole new world of virtualization in the IT industry.

What does a machine need?

Each computing environment (machine) needs its own set of hardware resources and software resources. As more and more machines are needed, building and administering many such stand-alone machines is not only cumbersome and time-consuming but also adds to cost and energy use. A better idea is to run a high-powered, scalable server that consolidates all the hardware and software requirements in one place and distributes resources to many machines over a network. That saves time, resources, energy, and revenue. These gigantic servers are stored in a data warehouse called a datacenter. In this setup, a single server serves and shares resources and data among multiple client machines.

How to manage huge data: servers

With the Internet of Things booming, information systems are overflowing with data, and handling that data needs more system resources, which means more dedicated servers.

The many-servers challenge: running several dedicated servers for specific services such as web, application, or database services is difficult to administer, consumes more energy, resources, and manpower, and is highly expensive. In addition, the resource utilization of such servers is very poor, resulting in waste. This is where simulating different environments and running them all on a single server is a smart choice, rather than running multiple physically distinct servers.

Virtualization

What is virtualization? The single-server implementation above can be defined as follows: virtualization is a technique used to make a single infrastructure resource (hardware and software) act as many, providing multiple functionalities or services without the need to physically build, install, and configure separate machines. In other words, running multiple simulated environments on a single machine without separately installing and configuring them is called virtualization. Technically speaking, virtualization is an abstraction layer that shares the infrastructure resources among various simulated virtual machines without the need to physically set up these environments. Different virtual operating systems run on the same machine and use the hardware of the underlying host.

What is a virtual machine?

The simulated, virtualized environments are called virtual machines, or VMs. A virtual machine is a replica or simulation of an actual physical machine: it acts like a real machine and uses the physical resources of the underlying host OS. In short, a VM is a running instance of a real physical machine; a quick sketch of creating one from the command line follows.
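As an illustrative sketch (the VM name and resource sizes are placeholders), this is how a VM can be defined with the VirtualBox command-line tool on a host that has VirtualBox installed:

    # Create and register a new virtual machine definition.
    VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register
    # Give the VM its share of the host's physical resources (2 GB RAM, 2 CPUs).
    VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2

These commands only carve out resources; an OS still has to be installed inside the VM, which is exactly the per-machine overhead that the rest of this article contrasts with containers.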
Need for virtualization

Now that we have an overview of virtualization, let us examine when we should virtualize and what the benefits are:

- Better resource management and cost-effectiveness: hardware resources are distributed wisely, on a need basis, to different environments; all the virtual machines share the same resources, which reduces resource wastage.
- Quick administration and maintenance: it is easier to build, install, and configure one server than many, and updating a patch on various machines from a single virtualized server is much more feasible.
- Disaster recovery: since all the virtualized machines reside on the same server and are treated as mounted volumes of data files, they are easy to back up. In case of a failure (power failure, network outage, cyber-attack, failed test code, etc.), VM snapshots are used to recover the running state of the machine, and the whole setup can be rebuilt within minutes.
- Isolated, independent, secure test environments: virtualization provides an isolated, independent test environment for legacy code, a vendor-specific product, a beta release, or even corrupt code, without affecting the main hardware and software platform. (This is a contradictory statement, though; more on this under types of virtualization.) Test environments such as dev, UAT, pre-prod, and prod can easily be created, tested, and discarded.
- Easy scalability and upgradability: building more simulated environments simply means spinning up more virtual machines, and upgrading VMs is as simple as running a patch across all of them.
- Portability: virtual machines are lightweight compared to physically running machines; a VM that includes its own OS, drivers, and installation files is portable to any machine, and one can access the data virtually from any location.

Implementation

a) What is a hypervisor, and what are its types?

As discussed in the previous section, virtualization is achieved by means of a virtualization layer on top of the hardware or software resources. This abstraction layer is called a hypervisor; a hypervisor is a virtual machine monitor (VMM). There are two types of hypervisors:

- Type-1, or bare-metal, hypervisors are installed directly on the system hardware, abstracting and sharing the hardware components with the VMs.
- Type-2, or hosted, hypervisors are installed on top of the system's bootable OS, called the host OS; this hypervisor abstracts the system resources visible to the host OS and distributes them among the VMs.

Both have their own role to play in virtualization.

b) Comparing hypervisor types

| Type-1 (bare-metal) hypervisor | Type-2 (hosted) hypervisor |
|---|---|
| Installed directly on the infrastructure; OS-independent and more secure against software issues. | Installed on top of the host OS; more prone to software failures. |
| Better resource flexibility: direct access to the hardware (hard-drive partitions, RAM, embedded cards such as NICs); provides more flexibility and scalability to the VMs and assigns resources on a need basis. | Limited resource allocation: access only to the resources exposed by the host OS; VMs have limited access to the hardware allocated and exposed by the host OS. |
| Single point of failure: a compromised VM may affect the kernel, so extra security layers are needed. | A compromised VM may affect only the host OS; the kernel remains unreachable. |
| Low latency due to the direct link to the infrastructure. | Higher latency, as all VM requests pass through the host OS layer to access system resources. |
| Generally used on servers. | Generally used on small client machines. |
| Expensive. | Less expensive. |

Type-1 hypervisors on the market: VMWare ESX/ESXi, Hyperkit (OSX), Microsoft Hyper-V (Windows), KVM (Linux), Oracle VM Server.

Type-2 hypervisors on the market: Oracle VM VirtualBox, VMWare Workstation, Parallels Desktop for Mac.

Types of virtualization

Based on which resource is virtualized (server, storage device, operating system, or network), virtualization is classified as follows:

- Desktop virtualization: the entire desktop environment is simulated and distributed from a single server. Desktop virtualization lets administrators manage, install, and configure similar setups on many machines, so upgrading all machines with a single patch or security check becomes easier and faster.
- Server virtualization: many dedicated servers are consolidated into a single server that provides multi-server functionality; many virtual machines can be built up sharing the same underlying system resources (storage, RAM, disks, CPU).
- Operating system virtualization: this happens at the kernel level, allowing one machine to boot multiple operating systems such as Windows and Linux side by side.
- Application virtualization: applications are packaged and stored in a virtual environment and distributed across different VMs; for example, Microsoft applications such as Excel, Word, and PowerPoint, or Citrix applications.
- Network functions virtualization: physical network components such as NIC cards, switches, routers, servers, hubs, and cables are consolidated in a single server and used virtually by multiple machines without installing them on every machine.

Virtualization is one of the building blocks and driving forces behind cloud computing, which provides virtualized, need-based services; this has given the concept of virtualization a real uplift. The main cloud computing models/services are:

- SaaS (Software as a Service): end-user applications are maintained and run by service providers and easily distributed and used by end users without having to install them. Top SaaS providers: Microsoft (Office suite, CRM, SQL Server databases), AWS, Adobe, Oracle (ERP, CRM, SCM), Cisco's Webex, GitHub (git hosting web service).
- PaaS (Platform as a Service): the computing infrastructure (hardware and software) is maintained and updated by the service provider, and users just run their product on this platform. Top PaaS providers: AWS Elastic Beanstalk, Oracle Cloud Platform (OCP), Google App Engine.
- IaaS (Infrastructure as a Service): provides infrastructure such as servers, physical storage, networking, and memory devices; users build their own platforms with a customized operating system and applications. Key IaaS providers: Amazon Web Services, Microsoft Azure, Google Compute Engine, Citrix. A small sketch of IaaS in action follows this list.
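As a small, illustrative sketch of the IaaS model (the AMI id is a placeholder, and the AWS CLI must already be configured with credentials), a server can be provisioned from the command line instead of being physically built:

    # Rent a virtual server from an IaaS provider rather than racking hardware.
    # The image id below is a placeholder machine image.
    aws ec2 run-instances \
        --image-id ami-0abcdef1234567890 \
        --instance-type t2.micro \
        --count 1

A command like this, kept in version control, is also a first step toward the infrastructure-as-code practices discussed earlier.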
Conclusion: we now have a fair understanding of the types of virtualization and how they are implemented.

Containerization

Though virtualization has its pros, it has certain downsides too:

- Not all systems can always be virtualized.
- A corrupt VM is sometimes contagious and may affect other VMs, or the kernel in the case of a Type-1 (bare-metal) hypervisor.
- Virtual disks suffer latency, as a higher number of VMs increases the payload on CPU resources.
- Performance can be unstable.

An alternative approach that overcomes these flaws is to containerize the application together with its run-time environment.

What is containerization?

Containerization is OS-level virtualization in which the entire build of an application, along with its run-time environment, is encapsulated or bundled into a package. These packages are called containers. Containers are lightweight virtualized environments, independent of the underlying infrastructure, both hardware and software. The run-time environment includes the operating system, binaries, libraries, configuration files, and other applications.

What is Docker?

Docker provides an excellent framework for containerization and allows you to build, ship, and run distributed applications on multiple platforms. The Docker framework is set up by installing the Docker engine on the host OS and starting a docker daemon (background process) that manages the virtual containers.

An instruction file called a Dockerfile is written with a set of system commands that change the filesystem: add, copy, or delete commands, run commands, utility installations, system calls, and so on. This Dockerfile is built and packaged along with its run-time environment as an executable file called a docker image, and the docker daemon runs these images to create docker containers. A docker container is thus a run-time instance of an image, and it is fair to say that many images (layers of instruction files) make up a container.

Docker containers are compactly packaged, and each container is well isolated. We can run, start, stop, attach to, move, or delete containers, since they run as services on the host OS.

Each image is made up of layers, each image building on top of the one below it with the customized command changes we make. Every time we change the filesystem, the change is encapsulated in a new filesystem layer stacked above the parent image. Only changed layers are rebuilt; the remaining unchanged image layers are reused. The docker commands ADD, RUN, and COPY create new layers with an increased byte size; the other instructions simply add a new layer of zero-byte size. These layers are reused to build new images, which makes builds faster and images lightweight. This layered approach, in which every change to an image becomes a new layer, also makes it possible to version-control docker images.

The overall flow is: code -> package -> build image -> push to a registry hub -> download/pull the image -> run a container. A minimal Dockerfile sketch follows.
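As a minimal, hypothetical sketch (the script name is a placeholder), a Dockerfile and the commands to build and run it might look like this:

    # Dockerfile: each instruction below produces an image layer.
    FROM ubuntu:14.04                      # parent image layer
    COPY myApp.sh /usr/src/app/myApp.sh    # COPY adds a layer containing the script
    RUN chmod +x /usr/src/app/myApp.sh     # RUN records the filesystem change as a layer
    CMD ["/usr/src/app/myApp.sh"]          # default process when a container starts

    # Build the image and run it as a container:
    docker build -t myapp:v1 .
    docker run --rm myapp:v1

If only myApp.sh changes, only the COPY layer and the layers above it are rebuilt; the ubuntu:14.04 parent layer is reused from the cache.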
Let's consider the docker container divyabhushan/learn_docker, hosted on Docker Hub. The latest tagged image is centOS_release1.2.

What is the container environment?

- Base OS: centos:7
- Utilities: vim, yum, git
- Apps/files: Dockerfile, myApp.sh, runtests.sh, data and other supporting files
- Git source code: dockerImages, downloaded with:

    git clone https://github.com/divyabhushan/DockerImages_Ubuntu.git

What does the container do? The container launches "myApp.sh" in an Ubuntu:14.04 environment, runs some scripts along with a set of post-test suites in the container, and saves the output log file.

How to modify and build your own app:

Step 1: pull
  1.1: Pull the docker image
  1.2: Run the image to create a container, and exit
Step 2: modify
  2.1: Start the container
  2.2: Attach to the container and make some changes
Step 3: commit
  3.1: Examine the history logs and changes in the container
  3.2: Commit the changes in the container
Step 4: push
  4.1: Push the new image to Docker Hub

Let us see the steps in action.

Step 1: pull the docker image onto your machine:

    docker pull divyabhushan/learn_docker:myApp_ubuntu_14.04

View the image on the system:

    docker images

Run the image to create a container (0a6f949131a6 is the id of the image just pulled), run a command in the Ubuntu container, and exit; the container stops when you exit:

    docker run -it --name ubuntu14.04 0a6f949131a6

View the stopped container with:

    docker ps -a

Step 2: modify. Start the container:

    docker start 7d0d0225778c

Now the container is listed as a running process. Attach to the container and make some changes:

    docker attach 7d0d0225778c

Edit the git configuration file and the myApp.sh script. The container is now modified and stopped.

Step 3: commit. The changes made inside the container filesystem can be viewed with the docker diff command:

    docker diff 7d0d0225778c

Commit the changes in the container (usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]):

    docker commit -m 'new Ubuntu image' 7d0d0225778c divyabhushan/learn_docker:ubuntu14.04_v2

The new image is created and listed.

Step 4: push the new image to Docker Hub:

    docker push divyabhushan/learn_docker:ubuntu14.04_v2

Point to note: just the latest commit change layer '50a5ce553bba' was pushed, while the other layers were reused. The latest tagged image can now be pulled on other machines and run to recreate the same container environment.

Conclusion: an image was pulled and run to create a container that replicates the environment. The container was modified, and the new changes were committed to form a new image. The new image was pushed back to Docker Hub and is now available as a new tag, ready to be pulled by other machines.

Difference between Dockers and virtual machines

| Parameter | VMs | Dockers |
|---|---|---|
| Architecture | Hardware-level virtualization; each VM has its own copy of an OS. | Software-level virtualization; containers have no OS of their own and run on the host OS. |
| Isolation | Fully isolated. | Process- or application-level isolation. |
| Installation | The hypervisor can run directly on the hardware or on the host OS. | The Docker engine is installed on top of the host OS and a docker daemon process is started; there is no separate OS for each container. |
| CPU processing and performance | Slower: a VM contains the entire run-time environment, which has to be loaded every time; uses more CPU cycles and gives unstable performance. | Faster: docker images are pre-built and share host resources, so running an image as a container is lightweight, consumes fewer CPU cycles, and gives stable performance. |
| Hardware storage | More storage space, as each VM is an independent machine (OS); for example, 3 VMs of 800 MB each take 2.4 GB of space. | Containers are lightweight, since they do not need to load an OS and drivers; they run on the host OS as processes. |
| Portability | Dependency on the host OS and hardware makes VMs less portable; importing a VM still requires manual setup of storage, RAM, and network. | Highly portable, since containers are lightweight and have zero dependency on hardware. |
| Scalability and code reusability | Spinning up more VMs still needs administrative tasks such as distributing resources; a new machine puts extra load on system resources, and earlier VMs may need re-managing. Every VM keeps its own copy of resources, so code reusability is poor. | Spinning up new containers simply means running pre-built images as processes on the host OS; containers can also be configured on the fly by passing parameters at run time. A single image can be run to create many containers, encouraging code reuse. |
| Resource utilization | Static allocation wastes resources when VMs are idle or when a VM's requirements grow. | Resources are dynamically allocated and de-allocated on a need basis by the docker engine. |
| Pruning / garbage collection | VMs have no built-in prune mechanism and must be administered manually. | Docker images and containers can be pruned, freeing a sensible amount of storage, memory, and CPU cycles. |
| New environment | Creating a new VM from scratch is a tedious, repetitive task: installing a new OS, loading kernel drivers, and other tools and configurations. | Package the code and dependency files, build an image, and run it to create a new container; or use an existing base image (such as scratch on Docker Hub) to create more containers on the go. |
| Web-hosted hub | No web-hosted hub for VMs. | Docker Hub provides an open, reliable, trusted source of pre-built images that can be downloaded to run new containers. |
| Version control (backup, restore, track history) | VM snapshots are not very user-friendly and consume more space. | Docker images are version-controlled; every delta in a container can be viewed (docker diff), and any change to an image is stored as a new layer. References to older images save build time and space. |
| Auto-build | Automating VM creation is not very feasible. | Docker images can be auto-built from every source code check-in to GitHub (automated builds on Docker Hub). |
| Disaster recovery | Tedious to recover from VM backup files. | Easier to restore docker images, just like git source files, when images are version-controlled; backup images only have to be run to create containers. |
| Updates | Every VM has to be updated with the release patch. | A single image is updated, rebuilt, and distributed across multiple platforms. |
| Memory usage and speed | Slower: an entire snapshot of a machine and its OS is loaded into cache memory. | Real-time and fast: images are pre-built, and only the instance (a container) has to run as a process, using memory like an executable. |
| Data integrity | VM behavior may change if a dependency reaches beyond the VM boundary (for example, an app that depends on the production host's network settings). | Apps behave the same in any environment. |
| Security | More secure: a failure inside a VM may reach its guest OS but not the host OS or other VMs, though Type-2 hypervisors do carry a risk of kernel attack. | Less secure: if a container is compromised, the underlying OS, and hence all containers, may be affected, since they share the host kernel; the OS kernel may also be at risk. |
| Key providers | Red Hat KVM, VMWare, Oracle VM VirtualBox, Microsoft Hyper-V, Citrix XenServer. | Docker, Google Kubernetes Engine, AWS Elastic Container Service. |
| Data authentication | Lots of software licenses. | Docker maintains in-built content trust to verify published images. |

When to use a VM or Docker

When the need is an isolated OS, go for VMs. For a hardware- and software-independent, isolated application that needs fast distribution to multiple environments, use Docker.

Docker use case: a database application along with its database. Consider the docker image "Oracle WebLogic Server" on Docker Hub. This image is a pre-built Oracle WebLogic Server run-time environment, including Oracle Linux 7 and Oracle JDK 8, for deploying Java EE applications. To create server configurations on any machine, just download this image and run it to create and start a container; there is no need to install and configure JDK, Linux, or any other run-time environment.

When not to use Docker: when the application depends on utilities outside the docker container. For example, suppose code is developed on a dev machine with macOS as the base OS but needs certain firewall settings on, say, Ubuntu. How can the code be tested against the production Ubuntu firewall while running in a docker container on macOS? Solution: install virtualization software on the macOS host and create a VM with Ubuntu as its OS (the same as the production environment); configure the desired firewall settings in the Ubuntu VM, then import the test code into Ubuntu and test.

Use a VM for embedded systems programming, where the VM connects to the system's device drivers, controllers, and kernel.

Virtualization used along with Docker: extending the previous scenario, suppose you also want to test your Python application in the Ubuntu VM without having to set up the Python executable and its libraries and binaries. All you have to do is install the Docker engine for Ubuntu and pull the Python image from Docker Hub (the tag is the Python version; choose the appropriate one):

    docker pull python:2.7

Either write a Dockerfile to import/copy the entire source code into the Python environment, or directly run the image, passing the script path as below:

    docker run -it --name my-python-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2.7 python my-application.py

Command options: -v bind-mounts a volume (here, the present working directory onto /usr/src/myapp inside the container); -w sets the working directory inside the container.

Moreover, you can test your Python code against more than one version by pulling different Python images, running them to create different containers, and running your app in each container; a sketch follows. What's exciting here is that once the code has been tested in each Python environment, you can act on the test results and drop the containers, deploying to production only after the code has been tested against the various Python versions.
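As a minimal sketch (my-application.py is a placeholder script in the current directory), testing against two Python versions is just a loop over image tags:

    # Run the same script in two throwaway containers, one per Python version.
    for v in 2.7 3.7; do
        docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp \
            python:$v python my-application.py
    done

The --rm flag removes each container when it exits, matching the test-and-discard workflow described above.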
Final thoughts

VMs and Dockers are compatible with each other; Dockers are not here to replace virtual machines. Both serve the same purpose of virtualizing computing and infrastructure resources for optimized utilization, and using virtual machines and Dockers together can yield better results.

When you want a fast, lightweight, portable, highly scalable, hardware-independent environment for isolating multiple applications, and security is not the major concern, Docker is the best choice. Use a VM for embedded systems that are integrated with hardware, such as device driver or kernel coding; and in scenarios simulating an infrastructure setup with tight resource control and dependencies on system resources, VMs are the better choice.

Use of Dockers inside a VM: in a CI/CD pipeline scenario, virtualization enables a smooth process flow by letting users concentrate on developing code on a working system that is set up for automated continuous integration and deployment, without having to duplicate the entire setup each time. A virtualized environment, built either as a VM or from a docker image, takes care of the automatic code check-ins, builds, regression testing, and deployments on the server.

Top DevOps Tools You Must Know

In the last decade, for most enterprises, the term DevOps has transformed from just a buzzword into a way of working. The concept of DevOps originated in 2008, following a discussion on agile infrastructure between Patrick Debois and Andrew Clay Shafer, and the idea started to gain momentum in 2009 after the first DevOpsDays in Belgium. What initially began as a practice to bring more efficiency to software infrastructure management has now evolved into a continuous feedback model that has redefined every aspect of software development, from requirements engineering to deployment. With this change evolved new frameworks, practices, and tools rooted in the core values of lean and agile. This article discusses in detail the various tools that evolved during the DevOps movement; readers will get a comprehensive understanding of what these tools are and where to apply them in their day-to-day DevOps journey.

1. What is DevOps?

DevOps is a culture in which active collaboration between development, operations, and business teams is achieved. It is not all about tools: the purpose of DevOps in an organisation is to create value for the end customer while respecting every team member, and tools are only aids in building this culture. DevOps increases an organization's capability to deliver high-quality products and services at a swift pace, automating all processes from the build to the deployment phase of an application. There are many tools available in the market to help us achieve this.

2. What are the DevOps tools?

DevOps tools fall into the following categories:

a. Collaboration tools

DevOps teams rely on regular feedback and constant communication, so the traditional email mechanism becomes less effective. DevOps teams therefore rely on more integrated collaboration suites that support continuous communication and feedback loops. Some of these new-generation collaboration tools include Slack, Teams, and CA Flowdock.

1. Slack

Slack is a messaging tool for teams, providing a common place for all communications. We can set up different channels for different kinds of work, and voice and video call options are also available. Atlassian and Slack have formed a partnership: Atlassian is discontinuing its own collaboration tools, Hipchat and Stride, and providing migration to Slack.

Availability: a free version with limited features is available.

2. CA Flowdock

CA Flowdock is another collaboration tool, from CA Technologies. It brings all conversations, chats, work items, and so on to one place, making it easier to prioritize work and solve problems.

Availability: CA Flowdock is free for teams of up to 5 members, and free for non-profit organizations and student projects.

3. Teams

Teams is a unified communication platform from Microsoft. Teams combines workplace chat, video meetings, file storage, and application integration.
The service also integrates with the company's existing Office 365 productivity suite and features extensions that integrate with non-Microsoft products.

Availability: Teams is free for a small number of users.

| Sl no | Tool name | Pros | Cons | Availability |
|---|---|---|---|---|
| 1 | Slack | Intuitive; SaaS product; good integration with other tools | The video conferencing feature is not as strong as its competitors' | Freemium |
| 2 | CA Flowdock | Easy to configure | Integration with tools beyond CA's needs improvement | Freemium for small teams |
| 3 | Teams | One-stop shop: integrates file sharing, messaging, meetings, and other tools | Still early, and can be a little buggy | Freemium for small teams |

b. Top application lifecycle management and issue tracking tools

ALM and planning tools help team members plan their iterations by constantly getting feedback from customers and prioritizing it. They help teams visualise the work in hand, share plans, and track progress; they make sure all team members are heard and addressed; and they ensure customer feedback is taken seriously, increasing responsiveness within the team. These tools let teams identify and track dependencies and plan releases and sprints in a systematic way. Issue tracking tools add features such as auto-triaging and assignment. Some of the tools are:

1. JIRA

JIRA is an issue tracking and project management tool from Atlassian, suitable for small or large companies. Simple, flexible Kanban and Scrum boards are available in JIRA. It is not free software.

Availability: pricing varies with the number of users.

2. Mantis Bug Tracker

Mantis BT is an open-source, web-based issue tracker. Its simple dashboard helps assign issues to developers and track their progress, and a built-in time tracking mechanism helps analyse the time a developer spends on an issue.

Availability: a paid hosted version is available.

3. Trello

Trello is a free project collaboration tool. It helps manage projects with simple, easy-to-use boards: tasks are defined as individual cards, which can be moved around to help teams visualise work in progress.

4. CollabNet VersionOne

CollabNet VersionOne is an agile management tool. It enables collaboration between teams at all levels, giving a unified vision for software delivery.

5. Rally

Rally was formerly known as CA Agile Central. It provides a platform to plan, track, and prioritize work collaboratively, improving visibility.

6. OpsGenie

OpsGenie is an incident management tool from Atlassian that helps determine who should respond to events, and it also supports defining collaboration methods such as video conferences. It is free for small teams of up to 5 users.

Availability: a paid version is available, priced by the number of users and add-on features.

7. Pivotal Tracker

Pivotal Tracker is an agile project management tool. It supports public and private projects; private projects are accessible only to collaborators (the default setting), while public projects are available via URL in read-only mode, with edit permissions given only to invitees.
7. Pivotal Tracker
Pivotal Tracker is an agile project management tool. It supports public and private projects. Private projects are accessible only to collaborators and are the default setting. Public projects are available via URL in read-only mode; edit permissions are given only to invitees to the project. Open source software development processes make use of public projects.
Availability: Pivotal Tracker is free for two projects, 2 GB of file storage, and a total of three collaborators; upgrading beyond this is possible only in the paid version.
For more details click here.

8. Azure Board
Azure Board is a tracking tool from Microsoft Azure. It helps to track and plan your projects via Kanban boards, team dashboards, etc. It supports all agile methodologies. Built-in analytics provide information about project progress and status.
Availability: Azure Board is free for up to 5 users and unlimited stakeholders.
Click here to know more about Azure Board.

9. Tasktop
Tasktop is a value stream management tool used to integrate and synchronize development and operations tools. It helps in tracking tasks across different task tracking systems.
Availability: Tasktop is a paid tool.
Learn more about Tasktop here.

10. Kanboard
Kanboard is an open-source project management tool. It is known for its super easy installation, great visualisation of project tasks, and drag-and-drop project management.
Availability: A free version of Kanboard is available.
Click here for more details about Kanboard.

A quick comparison of these ALM and issue tracking tools:
1. JIRA. Pros: widely used; enterprise-grade. Cons: learning curve; complex to configure. Availability: Paid.
2. Mantis Bug Tracker. Pros: free with a good community; a paid hosting option is available. Cons: needs experts to configure; good mainly for defects and simple projects. Availability: Free & open source.
3. Trello. Pros: easy to use; easy to configure. Cons: not ideal for large teams/programs. Availability: Freemium.
4. CollabNet VersionOne. Pros: widely used; enterprise-grade; rich features. Cons: learning curve; less intuitive. Availability: Paid.
5. Rally. Pros: enterprise-grade; easy to set up. Cons: less intuitive and complex to learn. Availability: Paid.
6. OpsGenie. Pros: rich features for issue tracking and on-call management. Cons: features are limited to issue tracking. Availability: Freemium.
7. Pivotal Tracker. Pros: rich feature set for tracking; intuitive and easy to use. Cons: integrability with other tools could be better. Availability: Freemium.
8. Azure Board. Pros: integrates well with the Microsoft toolchain. Cons: lacks richness of feature set compared with other enterprise-grade tools in the same segment. Availability: Paid.
9. Tasktop. Pros: good for value stream management. Cons: integrability with other tools could be better. Availability: Paid.
10. Kanboard. Pros: simple to use. Cons: limited feature set; not ideal for large teams/programs. Availability: Free & open source.
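Most of these trackers also expose REST APIs so that pipelines can raise issues automatically. As a hedged sketch against JIRA's REST API (the domain, project key, and credentials below are hypothetical placeholders):

$ curl -u user@example.com:API_TOKEN \
    -X POST -H 'Content-Type: application/json' \
    --data '{"fields":{"project":{"key":"OPS"},"summary":"Nightly deploy failed","issuetype":{"name":"Bug"}}}' \
    https://your-domain.atlassian.net/rest/api/2/issue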
c. Cloud/IaaS/PaaS/Serverless tools
Cloud, along with Infrastructure as a Service and Platform as a Service, provides a platform for developing, testing and deploying applications. Using such services, DevOps teams cut much of the latency involved in acquiring and accessing assets. All major private and public clouds support DevOps tooling, reducing the cost spent on on-premises systems. Some of the platforms are:

1. AWS
Amazon Web Services (AWS) is a cloud services platform offering compute power, database storage, content delivery, and other cloud-related functionalities.
Availability: AWS is an on-demand cloud computing platform where we are charged on a pay-as-you-go basis.
Learn more about AWS here.

2. AWS Lambda
Lambda is a serverless computing platform from Amazon Web Services (AWS). It is a service that manages the computing resources and runs code in response to events.
Availability: We are charged only for the computing time used.
Learn more about AWS Lambda here.

3. Azure
Microsoft Azure is an enterprise-grade cloud computing service that helps in managing applications through Microsoft-managed data centers. It provides software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS).
Click here to learn more about Azure.

4. Google Cloud Platform
Google Cloud Platform, offered by Google, is a suite of cloud computing services. Platform as a service, infrastructure as a service, and serverless computing are provided by GCP.
Click here to learn more about Google Cloud Platform.

5. IBM Cloud
IBM Cloud is a suite of cloud computing services from IBM. It also provides infrastructure as a service (IaaS) and platform as a service (PaaS).
Availability: The Lite version of IBM Cloud is free and allows one instance per plan.
Click here to learn more about IBM Cloud.

6. OpenStack
OpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service, whereby virtual servers and other resources are made available to customers. It is written in Python.
Learn more about OpenStack here.

7. Cloud Foundry
Cloud Foundry is an open source cloud platform, from Pivotal, that helps to develop cloud applications.
Learn more about Cloud Foundry here.

8. Heroku
Heroku is a platform-as-a-service cloud environment that lets developers work entirely in the cloud.
Availability: A free version is available with limited features.
Learn more about Heroku here.

9. OpenWhisk
Apache OpenWhisk is an open source, distributed serverless platform. OpenWhisk manages the infrastructure, servers and scaling using Docker containers.
Click here for more details about OpenWhisk.

A quick comparison of these cloud platforms:
1. AWS. Pros: enterprise-ready provider. Cons: complex cost structure. Availability: Pay as you go.
2. AWS Lambda. Pros: serverless computing; reduced operational costs. Cons: a limit on concurrent executions, beyond which requests are refused. Availability: Charged for computing time.
3. Azure. Pros: integrates well with many Microsoft tools. Cons: the services provided still need improvement. Availability: Pay for resources used.
4. Google Cloud Platform. Pros: scalable; better load balancing; serverless computing. Cons: currently has fewer services and features compared to AWS or Azure. Availability: Pay as you go.
5. IBM Cloud. Pros: easy setup; consistent performance. Cons: difficulty in scaling. Availability: Free Lite version.
6. OpenStack. Pros: massive scalability; easy implementation. Cons: complex configurations. Availability: Freemium.
7. Cloud Foundry. Pros: supports on-premises and multi-cloud deployment; strong privacy and security. Cons: smaller feature set compared to AWS or Azure. Availability: Paid.
8. Heroku. Pros: advanced continuous integration platform; highly scalable. Cons: less reconfigurability. Availability: Freemium.
9. OpenWhisk. Pros: open event provider system; serverless computing. Cons: not efficient for long-running applications. Availability: Paid per computing time.

d. Top Source Control Management tools
Source control management as a practice stores and tracks the application and infrastructure code. Even the delivery pipelines for an application are nowadays stored in source code repositories. Some of the tools are GitHub, Bitbucket, Subversion, Mercurial, and Rational ClearCase.
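Before looking at the individual tools, here is a minimal sketch of the branch-and-merge workflow they all support (the repository URL and branch name are illustrative):

$ git clone https://github.com/example/app.git && cd app
$ git checkout -b feature/login        # isolate the new work on a branch
$ git add . && git commit -m "Add login form"
$ git push origin feature/login        # publish the branch for review
$ git checkout master
$ git merge feature/login              # integrate once the review passes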
1. GitHub
GitHub is a popular repository hosting service using Git. Git is a free and open source version control system, loved for the ease with which it performs branching and merge operations. Being a distributed version control system adds to its appeal.
Click here for more details.

2. Mercurial
Mercurial is a free distributed version control system. It is very easy to learn compared to Git, though Git's branching feature is more widely loved. Both big and small projects can be handled in Mercurial.
Click here for more details.

3. Bitbucket
Bitbucket is a repository hosting service from Atlassian. It can be used to store source code using either the Mercurial or the Git revision control system. It's free for teams with a maximum of 5 users; a paid version is available for bigger teams.
Learn more about Bitbucket here.

4. Rational ClearCase
Rational ClearCase is a source control management tool from IBM. It helps in the parallel development of software. Software artefacts, whether source code, design documents, or anything else, can be managed by ClearCase. An enterprise version is available.
Learn more about ClearCase here.

5. Subversion
Subversion is a version control system from Apache. It's a free and open source tool. It helps to track all the changes made to files and directories.
Click here to learn more about Subversion.

6. JFrog Artifactory
Artifactory is an artefact repository management tool from JFrog. It's a paid tool. It primarily stores binary files, which are typically the product of our build process.
Click here to learn more about Artifactory.

A quick comparison of these source control tools:
1. GitHub. Pros: easy-to-navigate user interface. Cons: Git itself can be difficult to learn. Availability: Free.
2. Mercurial. Pros: cannot rewrite commit history. Cons: slower network operations. Availability: Free.
3. Bitbucket. Pros: supports Git and Mercurial. Cons: difficult integration with other tools. Availability: Freemium.
4. Rational ClearCase. Pros: integrates with Microsoft Visual Studio. Cons: not suitable for projects with a big code base; difficult to work with. Availability: Paid.
5. Subversion. Pros: easy to learn, even for non-technical users. Cons: slower because of its centralised version control model. Availability: Free.
6. Artifactory. Pros: supports many languages and tools; easy to use. Cons: expensive. Availability: Paid.

e. Top Package Managers
Package managers build or package code with all its metadata, such as the software's name, purpose, version, and all the dependencies it needs to function correctly. They lessen the burden of manual installs, especially in big enterprises where large software packages must be installed. Some of the tools available are:

1. Maven
Maven is an open-source build automation tool from Apache used mainly for Java applications. Its main features are easy and uniform builds; it also keeps aside a parallel space for test code.
Learn more details about Maven here.

2. Gradle
Gradle is another open source build tool. It is built on a Groovy-based domain-specific language and is more like a combination of Ant and Maven.
Learn more about Gradle here.

3. MSBuild
The Microsoft Build automation tool is free and open source, mainly for C++ and .NET applications. Visual Studio makes use of MSBuild to build its applications.
Learn more about MSBuild here.

A quick comparison of these package managers:
1. Maven. Pros: all dependencies are downloaded automatically. Cons: better suited to Java projects; complex to work with; large learning curve. Availability: Free.
2. Gradle. Pros: we can write the build script ourselves. Cons: poor integration with Eclipse. Availability: Free.
3. MSBuild. Pros: great community support. Cons: mainly for .NET applications only. Availability: Free.
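For a feel of how little day-to-day ceremony these tools need, the usual one-line build invocations look like this (the solution file name is a hypothetical example):

$ mvn clean package        # Maven: compile, run tests, and package a JAR/WAR
$ gradle build             # Gradle: the equivalent lifecycle driven by build.gradle
> msbuild MyApp.sln /t:Build /p:Configuration=Release    # MSBuild, typically on Windows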
f. Continuous Integration
In continuous integration, code is checked into the source code repository whenever a developer finishes a requirement or user story. Continuous integration tools let teams build the software application automatically at a decided time, reducing the time spent on manual builds. Some of the popular tools available are:

1. GitLab CI
GitLab CI is an integrated part of GitLab; GitLab offers it as a continuous integration service.
Availability: A free version is available with limited features.
Learn more about GitLab CI here.

2. Semaphore
Semaphore is the fastest hosted continuous integration and delivery solution, as claimed by its developers.
Availability: Open source projects can use Semaphore for free in its full capacity; free use for private projects is limited to 100 builds per month.
Learn more about Semaphore here.

3. Circle CI
Circle CI's continuous integration and delivery platform makes it easy for teams of all sizes to rapidly build and release quality software at scale. It is built for Linux servers and automates build, test and deployment processes.
Availability: Circle CI has a free version available for a single container.
Click here for more details about Circle CI.

4. Jenkins
Jenkins is an open-source continuous integration tool written in Java. Jenkins is a fork by the core developers of Hudson after a dispute with Oracle. Jenkins is the most widely used CI tool.
Availability: Both free and enterprise versions are available.
Click here for more details about Jenkins.

5. Hudson
Hudson is a continuous integration tool written in Java that runs in a servlet container such as Apache Tomcat or GlassFish.
Click here for more details about Hudson.

6. CruiseControl
CruiseControl is an open source continuous integration tool and extensible framework for facilitating a continuous build process. It is distributed under a BSD-style license.
Learn more about CruiseControl here.

7. Bamboo
Bamboo is a continuous integration (CI) server produced by Atlassian. Bamboo ties automated builds, tests, and releases together in a single workflow.
Availability: The licensed version is available at a starting price of $10.
Learn more about Bamboo here.

8. Team Foundation Build
Team Foundation Build (TFB) is part of the Team Foundation system and provides the functionality of a public build lab. With TFB, build managers can synchronize sources and compile them.
Click here to know more about Team Foundation Build.

9. Gump
Apache Gump is an open-source continuous integration tool, designed with the overarching aim of ensuring that projects are compatible at both the API level and in terms of functionality.
Learn more about Gump here.

10. Travis CI
Travis CI is an open-source distributed continuous integration (CI) service used to build and test projects hosted on GitHub. Open source projects can use Travis CI freely.
Availability: Travis CI is free for the first 100 builds, after which it is priced.
Learn more about Travis CI here.

11. TeamCity
TeamCity is a CI server from JetBrains. It's known for its easy user interface and support for the Microsoft stack.
Availability: A free version of TeamCity is available with a limited feature set.
Click here to know more about TeamCity.

12. Puppet Pipelines
Puppet Pipelines makes software delivery easy and unites silos of automation across Dev and Ops teams. It automates your application builds and deployments.
Availability: The community edition of Puppet Pipelines is available free of cost for up to three users.
Click here for more details about Puppet Pipelines.
A quick comparison of these continuous integration tools:
1. GitLab CI. Pros: easy to configure; source control and continuous integration in one place. Cons: needs GitLab integration. Availability: Freemium.
2. Semaphore. Pros: simple and to the point. Cons: smaller user base and community support. Availability: Freemium.
3. CircleCI. Pros: easy to use. Cons: less known in the community. Availability: Freemium.
4. Jenkins. Pros: uses a plugin model to integrate with several DevOps tools; great community support. Cons: cumbersome Groovy syntax. Availability: Freemium.
5. Hudson. Pros: Jenkins forked from Hudson, so it has all of Jenkins' basic features. Cons: not much development of new features taking place; less community support. Availability: Open source.
6. CruiseControl. Pros: goes well with .NET applications. Cons: difficult setup. Availability: Open source.
7. Bamboo. Pros: many tasks are available built-in rather than as plugins; goes well with Atlassian products like Bitbucket and JIRA. Cons: only a paid option is available. Availability: Paid.
8. Team Foundation Build. Pros: works smoothly with .NET applications; intuitive and easy to install. Cons: interoperability with other stacks is a challenge. Availability: Paid.
9. Gump. Pros: integrates well with Apache tools like Maven. Cons: less plugin support. Availability: Open source.
10. Travis CI. Pros: easy to set up and configure; supports most technology stacks using Node, Ruby, etc. Cons: doesn't support Bitbucket. Availability: Freemium.
11. TeamCity. Pros: great user interface; easy to learn. Cons: community support is good but not great. Availability: Freemium.
12. Puppet Pipelines. Pros: easy setup and installation. Cons: limited plugin availability. Availability: Freemium.
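To make the practice concrete, here is a hedged sketch of a minimal GitLab CI pipeline; the stage names and Maven commands are illustrative assumptions, not a prescription:

$ cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - mvn compile

test-job:
  stage: test
  script:
    - mvn test
EOF

Committing this file to the repository root is enough for GitLab to run both stages on every push.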
g. Top Continuous Delivery and Deployment tools
Continuous deployment tools automate the delivery pipeline of application development, reducing the time wasted in hand-offs between different teams, such as the development and release teams. A few of the most popular deployment tools used by DevOps teams are:

1. Chef
Chef is a tool used to manage and develop infrastructure. It can be used for application deployment as well. It is an open-source tool with an enterprise version available. Chef uses a domain-specific language based on Ruby to define and configure infrastructure. Chef allows high flexibility and is typically preferred by developers. It has a higher learning curve compared to other tools in this space. Chef is known to be the most preferred tool for large-scale, complex enterprise systems.
Availability: Chef is free for up to a limited number of nodes (currently five), after which it is priced.
Learn more about Chef here.

2. Puppet
Puppet is another configuration management tool to define infrastructure as code. Puppet is an enterprise-grade tool. Puppet uses a more declarative language, which makes it easier to work with. It's preferred by operations teams, as it doesn't require programming skills.
Learn more about Puppet here.

3. Octopus Deploy
Octopus Deploy is a release management server used mainly for .NET applications and Windows services. It's a paid deployment-as-a-service offering.
Click here for more about Octopus Deploy.

4. Spinnaker
Spinnaker is a free, open source release platform that increases the number of good-quality releases. The platform helps in deployment across multi-cloud providers like AWS EC2, Google Kubernetes Engine, etc.
Learn more about Spinnaker here.

5. GoCD
GoCD is a free and open source server that helps in continuous delivery. It helps in creating a continuous delivery pipeline for cloud environments like Docker, AWS, etc.
Learn more about GoCD here.

6. UrbanCode Deploy
UrbanCode Deploy, or uDeploy, is a tool from IBM used to automate application deployment. It's licensed and is also available as a hosted service.
Click here to know more about UrbanCode Deploy.

7. XebiaLabs XL Deploy
XL Deploy is a release automation tool for any environment. It is licensed by XebiaLabs.
Click here to know more about XL Deploy.

8. Ansible
Ansible is an open source configuration management and application deployment tool. In comparison with Chef, Ansible uses a decentralised, agentless architecture, which makes it easy to get started with.
Availability: CLI-based Ansible is free, with no limit on nodes.
Learn more about Ansible here.

9. SaltStack
SaltStack is open-source configuration management software written in Python. It enables teams to craft "Infrastructure as Code". SaltStack, in comparison with Ansible, scales quickly but requires teams to learn Python.
Learn more about SaltStack here.

A quick comparison of these delivery and deployment tools:
1. Chef. Pros: great documentation available. Cons: hard to learn; needs programming skills. Availability: Freemium.
2. Puppet. Pros: programming skills are not a must. Cons: not well suited to applications where updates are frequent. Availability: Paid.
3. Octopus Deploy. Pros: easy configuration; integrates smoothly with TeamCity; a quick and flexible deployment pipeline. Cons: less community support, especially for non-Microsoft applications. Availability: Paid.
4. Spinnaker. Pros: greatly preferred for cloud-based deployments. Cons: less community support. Availability: Open source.
5. GoCD. Pros: well suited to end-to-end continuous delivery pipelines where great visualisation is needed. Cons: less cost-efficient; a steep learning curve with a confusing user interface. Availability: Open source.
6. UrbanCode Deploy. Pros: simple and easy to use. Cons: slower deployments. Availability: Paid.
7. XL Deploy. Pros: a large number of plugins available. Cons: less visibility into the deployment process. Availability: Paid.
8. Ansible. Pros: simpler installation; easy to use. Cons: the GUI is not that great; no support for Windows. Availability: Open source.
9. SaltStack. Pros: quickly scalable. Cons: underdeveloped GUI. Availability: Open source.
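As a small taste of the agentless model described above, this is roughly what a first Ansible run looks like; the inventory file, group name, and playbook content are hypothetical, and Debian-family target hosts are assumed for the apt module:

$ cat > site.yml <<'EOF'
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
EOF
$ ansible-playbook -i inventory.ini site.yml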
h. Testing automation
Testing automation tools are used in close proximity to continuous integration and deployment tools. They help perform the repetitive checks that manual testing cannot keep up with. Automated tests give a clearer, unbiased picture of the health of the software product.

1. Unit testing
Unit testing tools help to test a single unit or component of the software, detecting errors earlier so they can be fixed sooner. Unit testing also helps in smooth integration.

2. Integration testing
Integration testing tools help validate every integration that happens in the integration phase. Only successful builds of the code move to the next stage.

3. End-to-end testing
In end-to-end testing, the entire system or application is checked from start to finish. The tools generate reports which can be used to verify whether a new change is causing any unexpected behaviour in the overall system.

4. Performance testing
Performance testing tools analyse the system under an expected workload. The tools measure the responsiveness, scalability, and stability of the system, and provide details on where the system is failing and where it needs improvement.

5. Infrastructure testing and auditing
Infrastructure testing plays a very important part, as an error in infrastructure code can alter even the production environment, creating unforeseen repercussions. Ensuring the organisation's compliance, with security in mind, is an integral part of such tools.

i. Other popular DevOps tools
Some of the popular tools used are:

1. Selenium
Selenium is a free and open source testing framework for web applications. It's a suite of four tools: Selenium WebDriver, Selenium RC (Remote Control), Selenium IDE, and Selenium Grid.
Learn more about Selenium here.

2. Cucumber
Cucumber is an open source testing tool. It's the best choice for behaviour-driven development, popularly known as BDD, as it tests business-readable requirements. Free and enterprise versions of Cucumber are available.
Click here for more details about Cucumber.

3. InSpec
InSpec is a free and open source testing framework from Chef. InSpec tests infrastructure. It's also a compliance framework.
Click here to know more about InSpec.

4. Karma
Karma is a free test runner created for testing applications made with the Angular CLI. It's from the AngularJS team.
Learn more about Karma here.

5. Jasmine
Jasmine is an open source testing framework. It is used mainly for JavaScript applications and also supports behaviour-driven development.
Click here to learn more about Jasmine.

6. UFT
UFT, or Unified Functional Testing, is a test automation tool for web, desktop, and mobile applications from Micro Focus. There is a 60-day free trial, after which it is not a free tool.
Learn more about UFT here.

7. SoapUI
SoapUI is a testing tool for web services and is the market leader in API testing. An open source version is available alongside a licensed Pro edition.
Click here to know more about SoapUI.

8. JMeter
JMeter is an open source load testing tool from Apache. It helps in analysing the performance of services, mainly web applications. It's a free test suite.
Learn more about JMeter here.

A quick comparison of these testing tools:
1. Selenium. Pros: wide range of languages supported; easy integration with Jenkins and Maven. Cons: difficult to use; no official support. Availability: Open source.
2. Cucumber. Pros: great documentation; supports behaviour-driven development. Cons: slow compared to other testing tools. Availability: Open source.
3. InSpec. Pros: highly flexible and can be used across any infrastructure-as-code framework. Cons: you need to know the scripting language. Availability: Open source.
4. Karma. Pros: easy debugging. Cons: smaller user base. Availability: Open source.
5. Jasmine. Pros: easy to set up and use. Cons: difficult to debug. Availability: Open source.
6. UFT. Pros: easily integrated with continuous integration DevOps tools. Cons: less compatibility across operating systems; expensive. Availability: Paid.
7. SoapUI. Pros: user-friendly. Cons: plugin availability is low. Availability: Open source.
8. JMeter. Pros: easy installation; user-friendly interface. Cons: higher learning curve; doesn't support JavaScript. Availability: Open source.
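Many of these tools are designed to be driven from a CI pipeline rather than a GUI. JMeter, for instance, has a non-GUI mode; in this sketch the test plan load_test.jmx is assumed to exist already:

$ jmeter -n -t load_test.jmx -l results.jtl -e -o report/

The -n flag runs headless, -t names the test plan, -l records the raw results, and -e -o generate an HTML report into the given directory.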
9. Docker
Docker is a platform for working with containers, from Docker, Inc. Docker is open source and available in free and enterprise versions. Containers help to develop applications and package them with their dependencies and libraries, ensuring the application runs in any environment. Docker containers are like virtual machines, but they share the same OS resources, such as the file system, and so have less overhead than VMs. The building block of a container is an image: the executable package including the libraries, dependencies, environment variables, etc. needed to run the application. A running instance of an image is a container.
Learn more about Docker here.

10. Kubernetes
Kubernetes is an open source, production-grade container orchestration tool. It helps in managing multiple containers in an application. Kubernetes is the market leader in this category. It is often compared with Docker Swarm, which is the native clustering method for Docker.
Click here to know more about Kubernetes.

11. OpenShift
OpenShift from Red Hat is a group of containerization software. OpenShift Container Platform is the major product in the group, providing a platform as a service built around Docker containers. For individuals who want to experiment with it, a free version is available for one project.
Learn more about OpenShift here.

A quick comparison of these container tools:
1. Docker. Pros: containers are lightweight compared to virtual machines. Cons: security is a concern. Availability: Open source.
2. Kubernetes. Pros: highly scalable; works well with CI/CD pipelines. Cons: less user-friendly. Availability: Open source.
3. OpenShift. Pros: great community support. Cons: only supports Red Hat Enterprise Linux. Availability: Freemium.
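A minimal sketch of what orchestration buys you in practice (the deployment name is illustrative): Kubernetes keeps the declared number of nginx replicas running and replaces any pod that dies:

$ kubectl create deployment web --image=nginx
$ kubectl scale deployment web --replicas=3
$ kubectl get pods     # three pods; deleted or failed ones are rescheduled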
j. Release orchestration
Release orchestration tools are used to automate the application release process. Some of the popular tools are XebiaLabs XL Release, Plutora Release, AWS CodePipeline, CA CD Director, OpenMake, Spinnaker, HashiCorp Vault, SonarQube, BlackDuck, Signal Sciences, and Checkmarx SAST.

k. Containerology
Containerology tools help to run an application in a virtual environment as a package with all its dependencies, avoiding the classic "it doesn't work on my system" situation. Some of these tools, such as Docker, Kubernetes, and OpenShift, were covered above.

l. Monitoring Tools
Monitoring tools help to pinpoint and track issues and verify the health of the system. This enables fast recovery of the system with minimal or no human intervention. Popular tools are:

1. Prometheus
Prometheus is an open source monitoring tool that originated at SoundCloud. It is mainly used with systems running microservices, as it has a multi-dimensional data collection feature. It uses a flexible query language, PromQL. Each Prometheus server is standalone and doesn't depend on network storage, which helps in understanding defects, especially during outages. Data collected through the multi-dimensional collection feature may not be fully detailed or complete, so Prometheus is not suitable for systems where 100% accuracy is required.
Click here for more details about Prometheus.

2. Splunk
Splunk as a monitoring tool is used across application management, security, compliance, web analytics, etc. Splunk listens to and stores data, indexes it, and correlates the captured real-time data in a searchable repository, from which it can generate useful graphs, reports, alerts, and various other visualizations. One can create and configure relevant dashboards based on these visualizations and graphs.
Learn more about Splunk here.

3. Nagios
Nagios is an open source, free tool to monitor services, applications and infrastructure. It's known for its auto-discovery feature. Its user interface is a bit difficult for beginners.
Learn more about Nagios here.

4. Zabbix
Similar to Nagios, Zabbix is an enterprise open source monitoring solution. Compared to Nagios, Zabbix is user-friendly and comparatively easy to configure. Its main disadvantage is that it doesn't support plugins.
Learn more about Zabbix here.

5. Zenoss
Zenoss is a free, open-source tool used for services and network monitoring. It is written in Python.
Click here to know more about Zenoss.

6. ELK Stack
"ELK" is the acronym for three free open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a Java-based search and analytics engine. Logstash is a server-side data processing platform with the ability to clean and transform data and send it to Elasticsearch. Kibana is a visualization tool that helps to visualize Elasticsearch data with charts and graphs. The Elastic Stack is the next evolution of the ELK Stack.
Learn more about ELK Stack here.

A quick comparison of these monitoring tools:
1. Prometheus. Pros: easy to use; easy integration with other DevOps tools. Cons: weak user interface. Availability: Open source.
2. Nagios. Pros: a large number of plugins available in the market. Cons: difficult configuration is needed to make the system stable. Availability: Open source.
3. Zabbix. Pros: easy configuration through a web-based user interface. Cons: non-availability of plugins. Availability: Open source.
4. Zenoss. Pros: great community support. Cons: a limit on the number of devices monitored. Availability: Open source.
5. ELK Stack. Pros: easy to install; highly customisable. Cons: difficult to configure. Availability: Open source.
6. Splunk. Pros: easy to install; user-friendly; easy to configure simple graphs. Cons: for complex configurations, the learning curve is a bit steep. Availability: Paid.
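As a minimal sketch of how Prometheus is pointed at a target, assuming a node_exporter is already serving metrics on localhost:9100:

$ cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
EOF
$ ./prometheus --config.file=prometheus.yml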
m. Analytics
Analytics tools give a clear picture of what is happening in the team, be it code development, team interaction, code coverage, or efficiency. Some tools used are XebiaLabs XL Impact, New Relic, Dynatrace, Datadog, AppDynamics, and Elasticsearch.

Bonus Information

3. Why is DevOps needed?
DevOps helps to remove silos in organisations and enables the creation of cross-functional teams, reducing reliance on any one person or team during the delivery process. Frequent communication between teams improves the confidence and efficiency of team members. Through automation, DevOps teams increase their productivity, making for satisfied customers. According to the State of DevOps report 2016, "Teams that practice DevOps deploy 30x more frequently, have 60x fewer failures, and recover 160x faster". DevOps also provides better work environments, with increased trust and better management of issues, reducing unplanned work.

4. How to implement DevOps?
The "DevOps Handbook" defines the "Three Ways: the principles underpinning DevOps" as a way to implement DevOps in large enterprises. In this section, we will detail these three ways and three core pillars.

The First Way: Systems thinking. The First Way emphasizes the need for global optimisation as opposed to local optimisation; hence the focus is on optimising all business value streams enabled by IT.

The Second Way: Amplify feedback loops. The Second Way is about discovering and injecting the right feedback loops so that necessary corrections can be made before it's too late.

The Third Way: A culture of experimentation and learning. The Third Way is all about creating the right culture, one that fosters two things: continual experimentation, and learning from failures. It emphasises the understanding that repetition and practice make teams perfect.

While the Three Ways focus on the key principles, we also have three pillars which are key to any successful DevOps adoption. The three pillars of any DevOps adoption are:
Culture and People
Tools and Technology
Processes and Practices

4.1 Important DevOps practices

a. Continuous Integration
Continuous integration is a software engineering practice where software development team members frequently merge and build their code changes. The key benefit is detecting and fixing merge conflicts and integration bugs in the early stages of software development, which reduces the cost of finding and fixing issues.

b. Continuous Delivery
Continuous delivery is a software engineering practice in which changes are automatically built, tested, and made release-ready for production. To get into a continuous delivery state, it is crucial to define a test strategy. The main goal is to identify functional and non-functional defects at a much earlier stage, thus reducing the cost of fixing defects. It also enables teams to come up with working software, as defined in the agile manifesto. Continuous delivery as a practice depends on continuous integration and test automation, so teams need to practice continuous integration along with test automation religiously in order to practice continuous delivery effectively.

c. Continuous Deployment
Continuous deployment is a software engineering practice in which code committed by developers is automatically built, tested and deployed to production. Continuous deployment as a practice requires that teams have already adopted continuous integration and continuous delivery. The primary advantages of this practice are reduced time-to-market and early feedback from users.

d. Continuous Testing
Continuous testing is a software testing practice that involves testing early, testing often, and test automation. The primary goal of continuous testing is to shift the test phase as far left as possible, to identify defects early and reduce the cost of fixing them.

e. Microservices
Microservices architecture helps to create an application as a set of small services independent of each other. Any language can be used to create microservices, and typically an HTTP-based API is used for services to interact. Microservices as a design approach helps to achieve lower-risk deployments and enables continuous delivery.

f. Infrastructure as code
Infrastructure as code is an engineering practice in which infrastructure is developed and managed through code, creating consistent, reproducible and versioned infrastructure. Since the infrastructure is implemented as code, it is easy for team members to update and change it. With infrastructure as code, scaling is no longer a major problem.

g. Policy as code
Policy as code is a software engineering practice through which the compliance rules or policies of the organisation can be monitored and verified. Policy as code enables organizations to enforce compliance rules more strictly and helps bring non-compliant resources into compliance. This practice gained importance during the DevSecOps movement. (A short sketch follows this list.)

h. Continuous Monitoring and Logging
Monitoring and logging as best practices help organizations analyse the product's end-user experience. They help software teams get to the root cause of defects and latencies in the software development process. More transparency into the actions performed by team members creates increased responsibility among teams, and with it increased performance.

i. Communication and collaboration
Effective communication and collaboration are among the key values emphasised by DevOps. DevOps tools in the field of communication and collaboration bring a collective sense of responsibility for the products delivered.
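As a hedged sketch of policy as code in practice (see practice g above), a compliance profile can be executed against a remote machine with InSpec; the profile path, host, and user here are hypothetical:

$ inspec exec ./linux-baseline -t ssh://admin@web01 --sudo

The run reports every control that passes or fails, so the same profile can be used to gate a delivery pipeline.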
5. How to choose the right DevOps tools?
Today the DevOps market is overcrowded with tools across the different stages of the software development life cycle. For enterprises, it is extremely crucial to select the right tools in order to get the maximum benefit. That said, choosing the right tool is an extremely difficult and time-consuming process, given the spectrum of tools available today. Enterprises should therefore have a five-point strategy for deciding on the right tools. The five-point strategy includes dimensions like:
Ability to integrate
Scalability
Security
Technical know-how
Reliability

5.1. Ability to integrate
The ability to integrate is extremely crucial and one of the fundamental requirements when evaluating tools. Certain tools integrate smoothly with a particular technology stack in comparison with others. Hence it is vital for the DevOps architect to compare different tools on the basis of integration ability and ensure that the selected tool seamlessly integrates with the team's technology stack. Another aspect that needs to be considered is how a particular tool integrates with the other tools selected in the ecosystem. For example, you would want your continuous integration system to constantly talk to the reporting system and the alert prediction system in a smooth way. Hence integration between tools also becomes a very important factor when choosing tools.

5.2. Scalability
Scalability is the second most important factor in choosing the right tools. Based on the need for scalability, an enterprise might choose an enterprise version over a community version. Scalability is also a key reason why certain companies go for SaaS-based products: they are easily scalable and, without any overhead, can be adopted across large enterprises.

5.3. Security
These days a lot of enterprises emphasise the need for security in the DevOps tooling space, and the enterprise versions from various tooling companies take special care to address security-related issues. Thus enterprise versions are comparatively more preferred than open source solutions. That said, this doesn't mean that all open source DevOps tools have security vulnerabilities; certain open source DevOps tools fare much better than the available enterprise versions.

5.4. Technical know-how
This people dimension is one of the factors typically overlooked by enterprises. Knowing the skill levels and capability of team members is key to choosing the right tool. Often the tools available in the market won't work out of the box and need a substantial level of customisation to integrate smoothly with existing systems and workflows. Also, certain tools require a specific skill set for configuration and customisation. A typical example is Chef, which is chosen by developers who are comfortable with the Ruby language, whereas Puppet is preferred by system admins as it does not require much programming skill.

5.5. Reliability
Last but not least, reliability is extremely crucial for any successful tool adoption. Most of the tools available in the market, both enterprise and open source, need to be checked against this quality wheel. Tools should stay reliable even under large-scale, complex operational conditions.

Conclusion
In this paper, we discussed the what, why, and how of DevOps. We also took a deep dive into the various tool categories and the tools available across the spectrum in today's DevOps market. Tools are definitely key ingredients in successful DevOps adoption, but a lot of companies invest only in the tool part without focusing on the cultural and people dimensions. For tools to bear fruit, it is vital that the people operating and analysing the tools and data understand and realise the true spirit of DevOps. To conclude, we would like to resonate with the wise words: "Yes, we need all the tools that can help us, but just tools will not help us get there!"
Kubernetes vs Docker

What is Kubernetes?
Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open source cluster management system initially developed by three Google employees during the summer of 2014; it grew exponentially and became the first project to be donated to the Cloud Native Computing Foundation (CNCF). It is basically an open source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes you can manage your containerized applications more efficiently.

Kubernetes is a HUGE project with a lot of code and functionality. The primary responsibility of Kubernetes is container orchestration: making sure that all the containers that execute various workloads are scheduled to run on physical or virtual machines. The containers must be packed efficiently, following the constraints of the deployment environment and the cluster configuration. In addition, Kubernetes must keep an eye on all running containers and replace dead, unresponsive, or otherwise unhealthy containers.

Kubernetes uses Docker to run images and manage containers. Nevertheless, K8s can use other engines, for example rkt from CoreOS. The platform itself can be deployed within almost any infrastructure: in the local network, a server cluster, a data center, or any kind of cloud, whether public (Google Cloud, Microsoft Azure, AWS, etc.), private, hybrid, or even a combination of these methods. It is noteworthy that Kubernetes supports the automatic placement and replication of containers over a large number of hosts. It brings a number of features, and it can be thought of as:
A container platform
A microservices platform
A portable cloud platform, and a lot more.

Kubernetes covers most of the operational needs of application containers. The top 10 reasons why Kubernetes is so popular are as follows:
Largest open source project in the world
Great community support
Robust container deployment
Effective persistent storage
Multi-cloud support (hybrid cloud)
Container health monitoring
Compute resource management
Auto-scaling feature support
Real-world use cases available
High availability by cluster federation

Below is the list of features which Kubernetes provides:
Service discovery and load balancing: Kubernetes assigns containers their own IP addresses and a unique DNS name, which can be used to balance the load across them.
Planning & placement: placement of containers on nodes is a crucial feature; the scheduler makes its decisions based on the resources a container requires and other restrictions.
Auto-scaling: based on CPU usage, horizontal scaling of applications can be triggered automatically from the command line (see the sketch after this list).
Self-repair: a unique Kubernetes feature that restarts containers automatically when they fail. If a node dies, its containers are replaced or re-planned onto the other nodes. You can stop containers if they don't respond to health checks.
Storage orchestration: this feature of Kubernetes enables the user to mount a network storage system as a local file system.
Batch execution: Kubernetes manages both batch and CI workloads, and replaces containers that fail.
Deployments and automatic rollbacks: during configuration changes for an application hosted on Kubernetes, it progressively monitors health to ensure that it does not terminate all instances at once, and it makes an automatic rollback in case of failure.
Configuration management and Secrets: classified information such as keys and passwords is stored in a module called Secrets in Kubernetes. Secrets are used especially while configuring the application, without having to reconstruct the image.
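Picking up the auto-scaling feature flagged above, a minimal sketch (the deployment name is illustrative) is a single command that keeps the replica count between set bounds based on CPU usage:

$ kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80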
What is Docker?
Docker is a lightweight containerization technology that has gained widespread popularity in the cloud and application packaging world. It is an open source framework that automates the deployment of applications in lightweight, portable containers. It uses a number of the Linux kernel's features, such as namespaces, cgroups, and AppArmor profiles, to sandbox processes into configurable virtual environments. Though the concept of container virtualization isn't new, it has been getting attention lately, with bigwigs like Red Hat, Microsoft, VMware, SaltStack, IBM, and HP throwing their weight behind newcomer Docker. Start-ups are betting their fortunes on Docker as well: CoreOS, Drone.io, and Shippable are some of the start-ups modeled to provide services based upon Docker. Red Hat has already included it as a primary supported container format for Red Hat Enterprise Linux 7.

Why is Docker popular?
The major factors driving Docker's popularity are its speed, its ease of use, and the fact that it is largely free. In performance, it is even said to be comparable with KVM. A container-based approach, in which applications can run in isolation and without relying on a separate operating system, can save huge amounts of hardware resources. Industry experts have started looking at it as hardware multi-tenancy for applications. Instead of having hundreds of VMs running per server, what if it were possible to have thousands of hardware-isolated applications?

Docker is used to run software packages called "containers". A container is a standardized unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Containers are the "fastest growing cloud-enabling technology" because they speed the delivery of software and cut the cost of operating it. Writing software is faster. Deploying it is easier, in your data center or your preferred cloud. And running it requires less hardware and support. Although container technology has existed for decades, Docker makes it work for the enterprise, with the core features enterprises require in a container platform and best-practice services to ensure success. And containers work for both legacy applications and new development. Existing, mission-critical applications can be "containerized", often with little or no change. The result is instant savings in infrastructure, better security, and reduced labor. And new development happens faster because engineers only target a single platform instead of a variety of servers and clouds. Less code to write. Less testing. Faster delivery.

Introduction to Docker Swarm
Docker Swarm is the native clustering and scheduling tool for Docker. It allows IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. It is written in Go and was released for the first time in November 2015 by Docker, Inc.

The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. SwarmKit is a separate project which implements Docker's orchestration layer and is used directly within Docker. It is a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, Raft-based consensus, task scheduling and more. Its main benefits are:
Distributed: SwarmKit uses the Raft consensus algorithm in order to coordinate, and does not rely on a single point of failure to make decisions.
Secure: node communication and membership within a swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization, and transport encryption, automating both certificate issuance and rotation.
Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate.

Current versions of Docker include swarm mode for natively managing a cluster of Docker Engines, called a swarm. One can use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior. All you need is to initiate it to use the latest features that come with the Docker Engine.

Docker Swarm Mode Architecture
Every node in swarm mode has a role, which can be categorized as manager or worker. A manager node is responsible for actually orchestrating the cluster: performing health checks, running the containers that serve the API, and so on. A worker node just executes the tasks, which are actually containers. It cannot decide to schedule containers on a different machine, and it cannot change the desired state; workers only take work and report back their status. You can promote or demote a node easily with a one-liner command.

Managers and workers use two different communication models. Managers have a built-in Raft system that allows them to share information for new-leader election. At any one time only one manager actually performs the scaling, and the managers use a leader-follower model to figure out which one is supposed to be what. No external key-value store is required, as a built-in, internal distributed state store is available. Workers, on the other side, use the gossip network protocol, which is quite fast and consistent. Whenever a new container/task is created in the cluster, gossip broadcasts it to all the other containers in the specific overlay network in which the new container has started. Remember that ONLY the containers running in that specific overlay network are informed, not everything globally. Gossip is optimized for heavy traffic.

How Docker Swarm varies from Docker
Today the Docker platform supports three variants of Swarm:
Docker Swarm (Classic)
SwarmKit (the foundation for Docker swarm mode)
Docker swarm mode
Let us go through each one of them, one by one.

Docker Swarm 1.0 was introduced for the first time in the Docker Engine 1.9 release in November 2015. It was a separate GitHub repository and a separate piece of software that needed to be installed to turn a pool of Docker Engines into a single, virtual engine. It was announced as the easiest way to run Docker applications at scale on a cluster. You don't have to worry about where to put containers, or how they talk to each other; it just handles all that for you.
In 2016, at DockerCon, Docker Inc. announced Docker swarm mode for the first time. Swarm mode came integrated directly into the Docker Engine, which means you don't need to install it separately; all you need is to initiate it using the `docker swarm init` command. With the optional "swarm mode" feature rightly integrated into the core Docker Engine, native management of a cluster of Docker Engines (orchestration, decentralized design, service and application deployment, scaling, desired-state reconciliation, multi-host networking, service discovery, and routing-mesh implementation) is just a matter of a few one-liner commands.

That said, Docker swarm mode is fundamentally different from Classic Swarm. The basic differences are listed below:
Integration: swarm mode comes integrated into the Docker Engine; Docker Classic Swarm is a GitHub repository and comes as a separate project, not integrated into the Docker Engine.
Service discovery: swarm mode has it inbuilt; Classic Swarm needs an external KV store based on Consul, etcd, etc.
Features: swarm mode ships with inbuilt scaling, rolling updates, service discovery, load balancing, routing mesh, and topological placement; Classic Swarm lacks inbuilt load balancing, scalability, routing mesh, etc.
Security: swarm mode has a secured control and data plane; in Classic Swarm the control plane and data plane are insecure.

Let's talk about SwarmKit a bit. SwarmKit is a plumbing open source project: a toolkit for orchestrating distributed systems at any scale, whose primitives and main benefits (distributed Raft-based coordination, security out of the box, and operational simplicity) were outlined above. SwarmKit is completely built in Go and leverages a standard project structure to work well with Go tooling. If you want to learn more about SwarmKit, head over to https://github.com/docker/swarmkit/

How can Docker be used with Kubernetes?
From 30,000 feet, Docker and Kubernetes might appear to be similar technologies: both are open platforms which allow you to run applications within Linux containers. But as you take a deeper dive, you'll find that the technologies operate at different layers of the stack, and can even be used together.

Let's talk about Docker first. Docker provides the ability to package and run an application in a loosely isolated environment called a container. At their core, containers are a way of packaging software. The unique feature of a container is that when you run it, you know exactly how it will run: it's very predictable, repeatable and immutable. You are left with no unexpected errors when you move it to a new machine, or between environments. All of your application's code, libraries, and dependencies are packed together in the container as an immutable artifact. You can think of running a container like running a virtual machine, without the overhead of spinning up an entire operating system. The Docker CLI provides the mechanism for managing the life cycle of the containers.
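A minimal sketch of that life cycle, with an illustrative container name:

$ docker create --name app nginx    # create the container without starting it
$ docker start app
$ docker stop app
$ docker restart app
$ docker rm app                     # remove it once stopped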
While the Docker image defines the build-time framework of the runtime container, CLI commands like these start, stop, restart, and perform other lifecycle operations on containers. Today, containers can be orchestrated and made to run on multiple hosts. The questions that need answering are: how are these containers coordinated and scheduled? And how will the applications running in these containers communicate with each other? The answer is Kubernetes.

Today, Kubernetes mostly uses Docker to package, instantiate, and run containerized applications. That said, various other container runtimes are available, but Docker is the most popular runtime binary used by Kubernetes. Both Kubernetes and Docker build a comprehensive standard for managing containerized applications intelligently, along with providing powerful capabilities.

Docker provides a platform for building, running and distributing Docker containers. Docker brings up its own clustering tool, which can be used for orchestration. But Kubernetes is an orchestration platform for Docker containers which is more extensive than the Docker clustering tool and has the capacity to scale to production level. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It is a plug-and-play architecture for container orchestration which provides features like high availability among distributed nodes.

For example, today it is possible to run Kubernetes under the Docker EE 2.0 platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, the orchestrators used, and where it's deployed. It also enables organizations to operationalize Kubernetes more rapidly, with streamlined workflows, and helps you deliver safer applications through integrated security solutions.

Difference between Kubernetes and Docker

i) Kubernetes vs Docker: set-up and installation
Kubernetes: it requires a series of manual steps to set up the Kubernetes master and worker node components in a cluster of nodes. Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of bare-metal servers. For setting up a single-node K8s cluster, one can use Minikube. Kubernetes support for Windows Server is in the beta phase. Kubernetes client and server packages need to be upgraded manually on all systems.
Docker: installing Docker is a matter of a one-liner command on Linux platforms like Debian, Ubuntu, and CentOS. To install a single-node Docker Swarm or Kubernetes cluster, one can deploy Docker for Mac or Docker for Windows. Docker has official support for Windows 10 and Windows Server 2016 and 1709. It's easy to upgrade Docker Engine under Docker for Mac & Windows with just one click.
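The single-node route mentioned above is short enough to sketch in full, assuming Minikube and kubectl are installed locally:

$ minikube start
$ kubectl get nodes     # the single Minikube node should report Ready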
Working in the two systems
Kubernetes: Kubernetes operates at the application level rather than at the hardware level. It aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes. Kubernetes can run on top of Docker, but requires you to know the command line interface (CLI) specifications for both to access your data over the API. There is a Kubernetes client called kubectl, which talks to the kube API running on your master node. Unlike the master components, which usually run on a single node (unless a high-availability setup is explicitly stated), the node components run on every node:
kubelet: the agent running on the node to inspect container health and report to the master, as well as listen for new commands from the kube-apiserver.
kube-proxy: maintains the network rules.
container runtime: the software for running the containers (e.g. Docker, rkt, runc).
Docker: the Docker platform is available in the form of two editions, Docker Community Edition and Docker Enterprise Edition. Docker Community Edition comes with community-based support forums, whereas Docker Enterprise Edition offers enterprise-class support with defined SLAs and private support channels. Both Community and Enterprise Editions come with swarm mode by default; additionally, Kubernetes is supported under Docker Enterprise Edition. For swarm mode, one can use a Docker Compose file and the `docker stack deploy` CLI to deploy an application across the cluster nodes. The `docker stack` CLI deploys a new stack or updates an existing one. The client and daemon API must both be at least 1.25 to use this command; one can use the `docker version` command on the client to check the client and daemon API versions.

Logging and monitoring
Kubernetes:
Logging: Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. A few popular logging tools are listed below:
Fluentd: an open source data collector for a unified logging layer. It's written in Ruby with a plug-in-oriented architecture and helps to collect, route and store different logs from different sources. While Fluentd is optimized to be easily extended using its plugin architecture, Fluent Bit is designed for performance: it's compact and written in C, so it can be enabled on minimalistic IoT devices while remaining fast enough to transfer a huge quantity of logs. Moreover, it has built-in Kubernetes support; it's an especially compact tool designed to transport logs from all nodes.
Other tools, like Stackdriver Logging provided by GCP, Logz.io, and other third-party drivers, are available too.
Monitoring: there are various open source tools available for Kubernetes application monitoring:
Heapster: installed as a pod inside Kubernetes, it gathers data and events from the containers and pods within the cluster.
Prometheus: an open source Cloud Native Computing Foundation (CNCF) project that offers powerful querying capabilities, visualization and alerting.
Grafana: used in conjunction with Heapster for visualizing data within your Kubernetes environment.
InfluxDB: a highly available database platform that stores the data captured by all the Heapster pods.
cAdvisor: focuses on container-level performance and resource usage. It comes embedded directly into the kubelet and automatically discovers active containers.
Logging and Monitoring

Kubernetes logging: Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. A few popular logging tools are listed below:

- Fluentd: an open-source data collector for a unified logging layer, written in Ruby with a plugin-oriented architecture. It helps collect, route, and store different logs from different sources.
- fluent-bit: while Fluentd is optimized to be easily extended via its plugin architecture, fluent-bit is designed for performance. It is compact and written in C, so it can run on minimalistic IoT devices while remaining fast enough to transfer a huge quantity of logs. It also has built-in Kubernetes support and is an especially compact tool designed to transport logs from all nodes.
- Other tools, such as Stackdriver Logging on GCP, Logz.io, and other third-party drivers, are available too.

Kubernetes monitoring: various open-source tools are available for monitoring Kubernetes applications:

- Heapster: installed as a pod inside Kubernetes, it gathers data and events from the containers and pods within the cluster.
- Prometheus: an open-source Cloud Native Computing Foundation (CNCF) project that offers powerful querying capabilities, visualization, and alerting.
- Grafana: used in conjunction with Heapster for visualizing data within your Kubernetes environment.
- InfluxDB: a highly available database platform that stores the data captured by all the Heapster pods.
- cAdvisor: focuses on container-level performance and resource usage. It comes embedded directly into the kubelet and automatically discovers active containers.

Docker logging: Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers, and Docker's logging capabilities are exposed in the form of these drivers, which is very handy since one gets to choose how and where log messages should be shipped. Logging driver plugins are available in Docker 17.05 and higher, so in addition to the logging drivers included with Docker, you can also implement and use logging driver plugins of your own. Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different one. To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, located in /etc/docker/ on Linux hosts. The following example explicitly sets the default logging driver to syslog:

{
  "log-driver": "syslog"
}

When you start a container, you can configure it to use a different logging driver than the Docker daemon's default, using the --log-driver flag. If the logging driver has configurable options, you can set them using one or more instances of the --log-opt <NAME>=<VALUE> flag. Even if the container uses the default logging driver, it can use different configurable options.

Size

Kubernetes: as per the official Kubernetes documentation, K8s v1.12 supports clusters with up to 5000 nodes, based on the criteria below:
- No more than 5000 nodes
- No more than 150000 total pods
- No more than 300000 total containers
- No more than 100 pods per node

Docker: according to Docker's blog post on scaling Swarm clusters, published in November 2015, Docker Swarm has been scaled and performance-tested up to 30,000 containers and 1,000 nodes.

Specs:
- Discovery backend: Consul
- 1,000 nodes
- 30 containers per node
- Manager: AWS m4.xlarge (4 CPUs, 16 GB RAM)
- Nodes: AWS t2.micro (1 CPU, 1 GB RAM)
- Container image: Ubuntu 14.04

Results:
Percentile   API Response Time   Scheduling Delay
50th         150ms               230ms
90th         200ms               250ms
99th         360ms               400ms

ii) Building and Deploying Containers with Docker

Docker can build images automatically by reading instructions from a text file called a Dockerfile. A Dockerfile is a simple text file that follows a specific format and instruction set, containing all the commands, in order, needed to build a given image. A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer. For example, consider this Dockerfile:

FROM nginx:latest
COPY wrapper.sh /
COPY html /usr/share/nginx/html
CMD ["./wrapper.sh"]

Each instruction creates one layer:
- FROM creates a layer from the nginx:latest Docker image.
- COPY adds files from your Docker client's current directory.
- CMD specifies what command to run within the container.

When you run an image and generate a container, you add a new writable layer (the "container layer") on top of the underlying layers. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.

Building a Docker image:

$ docker build -t hellowhale .

The `docker build` command shown above builds an image from a Dockerfile and a context. The build context is the set of files at a specified location, either a PATH or a URL. The PATH is a directory on your local filesystem; the URL is a Git repository location.
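As a sketch, both forms of build context look like this. The repository URL is hypothetical, included only to illustrate the syntax:

# Build using the current directory as the context
$ docker build -t hellowhale .

# Build directly from a Git repository (hypothetical URL)
$ docker build -t hellowhale https://github.com/userid/hellowhale.git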
Running the Docker container

A running Docker image is called a Docker container, and all you need is to run the command below to expose port 80 of the container on the host machine and get it up and running:

$ docker run -d -p 80:80 --name hellowhale hellowhale

Tagging the image:

$ docker tag hellowhale userid/hellowhale

Pushing the Docker image to Docker Hub: before you push the image, you need to log in to Docker Hub first:

$ docker login
$ docker push userid/hellowhale

iii) Managing containers with Kubernetes

The Docker CLI on a standalone system is used to build, ship, and run your Docker containers. But if you want to run multiple containers across multiple machines, you need a robust orchestration tool, and Kubernetes is the most popular on the list. Kubernetes is an open-source container orchestration platform that allows large numbers of containers to work together in harmony, reducing operational burden. It helps with things like running containers across many different machines, scaling up or down by adding or removing containers when demand changes, keeping storage consistent across multiple instances of an application, distributing load between containers, and launching new containers on different machines if something fails.

Below is a comparison of the CLI commands used by Docker and Kubernetes to manage containers:

Running a container. Docker:

$ docker run -d --restart=always --name nginx-app -p 80:80 nginx

Kubernetes (to run an nginx Deployment and expose it, see kubectl run):

$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"

Listing what is currently running. Docker:

$ docker ps -a

Kubernetes:

$ kubectl get po -a

Executing a command in a container. Docker:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
55c103fa1296        nginx               "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp   nginx-app
$ docker exec 55c103fa1296 cat /etc/hostname

Kubernetes:

$ kubectl get po
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-5jyvm   1/1       Running   0          10m
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm

iv) Trends in Docker and Kubernetes

Docker, Inc. has around 550+ enterprise customers who use Docker in a production environment. A non-exhaustive list of organizations actively using Docker includes The New York Times, PayPal, Business Insider, Cornell University (not a company, admittedly), Splunk, The Washington Post, Swisscom, Alm Brand, Assa Abloy, Expedia, Jabil, MetLife, Societe Generale, GE, Groupon, Yandex, Uber, eBay, Shopify, Spotify, New Relic, and Yelp. Recently, the Forrester New Wave™: Enterprise Container Platform Software Suites, Q4 2018 report stated that Docker leads the pack with a robust container platform well-suited for the enterprise, offering a secure container supply chain from the developer's desktop to production.

Lots of organizations are already using Kubernetes in production, like the ones listed on the Kubernetes case studies page, including eBay, Buffer, Pearson, Box, and Wikimedia. But that is not a complete list; Kubernetes is even more versatile than the official case studies page suggests.
Below is a list of companies using it: [Image: List of Kubernetes users]

Microservices Usage

Microservices help developers break up monolithic applications into smaller components. They can move away from all-at-once massive package deployments and break apps up into smaller, individual units that can be deployed separately. Smaller microservices give apps more scalability and more resiliency, and, most importantly, they can be updated, changed, and redeployed faster. Some of the biggest public cloud applications already run as microservices.

Containers are a packaging strategy for microservices. Think of them more as process containers than virtual machines: they run as processes inside a shared operating system, and a container typically does only one small job, such as validating a login or returning a search result. Docker is a tool that describes those packages in a common format and helps launch and run them. Linux containers have been around for a while, but their popularity in the public cloud has given rise to an exciting new ecosystem of companies building tools to make containers easier to use, cluster, and orchestrate, to run them in more places, and to manage their life cycles. Over the last two years, many different types of software vendors, from operating system to IT infrastructure companies, have joined the container ecosystem. There is already an industry organization, the Open Container Initiative, guiding the market and making sure everyone plays well together. IBM, HP, Microsoft, VMware, Google, Red Hat, CoreOS: these are just some of the major vendors racing to make containers as easy as possible for developers to use, share, protect, and scale.

The rising demand for multi-cloud environments

With an estimated 85% of today's enterprise IT organizations employing a multi-cloud strategy, it has become critical that customers have a "single pane of glass" for managing their entire application portfolio. Most enterprise organizations have a hybrid and multi-cloud strategy. Containers have helped make applications portable, but let us accept the fact that even though containers are portable today, managing them is still a nightmare. The reasons:

- Each cloud is managed under a separate operational model, duplicating effort
- Security and access policies differ across each platform
- Content is hard to distribute and track
- Infrastructure utilization remains poor
- The emergence of cloud-hosted K8s is exacerbating the challenge of managing containerized applications across multiple clouds

To address this, Docker introduced new application management capabilities for Docker Enterprise Edition that allow organizations to federate applications across Docker Enterprise Edition environments deployed on-premises and in the cloud, as well as across cloud-hosted Kubernetes, including Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE). The federated application management feature will automate the management and security of container applications on-premises and across Kubernetes-based cloud services, and it will provide enterprises a single management platform from which to centrally control and secure the software supply chain for all their containerized applications. With this announcement, Docker Enterprise Edition is undoubtedly the only enterprise-ready container platform that can deliver federated application management with a secure supply chain.
Not only does Docker give you your choice of Linux distribution or Windows Server, and the choice of running in a virtual machine or on bare metal, running traditional or microservices applications with either Swarm or Kubernetes orchestration; it also gives you the flexibility to choose the right cloud for your needs.

On the Kubernetes side, version 1.3 of the container management platform introduced cross-cluster federated services, with the ability to span workloads across clusters and, by extension, across multiple clouds. This opens up the possibility of workloads that draw resources from multiple clouds, and it also means that large jobs can be split among clouds. On top of this, the release introduced the ability to automatically scale services to match demand.

Increasing support for Docker and Kubernetes

Kubernetes has been enjoying widespread adoption among startups, platform vendors, and enterprises. Companies like Amazon, Google, IBM, Red Hat, and Microsoft offer managed Kubernetes under the Containers-as-a-Service (CaaS) model. The open-source ecosystem has dozens of players building tools covering the logging, monitoring, automation, storage, and networking aspects of Kubernetes, and system integrators have dedicated practices and offerings based on it. Global players like Uber, Bloomberg, Blackrock, BlaBlaCar, The New York Times, Lyft, eBay, Buffer, Squarespace, Ancestry, GolfNow, Goldman Sachs, and many others are using Kubernetes in production at massive scale. According to Redmonk, a developer-focused research company, 71 percent of the Fortune 100 use containers, and more than 50 percent of Fortune 100 companies use Kubernetes as their container orchestration platform.

Did you know there are 35 certified Kubernetes distributions, 22 certified Kubernetes hosting platforms, and 50 certified Kubernetes service providers available? Over the last three years, Kubernetes has been adopted by a vibrant, diverse community of providers. The Cloud Native Computing Foundation® (CNCF®), which sustains and integrates open-source technologies like Kubernetes®, announced the availability of the Certified Kubernetes Conformance Program, which ensures that Certified Kubernetes™ products deliver consistency and portability, and that 35 Certified Kubernetes distributions and platforms are now available. A Certified Kubernetes product guarantees that the complete Kubernetes API functions as specified, so users can rely on a seamless, stable experience.

On the other hand, Docker Enterprise Edition (EE) 2.0 represents a significant leap forward in container platform solutions, delivering the only solution that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. The most promising features announced with this release include Kubernetes integration as an optional orchestration solution running side by side with Docker Swarm, Swarm Layer 7 routing improvements, registry image mirroring, Kubernetes integration with Docker Trusted Registry, and Kubernetes integration with Docker EE access controls.
With this new release, organizations are able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining a consistent developer-to-IT workflow. Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested, and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers, and it is tightly integrated into the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.

V) Kubernetes vs Docker Swarm

The two orchestrators are usually compared along these dimensions:
- Installation and cluster configuration
- GUI
- Scalability
- Auto-scaling
- Load balancing
- Rolling updates and rollbacks
- Data volumes
- Logging and monitoring

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was built by Google based on their experience running containers in production using an internal cluster management system called Borg (sometimes referred to as Omega). On the other hand, a Swarm cluster consists of Docker Engine deployed on multiple nodes: manager nodes perform orchestration and cluster management, while worker nodes receive and execute tasks. Below are the major differences between Docker Swarm and Kubernetes:

Application deployment. In a Swarm cluster, applications are deployed in the form of services (or "microservices"), and Docker Compose is the tool most commonly used to deploy an app. Under Kubernetes, applications are deployed as a combination of pods, deployments, and services (or "microservices").

Auto-scaling. An autoscaling feature is not available in either classic Docker Swarm or Docker Swarm Mode. Under K8s, auto-scaling is available: it uses a simple number-of-pods target, defined declaratively using deployments, and a CPU-utilization-per-pod target is also available (a CLI sketch follows below).

Rolling updates. Docker Swarm supports rolling updates: at rollout time, you can apply rolling updates to services, and the Swarm manager lets you control the delay between service deployment to different sets of nodes, updating only one task at a time. Under Kubernetes, the deployment controller supports both "rolling-update" and "recreate" strategies, and rolling updates can specify a maximum number of pods unavailable or a maximum number running during the process.
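As a minimal sketch of the Kubernetes side of auto-scaling and rolling updates, the commands below assume an existing Deployment named nginx-app (like the one created earlier with kubectl run), a container inside it named nginx-app, a metrics source for CPU utilization, and an nginx:1.25 image tag; all of these are assumptions to adapt to your cluster.

# Autoscale between 2 and 10 pods, targeting 80% CPU utilization per pod
$ kubectl autoscale deployment nginx-app --min=2 --max=10 --cpu-percent=80

# Trigger a rolling update by changing the container image, then watch and, if needed, undo it
$ kubectl set image deployment/nginx-app nginx-app=nginx:1.25
$ kubectl rollout status deployment/nginx-app
$ kubectl rollout undo deployment/nginx-app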
Networking. Under Docker Swarm Mode, a node joining the Swarm cluster creates an overlay network for services that spans all of the hosts in the Swarm, plus a host-only Docker bridge network for containers. By default, nodes in the Swarm cluster encrypt overlay control and management traffic between themselves; users can choose to encrypt container data traffic as well when creating an overlay network. Under K8s, the networking model is a flat network enabling all pods to communicate with one another, with network policies specifying how pods may communicate. The flat network is typically implemented as an overlay.

Health checks. Docker Swarm health checks are limited to services: if a container backing the service does not come up (reach the running state), a new container is kicked off. Users can also embed health-check functionality into their Docker images using the HEALTHCHECK instruction. Under K8s, health checks are of two kinds: liveness (is the app responsive?) and readiness (is the app responsive but still busy preparing, and not yet able to serve?). A CLI sketch of both approaches follows at the end of this comparison.

Logging. Out of the box, K8s provides a basic logging mechanism to pull aggregate logs for the set of containers that make up a pod.
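To close out the comparison, here is a minimal sketch of the two health-check approaches described above. The health command, container name, and endpoint are assumptions; docker run's --health-* flags mirror what a HEALTHCHECK instruction would bake into the image.

# Docker: attach a health check at run time (equivalent to a HEALTHCHECK instruction in the image)
$ docker run -d --name hellowhale-hc \
    --health-cmd "curl -f http://localhost/ || exit 1" \
    --health-interval 30s --health-retries 3 \
    -p 80:80 hellowhale
$ docker inspect --format '{{.State.Health.Status}}' hellowhale-hc

# Kubernetes: liveness and readiness probes are declared on the pod spec;
# 'kubectl explain' documents the available fields
$ kubectl explain pod.spec.containers.livenessProbe
$ kubectl explain pod.spec.containers.readinessProbe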
Why Stop Inventing New DevOps Combinations?

DevOps - What's in a name?

The term DevOps is well known by now. It was initially introduced by Patrick Debois, a Belgian IT consultant who organized an agile-oriented event in October 2009 and named it DevOpsDays, targeting not only developers but also systems administrators, managers, and toolsmiths from all over the world. After the conference, the conversations continued on Twitter under the hashtag #DevOps. If you want to know more about the origin of DevOps, you can check the video below, which gives a lot of background on why Patrick Debois started the DevOpsDays conference in the first place.

DevOps and the rise of the combinations and derivatives

With the increasing popularity of DevOps, more people have started to give their own definition of DevOps. The definitions that go around can differ, depending on which aspect(s) of DevOps you want to focus on. In a previous article, I wrote about how to explain DevOps in 5 letters - CALMS or CALMR, i.e. the CALMS framework for DevOps. Some other definitions tend to focus primarily on the automation aspect, omitting the Agile foundation. As a consequence, you get the first combination of DevOps, named BizDevOps or BusDevOps. There are different interpretations of what BizDevOps actually means: "BizDevOps, also known as DevOps 2.0, is an approach to software development that encourages developers, operations staff and business teams to work together so the organization can develop software more quickly, be more responsive to user demand and ultimately maximize revenue."

At the same time, it is the most disputable definition. It assumes that DevOps is mainly a technology-driven initiative that hardly involves business people. But as mentioned in my previous article, the foundation of DevOps is culture, which goes back to the agile principles. And we all know that agile without business is only symptomatic. So DevOps without business is as symptomatic as agile without business. According to the DZone article, DevOps focuses on a single application or system, whereas BizDevOps focuses on the entire enterprise with all its complex processes and the mixture of applications and systems that support those processes. According to that article, BizDevOps provides an answer to dealing with the links and dependencies between those systems and applications. OK, fair point, but these aspects could just as well be tackled by defining proper value streams and Agile Release Trains. I don't see the need to come up with a different term.

I guess you understand by now that I am not a big fan of the BizDevOps term and the confusion it creates. But it can get worse. It was most likely some clever tool vendors who came up with the term DevSecOps. And if it was not the tool vendors who invented it, they were at least clever enough to jump on the wagon to support the need for more security awareness in DevOps. Nowadays, large tool vendors use the term DevSecOps instead of DevOps. Here's my opinion on this: security should be an integral part of DevOps. It should be part of the culture: don't only think about what something functionally should do, but also about what can go wrong (think abuse or misuse cases). It is also part of the automation: all security-related tests should be automated as much as possible. Think about scanning for vulnerabilities in your own source code and in the external libraries that you use, scanning your container images for vulnerabilities, or even, up to some extent, automated penetration testing. And it is part of the Lean principles too: when a security test in your build pipeline fails (e.g. scanning your source code discovers a critical vulnerability), you stop the line.
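As an illustration, a pipeline step that stops the line on a critical finding might look like the sketch below. It assumes the open-source Trivy scanner and a placeholder image name; any comparable scanner that returns a non-zero exit code on findings works the same way.

# Fail the build if the image contains HIGH or CRITICAL vulnerabilities
$ trivy image --exit-code 1 --severity HIGH,CRITICAL userid/myapp:latest
# A non-zero exit code stops the pipeline: the Lean "stop the line" principle applied to security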
So again, there is no reason why the term DevSecOps should exist at all. Now that we have business and security covered, we can go on and see who else could feel denied, or at least ignored. Maybe DBAs? Or any other person involved in data management? Maybe that is the reason why we also have DevDataOps nowadays. I could go on like this for a while, but you get the point by now: it is useless.

Maybe the DAD is right!

I recently read an interesting article on Disciplined Agile Delivery, the information portal from Mark Lines and Scott Ambler about their Disciplined Agile Delivery framework, or DAD for short. DAD is not, as they call it, an agile methodology, but a process selection framework. DAD is the kernel of a layered model, like an onion, that they call Disciplined Agile. Let's explore each aspect of the Disciplined Agile framework, as shown in the diagram:

1. Disciplined Agile Delivery (DAD)
The Disciplined Agile Delivery (DAD) aspect consists of initial modeling and planning, forming the team, securing funding, continuous architecture, continuous testing, continuous development, and governance all the way through the lifecycle. The DAD framework supports multiple delivery life cycles: a basic/agile lifecycle based on Scrum, a lean lifecycle based on Kanban, and a modern agile lifecycle for continuous delivery. This aspect addresses all aspects of solution delivery.

2. Disciplined DevOps
Disciplined DevOps streamlines IT solution development and IT operations activities, and supports organization-IT activities, to deliver more effective outcomes to organizations.

3. Disciplined Agile IT (DAIT)
The DAIT aspect helps you understand how to apply agile and lean strategies to IT organizations. It comprises all IT-level activities such as enterprise architecture, data management, portfolio management, IT governance, and other capabilities.

4. Disciplined Agile Enterprise (DAE)
A DAE can predict and respond quickly to changes in the marketplace by facilitating change through organizational culture and structure. This aspect applies to organizations that have a learning mindset in the mainstream business and underlying lean and agile processes to drive innovation.

The second layer, Disciplined DevOps, deals exactly with what I mentioned before: the different derivatives and combinations of DevOps. It starts by giving an answer to the question of why it is so difficult to come to a common definition of DevOps:

Specialized IT practitioners. Many IT professionals still tend to specialize and choose a focus, like DBA, enterprise architect, operations engineer, or whatever.
Each discipline will focus on its own aspect of DevOps.

Agilists are focused on continuous delivery. Because of their focus on releasing daily or even several times a day, a lot of discussions deal with bringing new features faster and more frequently to production, without paying attention to all aspects of DevOps.

Operations professionals are often frustrated. Systems administrators are crunched between the push of development teams to deliver faster and more frequently and the typically stringent service management processes they have to deal with, which are not yet adapted to the need for more frequent changes.

Tool vendors have limited offerings. A fool with a tool is still a fool: DevOps tool vendors only focus on the DevOps aspects that their tools cover.

Service vendors have limited offerings. Similarly to tool vendors, service vendors will only focus on the DevOps aspects that their services can currently cover.

Tool vendors treat DevOps as a marketing buzzword. Surfing the waves of the hype, vendors might be persuaded to rebrand their existing toolset into something DevOps-ish, because it sounds better in a sales pitch. Sounds like window dressing.

The DevOps = Cloud vision. Apparently, some people think that implementing DevOps in your organization can only succeed if you move to a cloud-based platform. Although cloud-native development practices are a facilitator for implementing DevOps, they are not a requirement, and moving to a cloud platform definitely isn't one either.

All these reasons lead people to come up with DevOps combinations that answer only part of the problem. Disciplined DevOps mentions the following visions:

1. BizDevOps
BizDevOps, also called BusDevOps, is a basic DevOps vision that explicitly brings the customers into the picture. It is not just for IT delivery teams; it is potentially applicable to any team supporting an incremental delivery lifecycle. The BizDevOps workflow adds business operations: the activities of delivering products and services to the organization's customers. BusDevOps seeks to streamline the entire value stream, not just the IT portion of it.

2. DevSecOps
Another common improvement over the basic DevOps vision is DevSecOps. The aim behind this vision is to ensure data security by surfacing the various security issues, adopting the latest security practices, and finding and addressing the highest-priority security gaps [DevSecOps]. This vision includes collaborative security engineers, exploit testing, real-time security monitoring, and building "rugged software" that has built-in security controls.

3. DevDataOps
The aim behind DevDataOps is to maintain a balance between the current needs of data management, which consists of providing timely and accurate information to the organization, and the DevOps need to respond quickly to the marketplace. Supporting data-management activities include the definition, support, and evolution of data and information standards and guidelines; the creation, support, evolution, and operation of data sources of record within your organization; and the creation, support, evolution, and operation of data warehouse (DW) / business intelligence (BI) solutions.

Or should we just stick to the term DevOps?

Even though the message of Scott Ambler and Mark Lines is perfectly reasonable, not everybody might like the term Disciplined DevOps.
It fits their framework like a glove: everything boils down to Disciplined. If you don’t want to be framed into the Disciplined Agile/DevOps framework (pun intended), you may as well stick to the term DevOps and make sure that you cover all the aspects, which include business, security, data, release management and support.