# What is Blue Green Deployment?

• by kanav preet
• 12th Jan, 2021
• Last updated on 25th Jan, 2021

Deployment is the process of pushing code and related updates to servers to make software available for use.

Demand for continuous deployment keeps growing, as teams try to stay current with software updates and give users a high-quality application experience. Many techniques for this are available in the market, and in this article we will discuss Blue-Green Deployment.

## What is Blue-Green Deployment?

Blue-Green Deployment is a software release model that consists of two identical production environments, Blue and Green, configured so that one environment is live while the other sits idle in a staging state.

The idea is to redirect traffic between two environments running different versions of the application.

This process eliminates downtime and reduces the risk associated with deployment. If any error occurs in the new version, we can immediately roll back to the stable version by swapping the environments.
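The live/idle swap described above can be sketched in a few lines. This is a minimal illustration of the idea, not a production router; the class and names here are invented for the example.

```python
# Minimal sketch of blue-green switching: one environment is live,
# the other idle, and a single "swap" flips them (which is also
# exactly how a rollback works).

class BlueGreenRouter:
    def __init__(self, blue_version, green_version):
        self.environments = {"blue": blue_version, "green": green_version}
        self.live = "blue"  # blue serves traffic initially

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def serve(self):
        """Return the version that currently receives user traffic."""
        return self.environments[self.live]

    def swap(self):
        """Flip live and idle; rollback is just another swap."""
        self.live = self.idle

router = BlueGreenRouter(blue_version="v1", green_version="v2")
print(router.serve())  # v1 (blue is live)
router.swap()          # release: green (v2) goes live
print(router.serve())  # v2
router.swap()          # error found: roll back by swapping again
print(router.serve())  # v1
```

Note that nothing is redeployed during the swap; only the pointer to the live environment changes, which is why the rollback is instant.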

In some organizations, it is also known as Red-Black Deployment.

### Working of Blue Green Deployment:

To implement Blue-Green deployment, there must be two identical environments, plus a router or load balancer so that traffic can be routed to the desired environment.

Consider two production environments, Blue and Green. The Blue environment is where the current version (say, version 1) of the application runs, and it is live: the router or load balancer (depending on the infra setup) directs all application traffic to Blue. Meanwhile, version 2 of the application is deployed and tested on the Green environment.

At this stage, the Blue environment is live and Green is idle, in a staging state.

Once version 2 is tested and ready for production, we redirect traffic from the Blue environment to the Green environment, making Green live and Blue the staging environment. If any problem is detected with the infrastructure or application after version 2 goes live, we can roll back to the previous version just by swapping the environments again.

Blue-Green deployment meets the requirements of a seamless, safe and fully reversible deployment, but some practices need to be adopted for a smooth process, e.g. automating the workflow so that there is minimal human intervention, reducing the chance of manual error.

Along with that, it is important to have monitoring in place for both environments.
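One common use of that monitoring is to gate the swap itself: only make Green live once its health checks pass. A minimal sketch, assuming the probes are simple callables standing in for real checks (HTTP pings, metric queries, etc.):

```python
# Illustrative only: refuse to swap unless every health probe of the
# idle (new) environment passes, and at least `required_passes` probes
# were actually run.

def safe_to_swap(checks, required_passes=3):
    """Return True only if all probes pass and enough probes were run."""
    results = [check() for check in checks]
    passed = sum(1 for ok in results if ok)
    return passed == len(results) and passed >= required_passes

# Simulated probes for the staged (green) environment:
probes = [lambda: True, lambda: True, lambda: True]
print(safe_to_swap(probes))  # True: all probes passed, swap may proceed
```

In a real setup the probes would hit the idle environment's own URL, so a bad release is caught before any user traffic reaches it.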

### Tools and Services for Set-Up:

Depending on the infrastructure and application, various services such as Docker, Kubernetes, cloud platforms, Cloud Foundry etc. are available to implement Blue-Green Deployment.

Below, we discuss Blue-Green deployment on the cloud and the steps to implement it.

### The advent of Cloud in Blue-Green Deployment:

The advent of cloud computing has helped reduce the risks associated with deployment.

Cloud utilities for infrastructure management, billing and automation have made Blue-Green Deployment easier to implement, quicker, and lower in cost.

### AWS Services for Blue-Green Deployment:

By using AWS for Blue-Green Deployment, we get access to many services that help automate deployment and infrastructure, such as the AWS CLI, SDKs, ELB, Elastic Beanstalk, CloudFormation etc. AWS offers a number of solutions we can use, some of them being:

• DNS routing with Route 53
• Swapping the Auto Scaling group behind an ELB
• Using Elastic Beanstalk and swapping the application
• Blue-Green Deployment using AWS CodeDeploy
• Cloning the stack in AWS OpsWorks and updating DNS
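The first option, DNS routing, shifts traffic by adjusting record weights. The toy model below simulates Route 53-style weighted routing in plain Python (the function and environment names are made up for illustration): moving all weight to Green is the cutover, and setting it back to zero is the rollback.

```python
import random

# Toy model of weighted DNS routing between two environments.
# A resolver picks an environment in proportion to its record weight.

def pick_environment(weights, rng=random):
    """Choose an environment in proportion to its DNS record weight."""
    live = {env: w for env, w in weights.items() if w > 0}
    roll = rng.uniform(0, sum(live.values()))
    for env, weight in live.items():
        roll -= weight
        if roll <= 0:
            return env
    return env  # floating-point edge case: fall back to last live env

weights = {"blue": 100, "green": 0}   # all traffic on blue
print(pick_environment(weights))      # always "blue"

weights = {"blue": 0, "green": 100}   # cut over to green
print(pick_environment(weights))      # always "green"
```

With intermediate weights (say 90/10) the same function models a gradual shift, which is how weighted DNS also supports canary-style rollouts.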

We will discuss using Elastic Beanstalk and swapping the application in detail.

**Using Elastic Beanstalk and swapping the application**

Elastic Beanstalk makes deployment easy. Once we upload a version of the application code to Elastic Beanstalk and provide information about the application, it deploys the application in the Blue environment and provides its URL.

That environment configuration is then copied and used to launch the new version of the application, i.e. the Green environment, with its own, different URL.

At this point our application is up with two environments, but traffic is going only to the Blue environment.

To switch to Green and serve traffic to it, choose the other environment's details in the Elastic Beanstalk console and swap it using the Actions menu. This leads Elastic Beanstalk to perform a DNS switch, and once the DNS changes are done, we can terminate the Blue environment. In this way, traffic is redirected to the Green environment.

For a rollback, we simply perform the environment swap again.
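The console swap above can also be scripted. To the best of my knowledge this corresponds to Elastic Beanstalk's SwapEnvironmentCNAMEs API; the sketch below injects the client so it can be exercised without AWS credentials (with real AWS you would pass `boto3.client("elasticbeanstalk")` instead of the fake):

```python
# Hedged sketch: swap the CNAMEs of two Elastic Beanstalk environments.
# The client is a parameter so the logic is testable without AWS.

def swap_environments(client, blue_env, green_env):
    """Ask Elastic Beanstalk to swap the CNAMEs of two environments."""
    return client.swap_environment_cnames(
        SourceEnvironmentName=blue_env,
        DestinationEnvironmentName=green_env,
    )

class FakeClient:
    """Stand-in for boto3's elasticbeanstalk client, for illustration."""
    def swap_environment_cnames(self, **kwargs):
        return {"swapped": (kwargs["SourceEnvironmentName"],
                            kwargs["DestinationEnvironmentName"])}

print(swap_environments(FakeClient(), "my-app-blue", "my-app-green"))
```

Calling the same function again with the arguments reversed (or even unchanged) performs the rollback, mirroring the console behaviour described above.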

### Steps to perform Blue-Green deployment in AWS:

• Open the Elastic Beanstalk console in AWS and select the Region where the environment should be set up.
• Either launch a new environment or clone the existing environment.
• Deploy and test the new application version: choose the environment name from the list, then click Upload and deploy.
• Use the on-screen form to upload the source bundle.
• On the Overview page, choose Environment actions and then Swap environment URLs.
• Under "Select an environment to swap", choose the environment name and click Swap.

## Who Can Benefit from Blue-Green Deployments?

Blue-Green Deployment gives us minimal downtime and reliable deployment.

Blue-Green Deployment has become useful for development teams deploying applications, but it requires the conditions below:

• There must be identical and isolated environments.
• A router or load balancer must be available.
• The system should work with continuous updates.

### Different Types of Deployment

A number of deployment techniques are used in the industry. As a DevOps engineer, it is important to understand the different techniques available for your infrastructure and to choose the right one according to its impact on the end user.

1. Blue-Green Deployment: provides high availability and rollback in case critical bugs are found. It consists of two environments running in parallel: one is live and the other is in staging, thereby keeping the application free of downtime.
2. A/B Deployment: similar to Blue-Green Deployment, with the difference that we send only a small amount of traffic to the other server (environment). A/B deployment is generally used to check the utilization of features in the application, and also to gather user feedback on the new version.
3. Canary Deployment: used when we need to release the application's features to subsets of users. Generally in canary we have sets of servers assigned to different sets of users. This deployment is important when we need to ship features while also getting feedback.
4. Rolling Deployment: we replace the currently running code on servers with the new version one batch at a time. Pausing the deployment is much easier here.
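The contrast between rolling and blue-green deployment is easiest to see in code. This toy sketch (all names invented) upgrades a fleet in small batches, yielding between batches, which is the pause/abort point that rolling deployment offers and blue-green does not:

```python
# Illustrative rolling deployment: servers are upgraded in place,
# batch by batch, instead of blue-green's all-at-once traffic switch.

def rolling_deploy(servers, new_version, batch_size=2):
    """Yield the fleet state after each batch is upgraded."""
    servers = list(servers)
    for start in range(0, len(servers), batch_size):
        for i in range(start, min(start + batch_size, len(servers))):
            servers[i] = new_version
        yield list(servers)  # pause point: inspect/abort between batches

fleet = ["v1"] * 4
for state in rolling_deploy(fleet, "v2"):
    print(state)
# ['v2', 'v2', 'v1', 'v1'] then ['v2', 'v2', 'v2', 'v2']
```

During a rolling deployment both versions serve traffic at once, so unlike blue-green there is no single instant rollback point; pausing mid-rollout is the trade-off you get instead.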

### Benefits of Blue-Green Deployment:

1. No-downtime deployment: whenever a critical bug is found on the production server, traffic is redirected to the other environment. This means no downtime for the end user.
2. Standby: whenever there is a system failure, we can immediately roll back and recover safely without disturbing the end user. With Blue-Green deployment, once we switch to the new version of the application, the older version is still available; in case of recovery, we simply swap the environments and redirect traffic back to the old version. Blue-Green has proven impactful in reducing risk in the application development process.
3. Immediate rollback: when a new feature is not working properly, we can switch back to the older version of the application by performing a rollback.
4. Testing in the prod environment: sometimes a new set of code works fine locally but becomes problematic when deployed on the larger infrastructure. Using Blue-Green Deployment, we can check the performance of the code on a production server without disturbing users.
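"Testing in the prod environment" usually means running smoke checks against the idle environment's own URL before it takes real traffic. A minimal sketch, where `fetch` is a stand-in for a real HTTP GET and the URL is hypothetical:

```python
# Sketch: smoke-test the idle (green) environment on production
# infrastructure before swapping any user traffic to it.

def smoke_test(url, fetch):
    """Return True if the new environment answers basic requests."""
    status, body = fetch(url)
    return status == 200 and "error" not in body.lower()

def fake_fetch(url):
    # Simulated response from the green environment's own URL.
    return 200, "OK: version 2 is healthy"

print(smoke_test("http://my-app-green.example.com/health", fake_fetch))  # True
```

Because the checks go to the idle environment's private URL, a failing version 2 is caught on real production infrastructure while users are still served by version 1.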

Although many teams are moving toward Blue-Green Deployment, there are cases where this process is not recommended, because it involves risks that make the deployment more prone to failure and breakdown:

1. Database sync-up: schema changes are complex to decouple. With Blue-Green deployment, the database and data changes must stay synchronized between the Blue and Green environments; with a relational database, this can lead to discrepancies.
2. QA/UAT failure detection: on large infrastructures, QA test cases sometimes fail to detect errors/bugs in the non-live environment.
3. Dashboard required: since we have two identical production environments running different versions of the code, it becomes important while running the deployment to monitor insights into packages and code at any point in time.
4. Cost: Blue-Green Deployment keeps two sets of environments running in parallel at all times, doubling the cost of running and maintaining production environments.

## Conclusion:

Blue-Green deployment is one of the favourable techniques for deploying an application. Since every deployment technique and application has its own pros and cons, the team should collaborate on choosing the right deployment technique for their application, according to the tools and services used to host it.

No fixed approach suits every scenario; do extensive research before settling on any deployment technique.

### Kanav Preet

Author

Kanav works as an SRE in a leading fintech firm, with experience in CI/CD pipelines, cloud, automation, and build, release and deployment. She is passionate about leveraging technology to build innovative and effective software solutions. Her insight, passion and energy engage a strong clientele who move ahead with her ideas. She holds various certifications in Continuous Delivery & DevOps (University of Virginia), Tableau, Linux (Linux Foundation) and many more.

## Introduction to Docker, Docker containers & Docker Hub

You landed here, which means you want to know more about DevOps, and you can admit it! The business community has taken this trend to the next level, not because it looks fancy, but because the process has proven its worth: the growth and adoption of this model are increasing companies' productivity. Here we will get an idea of how this model works and how we can enable it across an organization. According to DevOps.com, between 2015 and 2016 the adoption rate of DevOps rose significantly, with a 9 per cent increase in its usage rate. You may have a look at the DevOps Foundation training course to know more about the benefits of learning DevOps.

### 1. What is DevOps

DevOps is a practice culture uniting people, process, and tools to enable faster delivery. This culture helps automate the process and narrow the gap between development and IT. DevOps is an abstract notion that focuses on key principles like speed, rapid delivery, scalability, security, collaboration and monitoring.

A definition from Gartner says:

“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology— especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”

### 2. History of DevOps

Let's talk about a bit of history in cloud computing. The term “cloud computing” was coined in early 1996 by Compaq Computer, but it took a while for the platform to become easily accessible, even though they claimed it was a $2 billion-per-year market.
In August 2006 Amazon introduced easily accessible cloud infrastructure, and it became a trend: Google followed in April 2008, Microsoft in early 2010, and then IBM placed its foot in this vertical in April 2011. All the giants clearly believed in this revolution and saw potential in the technology.

In the same era, the DevOps process was pitched at a conference in Toronto in 2008 by Patrick Debois and Andrew Shafer. They proposed that a better approach could be adopted to resolve the conflicts between development and operations. The idea took off again when two Flickr employees delivered a seminar on how they were able to do 10+ deployments in a day. They presented a proven model that resolves the conflicts of dev and operations by making build, test and deploy an integrated development-and-operations process.

### 3. Why the market has adopted DevOps so aggressively

Louis Columbus wrote in Forbes that by the end of 2020, 83% of workloads would be on the cloud, with AWS and Google as the major market contributors. The new era works heavily with AI, ML, crypto, big data, etc., which plays a key role in cloud computing adoption today; at the same time, IT professionals say that security is their biggest concern in adopting cloud computing. Moreover, the cloud helped many startups grow in their initial stages and later become leaders in their markets, which has given confidence to fresh ideas. This entire cloud enablement has given teams more confidence to adopt a DevOps culture, as cloud expansion makes it possible to experiment more with less risk.
### 4. Why is DevOps used?

• To reduce the day-to-day manual work done by the IT team
• To avoid manual error while designing infrastructure
• To enable a smooth platform for technology or cloud migration
• To give amazing version control capabilities
• To handle resources better, whether cloud infrastructure or manpower
• To give an opportunity to pitch customers a realistic feature rollout commitment
• To adopt a better infrastructure scaling process, even when you receive 4X traffic
• To enable the opportunity to build a stable infrastructure

### 5. Business need and value of DevOps

Let's understand this through the story of Netflix, a leading online video streaming platform. A company called Blockbuster LLC had the opportunity in 2000 to buy Netflix for $50 million. Netflix was then only a DVD-by-mail service, but by 2016 Netflix was doing $8.83 billion of business, tremendous growth in this vertical. Any idea how this happened? It started with an incident at the Netflix office where, due to a database mismatch, DVD shipping was disrupted for three days, which forced management to move to the cloud from the relational systems in their data centers, because the incident made a massive impact on core values. The shift happened from vertical to horizontal scaling, with AWS later providing the cloud service; in the early stages they even worked together with the AWS team to scale the infrastructure. Today, Netflix serves roughly 86,000,000 members across 180 countries with about 150,000,000 hours of video content.
### 6. Goals of DevOps

• Control quality and increase the frequency of deployments
• Allow risk-free experiments
• Improve mean time to recovery and backup
• Handle release failures without losing live data
• Avoid unplanned work and technical failure
• Achieve a compliance process and control over the audit trail
• Alert and monitor system failures at an early stage
• Maintain SLAs in a uniform fashion
• Enable control for the business team

### 7. How does DevOps work?

The DevOps model usually keeps the development and operations teams tightly coupled; sometimes they merge, and together they roll out the entire release cycle. Sometimes the security and network teams are also involved, and slowly this has been coined DevSecOps. The integration of these teams makes sure that together they can crack development, testing, deployment, infrastructure provisioning, monitoring, network firewalling, and infrastructure accessibility and accountability. This helps them build a clean application development lifecycle to deliver a quality product.

### 8. DevOps workflow/lifecycle

The DevOps workflow ensures that we spend time on the right thing, meaning the right time is invested in building the product and its infrastructure. The DevOps process can look like an extended version of agile methodologies, but that does not mean it cannot fit other SDLC methodologies; there is enough scope in other SDLC processes as well. Merging the process workflow and the tools workflow showcases a general DevOps environment: the team keeps pushing releases while automation and tooling maintain quality and speed.
### 9. DevOps values

I would like to split DevOps values into two groups: business values and organizational values.

Business values are customer-centric:

• How fast do we recover if there is any failure?
• How can we pitch an exact MRR to a customer and acquire more customers?
• How fast can we deliver the product to customers?
• How do we roll out beta access quickly if there is an on-demand requirement?

Organizational values:

• Building culture
• Enabling communication and collaboration
• Optimizing and automating the whole system
• Enabling feedback loops
• Decreasing silos
• Metrics and measurement

### 10. Principles of DevOps

Automated: Automate as much as you can, in a linear and agile manner, so you can build an end-to-end automated pipeline for the software development life cycle that effectively addresses quality, rework, manual work and cost. And it is not only about the cycle; it also covers migration from one technology to another, one cloud to another, etc.

Collaborative: The goal of this culture is to keep a hold on both development and operations: keep an eye on the gaps and fix them to keep things moving in an agile way, which needs a good amount of communication and coordination. By encouraging a collaborative environment, an organization gets ample ideas, which helps resolve issues much faster. The beauty of collaboration is that it handles all unplanned and manual work at an early stage, which results in a quality build and process.

Customer-centric approach: A DevOps team should act like a startup and keep a finger on the pulse to measure customer demand. The metrics they generate give the business team insight into usage trends and burn rate. But of course, to find the signal in the noise, you should stay focused and collect only the metrics that really matter.

Performance orientation: Performance is a principle and a discipline that gives the team insight into the implications of bad performance. If we have metrics and reports handy before moving to production, both technology and business gain confidence. This gives an opportunity to plan how to scale the infrastructure, how to handle a huge spike or high usage, and how well the infrastructure is utilized.

Quality indicators per application: Another predefined principle is to set measurable quality indicators. Assigning a quality gate to indicators with predefined targets, covering fitness for purpose and security, makes it possible to deliver a complete, quality application.

### 11. DevOps key practices

i) CI/CD

When we say “continuous”, it doesn't translate to “always running”, but it does mean “always ready to run”.

Continuous integration is the development philosophy and practice that drives teams to check code in to a version control system as often as possible. To keep your build clean and QA-ready, each developer's changes need to be validated by running automated tests against the build (for example with JUnit or iTest).
The goal of CI is to put a consistent, automated way to build and test applications in place, which results in better collaboration between teams and, eventually, a better-quality product.

Continuous Delivery is an adjunct of CI which makes sure we can release new changes to customers quickly, in a sustainable way. A typical CD flow involves the steps below:

• Pull code from a version control system like Bitbucket and execute the build.
• Execute any required infrastructure commands or scripts to stand up or tear down cloud infrastructure.
• Move the build to the right compute environment.
• Handle all the configuration generation.
• Push application components to their appropriate services, such as web servers, API services, and database services.
• Execute any steps required to restart services or call the service endpoints needed for new code pushes.
• Execute continuous tests, and roll back environments if tests fail.
• Provide log data and alerts on the state of the delivery.

The table below summarizes the effort required and the gain from enabling CI/CD:

| Practice | Effort required | Gain |
| --- | --- | --- |
| Continuous integration | a) Your team will need to write automated tests for each new feature, improvement or bug fix. b) You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed. c) Developers need to merge their changes as often as possible, at least once a day. | a) Control over regressions, which are captured at an early stage by automated testing. b) Less context switching, as developers are alerted as soon as they break the build and can fix it before moving to another task. c) Building the release is easy, as all integration issues have been solved early. d) Testing costs are reduced drastically: your CI server can run hundreds of tests in a matter of seconds. e) Your QA team spends less time testing and can focus on significant improvements to the quality culture. |
| Continuous delivery | a) You need a strong foundation in continuous integration, and your test suite needs to cover enough of your codebase. b) Deployments need to be automated. The trigger is still manual, but once a deployment is started there shouldn't be a need for human intervention. c) Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production. | a) The complexity of deploying software is taken away; your team doesn't have to spend days preparing a release anymore. b) You can release more often, thus accelerating the feedback loop with your customers. c) There is much less pressure on decisions for small changes, hence encouraging faster iteration. |

ii) Maintaining infrastructure in a secure and compliant way

Keeping infrastructure secure and compliant is also a DevOps responsibility; some organizations nowadays pitch it as SecOps. General and traditional security methodologies fail when you have a multi-layer or multi-cloud system running your product, and they can really fail when you are moving with a continuous delivery process. So the job is to ensure the team has clear visibility of the risks and vulnerabilities that may cause a big problem.
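The CD steps listed earlier (build, test, deploy, roll back on failure) can be sketched as a toy pipeline runner. The stage names and return shape below are invented for illustration, not any particular CI tool's API:

```python
# Toy pipeline runner: run stages in order; on the first failure,
# stop and report a rollback instead of continuing to deploy.

def run_pipeline(stages):
    """Run (name, func) stages; roll back as soon as one fails."""
    completed = []
    for name, stage in stages:
        if not stage():
            return {"status": "rolled back", "failed": name, "completed": completed}
        completed.append(name)
    return {"status": "released", "completed": completed}

pipeline = [
    ("checkout", lambda: True),
    ("build",    lambda: True),
    ("test",     lambda: False),  # a failing test triggers rollback
    ("deploy",   lambda: True),   # never reached in this run
]
print(run_pipeline(pipeline))
# {'status': 'rolled back', 'failed': 'test', 'completed': ['checkout', 'build']}
```

Real CI servers add queuing, logs and notifications around exactly this loop; the key property shown here is that a failing stage prevents every later stage from running.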
Below is a basic set of rules for avoiding small loopholes (specific to GCP):

Level 1:
- Define your resource hierarchy.
- Create an Organization node and define the project structure.
- Automate project creation, which helps achieve uniformity and testability.

Level 2:
- Manage your Google identities.
- Synchronize your existing identity platform.
- Have single sign-on (SSO).
- Don't add multiple users; instead, grant resource-level permissions.
- Be specific when granting resource access, including the action type (read, write and admin).
- Always use service accounts to access metadata: they authenticate with keys instead of passwords, and GCP rotates the service account keys for code running on GCP.

Level 3:
- Add a VPC for network definition and create firewall rules to manage traffic.
- Define specific rules to control external access and avoid leaving unwanted ports open.
- Create a centralized network control template that can be applied across projects.

Level 4:
- Enable centralized logging and monitoring (Stackdriver preferred).
- Enable audit logs, which collect Admin, System Event and Data Access activity in terms of who did what, where and when.

Level 5:
- Enable coldline data storage if there is a need to keep a copy for disaster management.

For security standards in AWS, there is an article I posted a few months back.

### 12. DevOps myths, or what DevOps is not

Before the myths, I will clarify the biggest one, which every early-stage learner carries: “DevOps practice can be rolled out in a day, and output will be available from day one.” That conclusion comes too early; as the definition says, DevOps is a culture and a process, which cannot be built in a day. But of course, you will get the opportunity to overcome your mistakes at an early stage.
Let's discuss a few more myths:

• It's only about the tools (tools are just one component of the whole DevOps practice)
• Dev and ops teams should use the same set of tools (overcome this by pushing them to integrate both)
• Only startups can follow this practice (Azure has published an article on DevOps best practices which says it can be applied anywhere)
• Joining DevOps conferences with fancy stickers (good that you join, but don't pretend you now carry the DevOps tag)
• Pushing builds to production every 5 minutes (this is not what continuous delivery means)
• DevOps doesn't fit the existing system (overcome this by finding the right approach and making an attempt)

### 13. Benefits of DevOps

Business benefits

a) Horizontal and vertical growth: By “horizontal and vertical growth” I mean customer satisfaction on one axis, business on another, and time on a third. How does DevOps drive growth on two axes at once? The answer is the quick turnaround time for minor and major issues: once we adopt DevOps, we scale and build in such a fashion that the graph shows a rapid jump in less time.

b) Improving the ROI of data: Having DevOps in an organization ensures we can get a decent ROI from data at an early stage, more quickly. The software industry runs on data, so a team should have end-to-end control of it, and DevOps helps the team crunch data in various ways by automating small jobs. Through automation we can segregate and justify data, which can then be populated in a dashboard or presented offline to the customer.

Technical benefits

c) Scalability and quality: When a business starts reaching more users, we look at increasing infrastructure and bandwidth. On the other hand, questions pop up: are we scaling our infrastructure the right way, and if a lot of people are pushing changes (code commits/builds), do those changes have the same or greater quality than before? Both questions are now answered by the DevOps team. If the business pitches that we might hit 2000+ clients with billions of requests, DevOps takes the responsibility and says the infrastructure can be scaled at any point in time. And if at the same time the internal release team says it wants to deliver 10 features in the next 10 days independently, DevOps says quality can be maintained.

Culture benefits

d) Agility and velocity: The key reason for adopting DevOps is to improve the velocity of product development. DevOps enables agility, and when both are in sync we can easily observe the velocity. End users' expectations are always high, while the deliverable time span is short; to meet this, we have to ensure we can roll out new features to customers at much higher frequencies, otherwise competitors may win the market.

e) Enabling transparency: A practice of total transparency is a key part of the DevOps culture. Sharing knowledge across the team gives you the opportunity to work faster and stay aligned with the goal. Transparency encourages an increasingly well-rounded team with heightened understanding.

### 14. How to adopt a DevOps model

The ideal way is to start with a small part of the project or product, but sometimes we start adopting only when we hit a bottleneck. Wherever you start, a few things need to be taken care of: the goal should be clear and the team in sync to chase it; loopholes that could turn into downtime should be found; testing (stress, performance, load) should be in place to avoid production glitches; and the deployment process should be automated.
All of this starts from a basic plan and can then be elaborated in a detailed format. While adopting a DevOps model, make sure the team is always looking at metrics, so they can justify the numbers and make assumptions toward the goal. If you want a roadmap for DevOps adoption, you really need to find the gaps up front, along with the typical everyday problems that hold up your release process or waste your team's time.

### 15. DevOps automation tools

Jenkins: Jenkins is an open-source automation server used to automate the software build and deliver or deploy the build. It can be installed through native system packages, Docker, or run standalone on any machine with a Java Runtime Environment (JRE) installed. In short, Jenkins enables continuous integration, which helps accelerate development. There are ample plugins available which enable integration at various DevOps stages, for example Git, Maven 2 project, Amazon EC2 and HTML publisher. More in-depth information can be found in our training material on Jenkins, and if you are inquisitive about what sort of Jenkins questions are asked in job interviews, feel free to view our set of 22 Jenkins interview questions.

Ansible: An open-source platform which helps automate the IT engine, taking the drudge work out of DevOps' day-to-day life. Usually Ansible helps with three day-to-day tasks: provisioning, configuration management and application deployment. The beauty is that it can automate traditional servers, virtualization platforms, or the cloud. It is built on playbooks, which can be applied to an extensive variety of systems for deploying your app. To know more, you may have a look at our Ansible training material or go through our set of 20 interview questions on Ansible.

Chef: An open-source configuration management system that works on a master-client model. It has a transparent design and works on instructions that need to be defined properly. Before you plan to use this tool, make sure you have a proper Git practice going and some idea of Ruby, as Chef is completely built on Ruby. The industry says it is good to have for development-focused environments and enterprise-mature architectures. Our comprehensively detailed training course on Chef will give you more insight into this tool.

Puppet: Puppet also works as a master-client setup and is model-driven. It is built on Ruby, but you can customize it with a scripting language somewhere close to JSON. Puppet helps you take control of full-fledged configuration management, and it helps admins (as part of DevOps) add stability and maturity to configuration management. A more detailed explanation of Puppet and its functionality can be found in our training material.

Docker: A tool designed to help developers and administrators reduce system count by providing flexibility: Docker does not create a complete virtual operating system; instead, applications share the same Linux kernel as the host. So we use Docker to create, deploy, and run applications in containers. Docker reports that over 3.5 million applications have been placed in containers using Docker and 37 billion containerized applications have been downloaded. Specifically, Docker in CI/CD provides an environment exactly like a live server and lets you run multiple dev infrastructures from the same host with different configurations and operating systems. You may visit our training course on Docker to get more information.

Kubernetes: A platform developed to manage containerized applications, providing high availability and scalability. According to usage we can downgrade and upgrade the system; we can perform rolling updates, roll back, and switch traffic between two different versions of the application.
Multiple machines with Kubernetes installed can be operated together as a Kubernetes cluster. After that you get an API endpoint for the cluster; configure kubectl and you are ready to serve. Read our all-inclusive course on Kubernetes to gather more information.

Docker and Kubernetes, although both widely used as DevOps automation tools, have notable differences in their setup, installation, and attributes, clearly laid out in our blog on the differences between Docker and Kubernetes.

### Alerting

**Pingdom:** Pingdom is a platform that monitors the availability, performance, transactions (website hyperlinks), and incidents of your websites, servers, or web applications. The beauty is that if you use a collaboration tool like Slack or Flock, you can integrate it through a webhook (much simpler, no code required) and get notified at any time. Pingdom also provides an API, so you can build your own customized dashboard (recently introduced), and the documentation has enough detail to be self-explanatory.

**Nagios:** An open-source monitoring tool for computer networks. We can monitor servers, applications, incident managers, and more, and you can configure email, SMS, and Slack notifications, and even phone calls. Nagios is licensed under GNU GPLv2. Among the major components that can be monitored with Nagios: once installed, it provides a dashboard to monitor network services such as SMTP, HTTP, SNMP, FTP, SSH, and POP, with a view of current network status, problem history, log files, notifications triggered by the system, and so on. We can also monitor server resources such as disk drives, memory, processors, server load, and system logs.

**Stackdriver:** Stackdriver is another monitoring tool, giving visibility into the performance, uptime, and overall health of cloud-powered applications.
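At their core, availability checks like those Pingdom and Nagios perform ask one question: is the service reachable? That idea can be sketched in a few lines of Python; this is a toy illustration, not how either product is actually implemented:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a service and decide whether to raise an alert.
if __name__ == "__main__":
    status = "UP" if is_reachable("localhost", 22) else "DOWN"
    print(f"service is {status}")
```

Real monitoring tools layer scheduling, retries, escalation rules, and notification channels (email, SMS, Slack) on top of probes like this one.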
Stackdriver Monitoring collects events and metadata from Google Cloud Platform and Amazon Web Services (AWS), consumes that data, and generates insights via dashboards, charts, and alerts. For alerting we can integrate with collaboration tools like Slack, PagerDuty, HipChat, Campfire, and more.

Here is a sample log entry (values redacted) showing the parameters it collects, grouped for readability: log information, user details and authorization info, request type and caller IP, resource and operation details, and timestamp and status details:

```json
{
  "insertId": "...",
  "logName": "...",
  "operation": {
    "first": "...",
    "id": "...",
    "producer": "..."
  },
  "protoPayload": {
    "@type": "...",
    "authenticationInfo": {
      "principalEmail": "..."
    },
    "authorizationInfo": [
      {
        "granted": "...",
        "permission": "..."
      }
    ],
    "methodName": "...",
    "request": { "@type": "..." },
    "requestMetadata": {
      "callerIp": "...",
      "callerSuppliedUserAgent": "..."
    },
    "resourceName": "...",
    "response": {
      "@type": "...",
      "id": "...",
      "insertTime": "...",
      "name": "...",
      "operationType": "...",
      "progress": "...",
      "selfLink": "...",
      "status": "...",
      "targetId": "...",
      "targetLink": "...",
      "user": "...",
      "zone": "..."
    },
    "serviceName": "..."
  },
  "receiveTimestamp": "...",
  "resource": {
    "labels": {
      "instance_id": "...",
      "project_id": "...",
      "zone": "..."
    },
    "type": "..."
  },
  "severity": "...",
  "timestamp": "..."
}
```

### Monitoring

**Grafana:** An open-source visualization tool that can be used on top of different data stores such as InfluxDB, Elasticsearch, and Logz.io. We can create comprehensive charts with smart axis formats (such as lines and points) thanks to Grafana's fast client-side rendering, even over long time ranges, which uses Flot as the default renderer. Grafana offers three access levels, Viewer, Editor, and Admin, and you can even enable Google authentication for finer access control. A detailed information guide can be found here.

**Elasticsearch:** An open-source, real-time, distributed, RESTful search and analytics engine.
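A structured log entry like the Stackdriver sample above is just nested JSON, so the audit fields can be pulled out with ordinary dictionary access. A minimal sketch, where all field values are invented placeholders:

```python
import json

# A toy audit-log entry mimicking the field layout shown above (values invented).
raw = '''
{
  "insertId": "abc123",
  "protoPayload": {
    "authenticationInfo": {"principalEmail": "dev@example.com"},
    "methodName": "v1.compute.instances.insert",
    "requestMetadata": {"callerIp": "203.0.113.7"}
  },
  "severity": "NOTICE",
  "timestamp": "2021-01-12T10:00:00Z"
}
'''

entry = json.loads(raw)
payload = entry.get("protoPayload", {})

# Who did what, from where: the core audit questions.
who = payload.get("authenticationInfo", {}).get("principalEmail")
what = payload.get("methodName")
where = payload.get("requestMetadata", {}).get("callerIp")

print(f"{entry['timestamp']} {entry['severity']}: {who} called {what} from {where}")
```

Extracting a compact line like this per entry is exactly the kind of preprocessing done before shipping logs into a dashboarding tool such as Grafana.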
It collects unstructured data and stores it in a curated format optimized for language-based search. The beauty of Elasticsearch is its scalability, speed, document orientation, and schema-free design. It scales horizontally to handle enormous volumes of events per second, while automatically managing how indices and queries are distributed across the cluster for smooth operation.

### Cost Optimization

**reOptimize.io:** Once we run plenty of servers, we usually end up burning a good amount of money, not intentionally, but because we lack clear visibility. reOptimize helps here by providing detailed insight into cloud expenses. Integration can be done in three or four simple steps, but before that you may need to look at the prerequisites, which can be accessed here. Just a heads-up: reOptimize only needs read access for all of this, and the permissions documentation can be found here.

## 16. DevOps vs Agile

| DevOps | Agile |
| --- | --- |
| DevOps culture can be enabled in the software industry to deliver reliable builds. | Agile is a generic culture that can be adopted in any department. |
| The key focus is involvement in the end-to-end process. | Agile helps the management team push frequent releases. |
| Enables quality builds with rapid delivery. | Keeps the team aware of frequent changes in every release and feature. |
| DevOps has no such scheduled cadence; the aim is to avoid unscheduled disruptions. | Agile sprints work within the immediate future; a sprint life cycle varies between 7 and 30 days. |
| DevOps works on collaboration and bridges a large set of teams. | In Agile, teams can be minimal; even a single resource can be enough. |

## 17. Future of DevOps

The industry is moving more and more to the cloud, which adds a few more responsibilities for DevOps. The next hot topic could well be DevSecOps, because more automation means more connectivity, and more connectivity means more exposure.
AI and ML are data-centric and learning-based, which gives DevOps an opportunity to lend a hand in training ML models and in analyzing and correlating code, test results, user behavior, and production quality and performance. There is also an opportunity to break the stereotype that DevOps can only be adopted by startups; within the next two to three years it will surely become general practice in the enterprise.

## 18. Importance of DevOps Training and Certification

Certifications work like an add-on, and an add-on always yields crisp, cumulative results: adding a professional certificate to a resume gives it extra value. During the certification process, professional trainers help you understand DevOps culture in detail, and that deep dive gives the individual a clear picture. When choosing a certification, you should look into the vendor's reputation, the academic body giving its approval, transparency, session hours, and many other factors.

## 19. Conclusion

I have been working closely with and observing DevOps teams for a year or so, and I find that we learn something new every day. The deeper we dive, the more we see that a lot can be done, and achieved, in a couple of different ways. As the industry grows, the responsibilities of DevOps seem to increase, which creates opportunities for professionals but also keeps raising the bar for quality. Now that you know all about DevOps, feel free to read our blog on how you can become a DevOps Engineer.