
How to Install Docker on Ubuntu

Docker is a platform that packages an application and all its dependencies into a container so that the application runs seamlessly in any environment. A container runs the application as an isolated process, similar to a virtual machine, but far more portable. For a detailed introduction to the different components of a Docker container, you can check out Introduction to Docker, Docker Containers & Docker Hub.

This tutorial covers the installation and use of Docker Community Edition (CE) on an Ubuntu 20.04 machine.

Prerequisites

Audience: This tutorial is meant for those who are interested in learning Docker as a container service.

System Requirements:
- Ubuntu 20.04, 64-bit. (If you do not have a Linux OS on your system, you can run Ubuntu inside Oracle VirtualBox; the steps are given below.)
- A user account with sudo privileges.
- An account on Docker Hub, to pull or push images.

Ubuntu Installation on Oracle VirtualBox

If you want to use Ubuntu 20.04 without making any changes to your Windows operating system, you can use Oracle VirtualBox. VirtualBox is free and open-source virtualization software from Oracle that lets you install other operating systems in virtual machines. The host system should have at least 4 GB of RAM to get decent performance from the virtual operating system.

Below are the steps for installing Ubuntu 20.04 on Oracle VirtualBox. First, navigate to the Oracle VirtualBox website and download the latest stable version (.exe file).

1. Once VirtualBox is downloaded, download the Ubuntu disk image (.iso file) by clicking the download option on the Ubuntu website.
2. Once the Ubuntu .iso download has completed, open VirtualBox and click "New" at the top.
3. Enter the details of your virtual machine: any name, Type "Linux", and Version "Ubuntu (64-bit)".
4. Choose the memory (RAM) to be allocated to the virtual machine and click Next.
(I have chosen 3000 MB.)
5. After the RAM allocation, click "Create a virtual hard disk now". This serves as the hard disk of the virtual Linux system; it is where the virtual system will store its files.
6. Select the file type for the virtual hard disk.
7. Choose either the "Dynamically allocated" or the "Fixed size" option for creating the virtual hard disk.
8. Finally, specify the size of your Ubuntu OS disk. The recommended size is 10 GB, but it can be increased if required.
9. Ubuntu is now ready to install in VirtualBox, but before starting the virtual machine we need to make a few changes in Settings. Click Storage under Settings.
10. Click "Empty" under Controller: IDE. Navigate to Attributes and browse the Optical Drive option.
11. Choose the .iso file from the location where it was downloaded. Once selected, click OK and start the virtual machine by clicking Start in the top menu.
12. Click OK and start the machine.
13. Proceed with "Install Ubuntu".
14. Under the "Updates and other software" section, check "Normal installation" and the two options under "Other options", then continue.
15. Under "Installation type", check "Erase disk and install Ubuntu".
16. Choose your current location and set up your profile, then click Continue.
17. The installation may take 10-15 minutes to complete.
18. Once the installation finishes, restart the virtual system.

We are now done with the prerequisites and can proceed to install Docker on this Ubuntu machine.

Docker Installation Process on Ubuntu

Method 1: Install Docker on Ubuntu Using Default Repositories

One of the easiest ways to install Docker is from the standard Ubuntu 20.04 repositories. However, the default repositories may not carry the latest revision of Docker, since in some cases Docker does not yet support a particular Ubuntu version and the packaged version lags behind.
Log in to the virtual machine and run "docker" as a command to check whether it is already installed.

To install Docker, first update the package lists. The command will ask for your password; enter it and allow the system to complete the updates.

  sudo apt update

To install Docker from the Ubuntu default repositories, use:

  sudo apt install docker.io

To check the installed version, use:

  docker --version

As discussed above, this installed Docker 19.03.8, whereas the latest release at the time of writing was 20.10.

Method 2: Install Docker from the Official Repository

To install the latest version of Docker on Ubuntu 20.04, we enable the Docker repository, import the repository's GPG key, and finally install the package.

First, update your existing list of packages. The command will ask for your password; enter it and allow the system to complete the updates.

  sudo apt update

Install a few prerequisite packages needed to use an HTTPS repository:

  sudo apt install apt-transport-https ca-certificates curl software-properties-common

Import the repository's GPG key using the following curl command:

  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker APT repository to the system:

  sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database again so it includes the Docker packages:

  sudo apt update

Finally, install Docker:

  sudo apt install docker-ce

To check the installed version, use:

  docker --version

To check the status of Docker, and to start and enable it, use:

  sudo systemctl status docker
  sudo systemctl start docker
  sudo systemctl enable docker

To check system-wide information about the Docker installation, use the command "docker info".
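The add-apt-repository step above builds a plain "deb ..." source line from your release codename (on Ubuntu 20.04, lsb_release -cs prints "focal"). As a rough sketch of what that command adds, here is a hypothetical helper that only prints the string and does not touch any apt configuration:

```shell
# Hypothetical helper: build the Docker APT source line for a given
# Ubuntu release codename. This only prints the string; it does not
# modify the system's apt sources.
docker_repo_line() {
  codename="$1"
  echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu ${codename} stable"
}

# Ubuntu 20.04's codename is "focal"
docker_repo_line focal
```

On a real machine you would pass "$(lsb_release -cs)" rather than hardcoding the codename.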
The information shown includes the kernel version, the number of containers, and the number of images. The output will contain details like the below, depending on the daemon's configuration:

  $ docker info
  Client:
   Context:    default
   Debug Mode: true

  Server:
   Containers: 14
    Running: 3
    Paused: 1
    Stopped: 10
   Images: 52
   Server Version: 1.13.0
   Storage Driver: overlay2
    Backing Filesystem: extfs
    Supports d_type: true
    Native Overlay Diff: false
   Logging Driver: json-file
   Cgroup Driver: cgroupfs
   Plugins:
    Volume: local
    Network: bridge host macvlan null overlay
   Swarm: active
    NodeID: rdjq45w1op418waxlairloqbm
    Is Manager: true
    ClusterID: te8kdyw33n36fqiz74bfjeixd
    Managers: 1
    Nodes: 2
    Orchestration:
     Task History Retention Limit: 5
    Raft:
     Snapshot Interval: 10000
     Number of Old Snapshots to Retain: 0
     Heartbeat Tick: 1
     Election Tick: 3
    Dispatcher:
     Heartbeat Period: 5 seconds
    CA Configuration:
     Expiry Duration: 3 months
    Root Rotation In Progress: false
    Node Address: 172.16.66.128 172.16.66.129
    Manager Addresses:
     172.16.66.128:2477
   Runtimes: runc
   Default Runtime: runc
   Init Binary: docker-init
   containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531
   runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2
   init version: N/A (expected: v0.13.0)
   Security Options:
    apparmor
    seccomp
     Profile: default
   Kernel Version: 4.4.0-31-generic
   Operating System: Ubuntu 16.04.1 LTS
   OSType: linux
   Architecture: x86_64
   CPUs: 2
   Total Memory: 1.937 GiB
   Name: ubuntu
   ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326
   Docker Root Dir: /var/lib/docker
   Debug Mode: true
    File Descriptors: 30
    Goroutines: 123
    System Time: 2016-11-12T17:24:37.955404361-08:00
    EventsListeners: 0
   Http Proxy: http://test:test@proxy.example.com:8080
   Https Proxy: https://test:test@proxy.example.com:8080
   No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com
   Registry: https://index.docker.io/v1/
   WARNING: No swap limit support
   Labels:
    storage=ssd
    staging=true
   Experimental: false
   Insecure Registries:
    127.0.0.0/8
   Registry Mirrors:
    http://192.168.1.2/
    http://registry-mirror.example.com:5000/
   Live Restore Enabled: false

Note: If you get a permission-denied error after running the "docker info" command, either prefix the command with sudo, or follow the error-resolving steps given under the Running Docker Images section below.

Running Docker Images and Verifying the Process

To check whether you can access and download images from Docker Hub, run the following command:

  sudo docker run hello-world

If the docker run command returns an error like the one below, you can correct it using the following steps; otherwise proceed to the next step of checking the image.

  ERROR: docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'.

Create the docker group if it does not exist:

  sudo groupadd docker

Add your user to the docker group:

  sudo usermod -aG docker $USER

For example: sudo usermod -aG docker kanav

Run the following command, or log out and log back in (if that doesn't work, you may need to reboot your machine first):

  newgrp docker

Check that Docker can be run without root:

  docker run hello-world

If the problem persists, reboot the machine and run the command again.
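The usermod fix above only takes effect once your session picks up the new group. As a rough illustration (the helper name and the sample /etc/group line are made up), here is how membership can be checked by parsing a group entry of the form "name:passwd:gid:members":

```shell
# Sketch: check whether a user appears in the members field of a
# group entry shaped like /etc/group lines ("name:passwd:gid:m1,m2,...").
in_group() {
  user="$1"
  entry="$2"
  members=$(echo "$entry" | cut -d: -f4)
  # Surround with commas so we match whole names only
  case ",${members}," in
    *",${user},"*) echo "yes" ;;
    *) echo "no" ;;
  esac
}

in_group kanav "docker:x:998:kanav,alice"   # yes
in_group bob   "docker:x:998:kanav,alice"   # no
```

On a live system, `id -nG` or `groups` gives you the same answer directly.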
To list the downloaded images, use this command:

  sudo docker images

Uninstall Procedure

To identify which Docker packages are installed, use:

  dpkg -l | grep -i docker

To remove the Docker images and containers packages, the common command is:

  sudo apt-get purge docker-ce docker-ce-cli containerd.io

To completely uninstall Docker, use:

  sudo apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli
  sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce

To remove images, containers, volumes, and user-created configuration files, these commands can be used:

  sudo rm -rf /var/lib/docker /etc/docker
  sudo rm /etc/apparmor.d/docker
  sudo groupdel docker
  sudo rm -rf /var/run/docker.sock

Conclusion

If you found this Install Docker on Ubuntu blog relevant and useful, do check out the Docker Training workshop from KnowledgeHut, where you can get equipped with all the basic and advanced concepts of Docker!

Kanav Preet

Author

Kanav works as an SRE at a leading fintech firm, with experience in CI/CD pipelines, cloud, automation, and build, release, and deployment. She is passionate about leveraging technology to build innovative and effective software solutions. Her insight, passion and energy result in an engaged clientele who move ahead with her ideas. She holds certifications in Continuous Delivery & DevOps (University of Virginia), Tableau, Linux (Linux Foundation) and more.


Chaos Engineering

The 4th industrial revolution has swept the world. In just under a decade, our lives have become completely dependent on technology. The internet has made the world a smaller place, and day by day more industries are switching to online platforms. But this technology is still new, and emerging and developed economies alike are still trying to perfect the infrastructure and ecosystem needed to run these businesses online. This uncertainty makes failure more prevalent.

We regularly come across headlines like "Customers report difficulty in accessing bank mobile and online services", "Bank website down, not working", and "Service unavailable"; such unpredictability occurs with regular frequency. These outages and failures often happen in complex, distributed systems where several things fail at the same time, compounding the problem. Finding the bugs and fixing them can take anywhere from minutes to hours depending on the system architecture, causing not only loss of revenue to the company but also loss of customer trust. A system may be built to handle individual failures, but in big chaotic systems, the failure of components or processes can lead to severe outages. The term "Microservice Death Star" refers to an architecture that is poorly designed, with highly interdependent complex systems that are slow, inflexible, and can blow up and lead to failure.

(Image: structure of microservices at Amazon)

In the old world, systems were simpler due to monolithic architecture. It was easy to debug errors and consequently fix them, and code changes were shipped once a quarter or half-yearly. Today, architecture has changed a lot with migration to the cloud, where innovation and speed of execution have become part of the system. Systems now change not on the order of weeks and days, but on the order of minutes and hours.
Cloud-based microservice architectures give us a lot of advantages, but they come with complexity and chaos that can cause failure. It is an engineer's responsibility to make the system as reliable as it can be. Netflix's way of dealing with this problem taught us a better approach and gave birth to a new discipline, "Chaos Engineering". Let's discuss it below.

Chaos Engineering and its Need

As defined by a Netflix engineer: "Chaos engineering is the discipline of experimenting on a software system in production to build confidence in the system's capability to withstand turbulent and unexpected conditions" (reference link).

Chaos engineering is the process of stressing a software system by introducing disruptive events, such as server outages or API throttling. In this process, we introduce failure scenarios and faults to test the system's capability to survive unstable and unexpected conditions. It also helps teams simulate the real-world conditions needed to uncover hidden issues, monitoring blind spots, and performance bottlenecks that are difficult to find in distributed systems. This method is quite effective at preventing downtime and production outages before they occur.

The Need for Chaos Engineering: How Does It Benefit?

- Implementing chaos engineering improves the resilience of a system.
- By designing and executing chaos engineering experiments, we learn about weaknesses in the system that could lead to outages, which in turn can lose us customers.
- It helps improve incident response.
- It improves our understanding of the system's risks by exposing threats to the system.

Principles of Chaos Engineering

The term chaos engineering was coined by engineers at Netflix.
Chaos engineering experiments are designed around the following four principles:

1. Define the system's normal behaviour: First, define the steady state of the system in terms of measurable outputs that indicate normal behaviour.
2. Create a hypothesis: As in any experiment, we need a hypothesis to compare against a stable control group. If there is a reasonable expectation that a particular action will change the steady state of the system, the hypothesis states what effect that action should have.
3. Apply real-world events: Design and run experiments that introduce real-world events such as terminating servers, network failures, latency, dependency failure, and memory malfunctions.
4. Observe the results: Compare the steady-state metrics with the system's metrics after introducing the disturbance. For monitoring we can use CloudWatch, Kibana, Splunk, or any other tool that is already part of the system architecture. If there is a difference in the results, it can be used to anticipate future incidents and drive improvements. If there is no difference, it builds a higher degree of trust and confidence in the application among team members.

Difference Between Chaos Engineering and Testing

When we develop an application, we pass it through various tests: unit tests, integration tests, and system tests. With unit testing, we write a unit test case and check the expected behaviour of a component independent of all external components, whereas integration testing checks the interaction of individual and inter-dependent components. But even extensive testing does not give us a guaranteed error-free system, because such testing examines only pre-defined, single scenarios.
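The four principles above can be reduced to a toy experiment. Everything here is invented for illustration (the "metrics" are hardcoded HTTP status codes): define the steady state as a success rate, inject a fault, and compare the two measurements.

```shell
# Toy chaos experiment: steady state = percentage of HTTP 200 responses.
# The "fault" is simulated by hardcoding some 500s into the sample data.
success_rate() {
  ok=0; total=0
  for code in $1; do
    total=$((total + 1))
    [ "$code" = "200" ] && ok=$((ok + 1))
  done
  echo $((ok * 100 / total))
}

steady="200 200 200 200"     # responses observed before the experiment
faulted="200 500 200 500"    # responses observed after injecting the fault

success_rate "$steady"    # 100
success_rate "$faulted"   # 50
```

A real experiment would pull these numbers from the monitoring stack (CloudWatch, Kibana, Splunk) rather than hardcoding them; the comparison step is the same.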
The results don't uncover new information about the application's behaviour, performance, and properties. This uncertainty increases with microservice architectures, where the system grows over time. Chaos engineering, by contrast, generates a wide range of unpredictable outcomes by experimenting on a distributed architecture, to build confidence in the system's capability to withstand turbulent conditions in production. Chaos testing is a deliberate introduction of failures and faulty scenarios into our system to understand how the system will react and what its side effects could be. This type of testing is an effective method to prevent or minimize outages before they impact the system and, ultimately, the business.

Chaos Engineering Examples

There are many chaos experiments we can inject to test our system, depending mainly on our goals and system architecture. Below is a list of the most common chaos tests:

- Simulating the failure of a micro-component or dependency.
- Simulating a high CPU load or a sudden surge in traffic.
- Simulating the failure of an entire AZ (Availability Zone) or region.
- Injecting latency and byzantine failures into services.
- Exhausting memory on (cloud) instances and injecting other faults.
- Causing host failure.

List of Tools Developed by Netflix

The Netflix team created a suite of tools that support chaos engineering principles, named the Simian Army. The tools constantly test the reliability, security, and resiliency of its Amazon Web Services infrastructure.

Chaos Monkey: A tool used to test the resilience of the system. It works by disabling one production system and testing how the remaining systems respond to the outage.
It is designed to test system stability by enforcing failures and then checking the system's response. The name "Chaos Monkey" is explained in the book Chaos Monkeys by Antonio Garcia Martinez: "Imagine a monkey entering a 'data centre', these 'farms' of servers that host all the critical functions of our online activities. The monkey randomly rips cables, destroys devices, and returns everything that passes by the hand [i.e. flings excrement]. The challenge for IT managers is to design the information system they are responsible for so that it can work despite these monkeys, which no one ever knows when they arrive and what they will destroy." (reference link)

Latency Monkey: Tests the fault tolerance of a service by introducing communication delays to provoke outages in the network.

Doctor Monkey: Checks health status and related signals (such as CPU load) to detect unhealthy instances and eventually fix them.

Conformity Monkey: Finds instances that don't adhere to best practices, judged against a set of rules, and sends an email notification to the owner of the instance.

Janitor Monkey: Ensures the cloud environment is free of unused resources and clutter, and disposes of any waste.

Security Monkey: An extension of Conformity Monkey. It finds security violations or vulnerabilities, such as improperly configured AWS security groups, and terminates the offending instances.

Chaos Gorilla: Similar to Chaos Monkey, but drops a full Availability Zone during testing.

Chaos Engineering and DevOps

In DevOps and across the SDLC, applying chaos principles helps us understand the system's resilience against failure, which in turn helps reduce incidents in production.
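As a toy illustration of the Chaos Monkey idea, here is a sketch (function and instance names are made up) that picks one instance from a fleet to "terminate". A real Chaos Monkey chooses at random; this version derives the index from a seed string so the example is deterministic and runnable:

```shell
# Toy Chaos Monkey: choose a "victim" instance from a list.
# Deterministic stand-in for random choice: index = len(seed) mod count.
pick_victim() {
  seed="$1"; shift
  idx=$(( ${#seed} % $# + 1 ))   # 1-based index into remaining args
  eval "echo \${$idx}"
}

# "run-42" has length 6; 6 mod 3 = 0, so the first instance is picked
pick_victim run-42 i-abc123 i-def456 i-ghi789   # i-abc123
```

The interesting part of a real tool is not the picking but everything around it: opt-in scoping, scheduling within business hours, and recording what was killed so the team can correlate it with monitoring.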
There are also scenarios where we need to deploy software quickly; in all those cases we can apply chaos engineering to distributed, continuously changing, and complex development environments to find unexpected failures.

Advantages:

- Insights gained from chaos testing can reduce production incidents in the future.
- The team can verify the system's behaviour on failure and act accordingly.
- Chaos engineering exercises the team's response to incidents, including whether alerts are routed to the correct team.
- At a high level, chaos engineering improves overall system availability, and chaos experiments make the system more resilient to failures.
- Production outages can cause huge losses depending on how heavily the system is used, so chaos engineering helps prevent large losses in revenue.
- It improves the confidence and engagement of team members in carrying out disaster-recovery procedures and makes applications highly reliable.

Disadvantages:

- Implementing Chaos Monkey for a large-scale system and experimenting on it can increase costs.
- Carelessness or incorrect steps in designing and running experiments can impact the application, thereby hampering customers.
- It doesn't provide an interface to track and monitor experiments; it runs through scripts and configuration files.
- It doesn't support all kinds of deployment.

Conclusion

In the present world of the software development lifecycle, chaos engineering has become a magnificent tool that helps organizations not only improve the resiliency, flexibility, and velocity of their systems, but also operate distributed systems. Along with these benefits, it lets us remediate issues before they impact the system.
Chaos engineering is important and worth adopting for better outcomes. In this article, we have given a brief overview of chaos engineering and shown how it can provide new insights into a system. We hope it has provided you with valuable insights; this is an extensive field and there is a lot more to learn about it.

What is Blue Green Deployment?

Deployment is the process of updating code and performing other activities on a server to make software available for use. Demand for continuous deployment keeps increasing as teams try to stay current with software updates and provide users with a good-quality application experience. Many techniques are available for this; in this article, we will discuss Blue-Green Deployment.

What is Blue-Green Deployment? Blue-Green Deployment is a software release model that uses two identical production environments, Blue and Green, configured so that one environment is live while the other is in a staging (idle) state. The idea is to redirect traffic between two environments running different versions of the application. This eliminates downtime and reduces the risk associated with deployment: if any error occurs with the new version, we can immediately roll back to the stable version by swapping the environments. In some organizations, it is also termed Red-Black Deployment.

Working of Blue-Green Deployment: To implement Blue-Green deployment, there must be two identical environments, along with a router or load balancer that can direct traffic to the desired environment. In the image, we have two production environments, Blue and Green. The Blue environment runs the current version of the application (say, version 1) and is live; the router or load balancer (as per the infrastructure setup) directs all application traffic to Blue. Meanwhile, version 2 of the application is deployed and tested on the Green environment. At this stage, Blue is live and Green is idle, in the staging state. Once version 2 is tested and ready for production, we redirect traffic from the Blue environment to the Green environment, making Green live and Blue the staging environment.
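The router swap described above can be modelled in a few lines of Python. This is a toy model for illustration only; the class, environment names, and version strings are invented for the example.

```python
class BlueGreenRouter:
    """Toy router that sends all traffic to whichever environment is live."""

    def __init__(self, live, staging):
        self.live = live        # environment currently serving traffic
        self.staging = staging  # idle environment holding the new version

    def route(self, request):
        """Every request goes to the live environment."""
        return f"{self.live} handles {request}"

    def swap(self):
        """Cut traffic over to the staging environment (or roll back)."""
        self.live, self.staging = self.staging, self.live

router = BlueGreenRouter(live="blue (v1)", staging="green (v2)")
router.swap()  # green (v2) is now live, blue (v1) is on standby
router.swap()  # rollback: blue (v1) serves traffic again
```

Note that rollback is the same operation as cutover, which is what makes the model "fully reversible": the old version keeps running on the idle side until you choose to retire it.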
If any problem is detected with the infrastructure or application after version 2 goes live, we can roll back to the previous version simply by swapping the environments. Blue-Green deployment meets all the requirements of a seamless, safe, and fully reversible deployment, but some practices should be adopted for a smooth process, e.g. automating the workflow so that there is minimal human intervention, reducing the chance of manual error. It is also important to have monitoring in place for both environments.

Tools and Services for Set-Up: Depending on the infrastructure and application, various services such as Docker, Kubernetes, cloud platforms, and Cloud Foundry can be used to implement Blue-Green Deployment. Below, we discuss Blue-Green deployment on the cloud and the steps to implement it.

The Advent of Cloud in Blue-Green Deployment: Cloud computing has helped reduce the risks associated with deployment. Cloud utilities for infrastructure management, billing, and automation have made Blue-Green Deployment easier to implement, making it a quicker task at a lower cost.

AWS Services for Blue-Green Deployment: By using AWS for Blue-Green Deployment, we can access many services that help automate deployment and infrastructure, e.g. the AWS CLI, SDKs, ELB, Elastic Beanstalk, and CloudFormation. Among the solutions AWS provides are:
- DNS routing with Route 53
- Swapping an Auto Scaling group behind an ELB
- Using Elastic Beanstalk and swapping applications
- Blue-Green Deployment using AWS CodeDeploy
- Cloning a stack in OpsWorks and updating DNS

We will discuss using Elastic Beanstalk and swapping applications in detail. Elastic Beanstalk makes deployment easy.
Once we upload the application code with some version to Elastic Beanstalk and provide information about the application, it deploys the application in the Blue environment and provides its URL. The environment configuration is then copied and used to launch the new version of the application, i.e. the Green environment, with its own distinct URL. At this point, the application is up with two environments, but traffic is going only to the Blue environment. To switch to Green and serve traffic from it, we choose the other environment's details in the Elastic Beanstalk console and swap it using the Actions menu. Elastic Beanstalk then performs a DNS switch; once the DNS changes are done, we can terminate the Blue environment, and traffic will be redirected to the Green environment. To roll back, we invoke the swap again.

Steps to perform Blue-Green deployment in AWS:
1. Open the Elastic Beanstalk console in AWS and select the region where the environment should be set up.
2. Either launch a new environment or clone the existing environment.
3. Deploy and test the new application version: choose the environment from the list, click Upload and Deploy, and use the form to upload the source bundle.
4. On the overview page, choose Environment actions, then Swap environment URLs.
5. Under "Select an environment to swap", choose the environment name and click Swap.

Who Can Benefit from Blue-Green Deployments? Blue-Green Deployment provides minimal downtime and reliable deployment. It has become useful for development teams deploying applications, provided the following conditions hold:
- There are identical and isolated environments.
- A router or load balancer is available.
- The system works with continuous updates.

Different Types of Deployment: There are a number of deployment techniques used in the industry to deploy applications.
As a DevOps engineer, it is important to understand the different techniques available for your infrastructure and to choose the right one based on its impact on the end user.

Blue-Green Deployment: Provides high availability and rollback in case critical bugs are found. It consists of two environments running in parallel, one live and the other in staging, making the application free of downtime.

A/B Deployment: Similar to Blue-Green Deployment, with the difference that only a small amount of traffic is sent to the other environment. A/B deployment is generally used to measure the utilization of features in the application; it can also be used to gather user feedback on the new version.

Canary Deployment: Used when the full features of the application need to be released in subsets. Generally, in Canary, a set of servers is assigned to a different set of users. This technique is useful for deploying features while gathering feedback.

Rolling Deployment: The currently running servers are replaced with the new version one by one, in tandem. Pausing the deployment is much easier with this approach.

Advantages of Blue-Green Deployment:
- No-downtime deployment: Whenever a critical bug is found on the production server, traffic is redirected to the other environment, so the end user experiences no downtime.
- Standby: Whenever there is a system failure, we can immediately roll back and recover safely without disturbing the end user. Once we switch to the new version of the application, the older version is still available; in case of recovery, we can easily swap the environments and redirect traffic back to the old version.
Blue-Green deployment has proven effective at reducing risk in the application development process.
- Immediate rollback: If a new feature is not working properly, we can switch back to the older version of the application by performing a rollback.
- Testing in the production environment: Sometimes a new set of code works fine locally but becomes problematic when deployed on larger infrastructure. With Blue-Green Deployment, we can check the performance of the code on the production server without disturbing users.

Disadvantages of Blue-Green Deployment: Although many teams are heading toward Blue-Green Deployment, there are cases where it is not recommended, as it can involve risks that make deployment more prone to failure and breakdown:
- Database sync-up: Schema changes are complex to decouple. Database and data changes must be kept synchronized between the Blue and Green environments; with a relational database, this can lead to discrepancies.
- QA/UAT may miss failures: With large infrastructures, it is possible that QA test cases will not detect errors or bugs in the non-live environment.
- Dashboard required: Since there are two identical production environments running different versions of the code, it becomes important to monitor the packages and code in each at any point in time.
- Cost: Two sets of environments run in parallel at all times, which doubles the cost of production infrastructure and its maintenance.

Conclusion: Blue-Green deployment is one of the most favourable techniques for deploying applications. Since every deployment technique and application has its own pros and cons, the team should collaborate on choosing the right deployment technique for their application, according to the tools and services used to host it.
There is no fixed approach that suits and works in every scenario, so there should be extensive research before settling on any deployment technique.
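The Elastic Beanstalk swap described earlier can also be driven programmatically. The sketch below shows the core call, `swap_environment_cnames`, which is the boto3 Elastic Beanstalk operation behind the console's "Swap environment URLs" action; the environment names and the injected-client structure are assumptions for the example, and in real use you would pass `boto3.client("elasticbeanstalk")`.

```python
def swap_blue_green(client, live_env, staging_env):
    """Swap the CNAMEs of the live and staging Beanstalk environments.

    After the swap, traffic that previously resolved to `live_env`
    reaches `staging_env`, making it the new live environment.
    The client is injected so the logic can be exercised without AWS;
    in production: client = boto3.client("elasticbeanstalk").
    """
    client.swap_environment_cnames(
        SourceEnvironmentName=live_env,
        DestinationEnvironmentName=staging_env,
    )
    # Rolling back is the same call with the roles reversed.
    return staging_env, live_env  # (new live, new staging)
```

Because the rollback is just the same swap with the arguments reversed, wiring this into a deployment pipeline gives you the "fully reversible" property the article describes with a single, idempotent-to-undo operation.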