
Best Practices For Successful Implementation Of DevOps

What is DevOps?

DevOps is a combination of processes and philosophies built on four basic components: culture, collaboration, tools, and practices. Together, these produce a well-automated system and infrastructure that help an organisation deliver quality, reliable builds. The beauty of this culture is that it enables organisations to serve their customers better and compete more effectively in the market, while adding promised benefits such as confidence and trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.

“DevOps is not a goal, but a never-ending process of continual improvement.” – Jez Humble




Here are the key DevOps best practices that can help you implement DevOps successfully.

1. Understand your infrastructure needs: Before building the infrastructure, spend time understanding the application, then align your goals with its design; the implementation of DevOps should be business-driven. While studying the infrastructure, make sure you capture the components below:


  • Cycle time: Define your software cycle in a generic way so that you know its limitations and capabilities, and if there is any downtime, note exactly how long it lasts.
  • Versioning environments: While planning DevOps, always be ready with an alternative solution; versioning your environments lets you roll your plan out or back. If you have multiple, tightly coupled modules, you need a clean and neat plan to identify each and every patch and release.
  • Infrastructure as code: Capturing and managing your infrastructure as code addresses both of the needs above – minimizing cycle time and versioning environments. What you build should scale for the long run (see the sketch below).
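As a rough illustration of treating infrastructure as versioned code, here is a minimal sketch of a workflow that keeps infrastructure definitions in git and applies them with Terraform. The `infra` directory and the tagging convention are assumptions for the example, not something prescribed by this article.

```bash
#!/usr/bin/env bash
# Minimal infrastructure-as-code sketch: infrastructure definitions live in
# version control and every change is planned, reviewed and applied from there.
# Assumes git and Terraform are installed and ./infra holds the configuration.
set -euo pipefail

cd infra
git pull --ff-only                    # take the latest reviewed definition

terraform init -input=false           # fetch providers and modules
terraform plan -out=tfplan            # preview the change before applying it
terraform apply -input=false tfplan   # apply exactly the plan that was reviewed

# Tag the commit that produced this environment version so it can be rolled back to.
git tag -f "infra-$(date +%Y%m%d-%H%M%S)"
git push --tags
```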

2. Don’t jump-start: There is no need to automate the complete cycle in one shot; take a small piece, apply your approach, and get it validated. Once you feel the proof of concept is justified, start scaling up: create a complete pipeline and define a process, so that at any time you can go back and check what needs to improve and where. These small successes build confidence within your team and trust with stakeholders and customers.

“DevOps isn’t magic, and transformations never happen overnight.”

3. Continuous Integration and Continuous Deployment: If your team is not planning to implement continuous integration and continuous delivery, you are not really doing DevOps. I would even say the beauty of DevOps lies in how frequently your team can deliver without disruption and how much of the process is automated. Let’s take a use case: you and your team members work in an Agile team. In fact, there are multiple teams and multiple tightly coupled modules in which you are involved. Every day you work on your stories and, at the end of the day, you push a ‘private build’ of your work to verify that it builds and ‘deliver’ it to a team build server; the same applies to every other individual. In other words, you all ‘integrate’ your work in a common build area and produce an ‘integration build’. Doing these integrations and builds, and verifying them on a regular, preferably daily basis, is what is known as Continuous Integration.
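To make the ‘integration build’ concrete, here is a minimal sketch of a script a build server might run on every push or nightly. The Maven commands are an assumption for a Java service; substitute whatever build and test tooling your team actually uses.

```bash
#!/usr/bin/env bash
# Minimal continuous-integration sketch: pull everyone's integrated work,
# run a clean build with the full test suite, and publish the verified artifact.
# Assumes a Maven-based service; substitute your own build/test commands.
set -euo pipefail

git checkout main
git pull --ff-only origin main

mvn -B clean verify          # compile and run unit/integration tests

# Publish the artifact that passed verification so the deployment stage can use it.
mvn -B deploy -DskipTests
```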


Continuous Deployment doesn’t mean every change is deployed to production as soon as possible. It means every change is proven to be deployable at any time. It takes all the validated features and builds from CI and deploys them into the production environment. Here we can follow some of these practices: a) maintain a staging environment that emulates production; b) always deploy to staging first, then move to production; c) automate testing of features and non-functional requirements; d) automatically fetch version-controlled development artifacts.
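A minimal sketch of practices (a) to (d) might look like the following; `deploy.sh`, `smoke-test.sh` and the environment names are hypothetical placeholders for whatever your own pipeline provides.

```bash
#!/usr/bin/env bash
# Minimal continuous-deployment sketch: deploy a version-controlled CI artifact
# to a production-like staging environment, verify it, then promote it.
# deploy.sh, smoke-test.sh and the environment names are hypothetical placeholders.
set -euo pipefail

VERSION="$1"                                        # (d) artifact version built by CI

./deploy.sh --env staging --version "$VERSION"      # (a)(b) staging emulates production
./smoke-test.sh --env staging                       # (c) automated functional and non-functional checks

# Promote to production only after staging is green.
./deploy.sh --env production --version "$VERSION"
./smoke-test.sh --env production
```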

4. Define performance and do benchmarking: Always run some performance tests and get a collective benchmarking report for the latest build shared by your team, because this is what justifies the quality of your build and the infrastructure it requires.

For example: We ran a performance test a few days back and got good results; here are the details. We benchmarked our CFM machines because we have a global footprint, latency matters to us, and we need a CFM in the nearest region. With our current build we verified how many requests we could handle and found we were serving more than 200 RPS (requests per second). We then planned to check the build’s limits: we fired a large number of requests, noted the RPS at which the build crashed, and set up auto scaling for the CFMs. We could simply have upgraded the CFMs, but we chose auto scaling because the request volume was only an assumption and we did not want to spend money on capacity up front, while still being ready to absorb the experimental traffic. We then found that only 2 of the 7 CFMs were running at or just below their configured capacity (181 to 191 RPS). So we shared a report with the business team, suggesting they focus on the other regions where we had very little traffic but were paying the same amount.

Conclusion: Verifying our build gave our dev team good confidence, and sharing the report with the business team helped them plan their marketing strategies; meanwhile, we completed the auto scaling process as well.
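As a rough sketch of the kind of load test described above, the following uses Apache Bench (`ab`) to record requests per second at increasing concurrency; the target URL is a placeholder and the numbers will of course depend on your own build and infrastructure.

```bash
#!/usr/bin/env bash
# Rough benchmarking sketch with Apache Bench: fire a fixed number of requests
# at increasing concurrency levels and record the sustained requests per second.
# The target URL is a placeholder.
set -euo pipefail

URL="https://staging.example.com/health"

for CONCURRENCY in 50 100 200 400; do
  echo "=== concurrency: $CONCURRENCY ==="
  # 10,000 requests per level; keep only the throughput line from ab's report.
  ab -n 10000 -c "$CONCURRENCY" "$URL" | grep "Requests per second"
done
```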

5. Communicate and Collaborate: Collaboration and communication are the X-factors that help an organisation grow and assess itself for DevOps. Collaborating with the business and development teams helps the DevOps team understand how to design and define the culture. This speeds up development, operations, and even other teams like marketing or sales, allowing all parts of the organization to align more closely with goals and projects.



6. Start documenting: Document everything you do across the process and infrastructure, especially the reports, RCAs (root cause analyses), and change management. This lets you go back and check whether the issues you faced can be automated away in the next cycle or handled in other ways, smoothly and without interrupting your production environment.




7. Keep your eyes on cost burn: Experience shows that if you don’t keep an eye on cloud bills, they keep increasing and tend to grow in proportion to your business until you start looking for optimization. Run an audit every two months and evaluate your cloud computation to optimize it. Experiment with the infrastructure, because you should not spend more than 5 to 10% of your cost on cloud infrastructure even if you are completely dependent on it. Tools you can try: Reoptimize, Cloudyn, Orbitera, etc.
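As one hedged example of such an audit on AWS, the Cost Explorer CLI can break spend down per service for the last two months. This is only a sketch; it assumes the AWS CLI is configured and Cost Explorer is enabled on the account.

```bash
#!/usr/bin/env bash
# Cloud-cost audit sketch using the AWS Cost Explorer API via the AWS CLI:
# monthly unblended cost per service over the last two months.
set -euo pipefail

START=$(date -d "2 months ago" +%Y-%m-01)   # GNU date; adjust on macOS
END=$(date +%Y-%m-01)

aws ce get-cost-and-usage \
  --time-period Start="$START",End="$END" \
  --granularity MONTHLY \
  --metrics "UnblendedCost" \
  --group-by Type=DIMENSION,Key=SERVICE
```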


“If you are DevOps, you should account the no’s.”

8. Secure your infra: If your team follows certain compliance standards from day one, there is far less chance of compromising your data, and this can easily be enabled by providing a setup where you can verify your vulnerabilities. Before moving your build towards production, follow the standards at an early stage of development using properly configured tools such as SonarQube, Veracode, Codacy, CodeClimate, etc.
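For instance, a static-analysis gate with SonarQube can run before a build is promoted; the project key, server URL, and token below are placeholders, and the other tools from the list would be wired in similarly.

```bash
#!/usr/bin/env bash
# Early-stage quality/security gate sketch: run the SonarQube scanner against
# the source tree so issues are caught before the build moves towards production.
# Project key, server URL and token are placeholders.
set -euo pipefail

sonar-scanner \
  -Dsonar.projectKey=my-service \
  -Dsonar.sources=src \
  -Dsonar.host.url=https://sonarqube.internal.example.com \
  -Dsonar.login="$SONAR_TOKEN"
```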



9. Tool selection: Always select tools that are compatible with the rest of the toolchain you plan to use. You have to be careful here because you need to capture each and every request. Once you are done with tool selection, draft the metrics you intend to capture or that will help you debug. Start logging and monitoring them, and have clear definitions for those logs so you can determine that your processes are working as expected. Tools you can look at: Nagios, Grafana, Pingdom, Monit, OpsGenie, Observium, Logstash, etc.
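As a tiny sketch of “clear definitions for those logs”, the check below probes a service endpoint and writes a structured, greppable log line that a tool such as Logstash or Grafana could consume; the URL and log path are placeholders.

```bash
#!/usr/bin/env bash
# Minimal health-check sketch: probe an endpoint, log a structured line for
# later aggregation, and emit an alert on anything other than HTTP 200.
# The URL and log path are placeholders.
set -euo pipefail

URL="https://api.example.com/health"
LOG="/var/log/healthcheck.log"

STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$URL" || echo "000")

echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) url=$URL status=$STATUS" >> "$LOG"

if [ "$STATUS" != "200" ]; then
  echo "ALERT: $URL returned HTTP $STATUS" >&2
fi
```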

Tool chain for DevOps process:

(Image: Kafka consumer group topic lag)

“If you are not monitoring, you are not in production.”

Conclusion:

An organization that follows all the above best practices creates the right culture, which finally gets the ending it deserves, i.e. a DevOps organization. "A good DevOps organization will free up developers to focus on doing what they do best: write software," says Rob Steward, Vice President of Product Development at Progress Software. "DevOps should take away the work and worry involved in deploying, securing and running the software once it is written."


MD Zaid Imam

Project Manager

Md Zaid Imam is currently serving as Project Manager at Radware. With a zeal for project management and business analytics, Zaid likes to explore UI/UX, backend development, and DevOps. Playing a crucial role in his current job, Zaid has helped his team deliver world-class product features that cater to current industry requirements in the bot mitigation arena (processing 50 billion API calls per month). Zaid is a regular contributor on Hashnode.


Website : https://zaid.hashnode.dev

