
What is Blue Green Deployment?

Deployment is the process of updating code and carrying out the other activities on a server that make software available for use. Demand for continuous deployment keeps growing as teams try to stay current with software updates and give users a high-quality application experience. Many techniques exist for this; in this article, we will discuss Blue-Green Deployment.

What is Blue-Green Deployment?

Blue-Green Deployment is a software release model built on two identical production environments, Blue and Green, configured so that one environment is live while the other sits in a staging (idle) state. The idea is to redirect traffic between the two environments, each running a different version of the application. This approach eliminates downtime and reduces the risk that comes with deployment: if any error occurs in the new version, we can immediately roll back to the stable version by swapping the environments. In some organizations, it is also called Red-Black Deployment.

Working of Blue-Green Deployment:

To implement Blue-Green deployment, there must be two identical environments, plus a router or load balancer so that traffic can be routed to the desired environment.

Consider two production environments, Blue and Green. The Blue environment runs the current version of the application (say, version 1) and is live; the router or load balancer directs all application traffic to Blue, as per the infrastructure set-up. Meanwhile, version 2 of the application is deployed and tested on the Green environment. At this stage, Blue is live and Green is idle, in a staging state.

Once the code for version 2 is tested and ready for production, we redirect traffic from the Blue environment to the Green environment, making Green live and Blue the staging environment. If any problem is detected with the infrastructure or the application after version 2 goes live, we can roll back to the previous version simply by swapping the environments again.

Blue-Green deployment meets all the requirements of an ideal deployment: seamless, safe, and fully reversible. A few practices keep the process smooth, e.g. automating the workflow so that human intervention, and with it the chance of manual error, is minimized. It is equally important to have monitoring in place for both environments.

Tools and Services for Set-Up:

Depending on the infrastructure and application, various platforms such as Docker, Kubernetes, Cloud Foundry, or a public cloud can be used to implement Blue-Green Deployment. Below, we discuss Blue-Green deployment on the cloud and the steps to implement it.

The advent of Cloud in Blue-Green Deployment:

Cloud computing has helped reduce the risks associated with deployment. Cloud utilities for infrastructure management, billing, and automation make Blue-Green Deployment easier to implement, quicker, and cheaper.

AWS Services for Blue-Green Deployment:

By using AWS for Blue-Green Deployment, we get access to many services that help automate deployment and infrastructure, such as the AWS CLI, SDKs, ELB, Elastic Beanstalk, and CloudFormation.
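For a concrete sense of what the cutover looks like, here is a minimal sketch of a DNS-level switch using the AWS CLI and Route 53, one of the options listed below. The hosted-zone ID, record name, and environment endpoint are hypothetical placeholders:

# Repoint the application record at the Green environment's endpoint
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 60,
          "ResourceRecords": [{"Value": "green-env.us-east-1.elasticbeanstalk.com"}]
        }
      }]
    }'

Rolling back is the same call with the Blue endpoint as the record value. A short TTL keeps the switch (and any rollback) fast.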
AWS provides a number of solutions we can use, some of them being:

- DNS routing with Route 53
- Swapping an Auto Scaling group behind an ELB
- Using Elastic Beanstalk and swapping environments
- Blue-Green Deployment using AWS CodeDeploy
- Cloning a stack in OpsWorks and updating DNS

We will discuss using Elastic Beanstalk and swapping environments in detail.

Using Elastic Beanstalk and swapping environments

Elastic Beanstalk makes deployment easy. Once we upload the application code with some version to Elastic Beanstalk and provide information about the application, it deploys the application in the Blue environment and provides its URL. That environment configuration is then copied and used to launch the new version of the application: the Green environment, with its own, different URL. At this point, the application is up with two environments, but traffic is going only to the Blue environment. To switch to Green and serve traffic from it, we choose the other environment's details in the Elastic Beanstalk console and swap it using the Actions menu. Elastic Beanstalk then performs a DNS switch, and once the DNS changes are done, we can terminate the Blue environment; traffic is now redirected to the Green environment. For a rollback, we invoke the swap environment URL action again.

Steps to perform Blue-Green deployment in AWS:

1. Open the Elastic Beanstalk console in AWS and select the region where the environment should be set up.
2. Either launch a new environment or clone the existing environment.
3. Deploy and test the new application version: choose the environment and name from the list, click Upload and Deploy, and use the on-screen form to upload the source bundle.
4. On the Overview page, choose Environment actions and then Swap environment URLs.
5. Under the "Select an environment to swap" column, choose the environment name and click Swap.
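The same swap can also be scripted with the AWS CLI instead of the console. A minimal sketch, assuming two existing environments with the hypothetical names blue-env and green-env:

# Swap the CNAMEs of the two environments, redirecting traffic to Green
aws elasticbeanstalk swap-environment-cnames \
    --source-environment-name blue-env \
    --destination-environment-name green-env

# Rolling back is the same call in the opposite direction
aws elasticbeanstalk swap-environment-cnames \
    --source-environment-name green-env \
    --destination-environment-name blue-env

Scripting the swap also fits the earlier advice to automate the workflow and keep human intervention to a minimum.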
Who Can Benefit from Blue-Green Deployments?

Blue-Green Deployment provides minimal downtime and reliable releases. It has become a useful way for development teams to deploy applications, provided the following conditions hold:

- The environments are identical and isolated.
- A router or load balancer is available.
- The system can work with continuous updates.

Different Types of Deployment

A number of deployment techniques are used in the industry. As a DevOps engineer, it is important to understand the different techniques available for your infrastructure and to choose the right one based on its impact on the end user.

Blue-Green Deployment: provides high availability and rollback in case critical bugs are found. It consists of two environments running in parallel, one live and one in staging, which keeps the application free of downtime.

A/B Deployment: similar to Blue-Green Deployment, with the difference that only a small amount of traffic is sent to the second environment. A/B deployment is generally used to measure the utilization of features in the application, and also to gather user feedback on the new version.

Canary Deployment: used when we need to release the application's features to subsets of users, with sets of servers assigned to different sets of users. This technique matters when we need to deploy features while gathering feedback along the way.

Rolling Deployment: currently running servers are replaced with the new version a few at a time, in tandem. Pausing the deployment is much easier with this approach.

Advantages of Blue-Green Deployment

- No-downtime deployment: whenever a critical bug is found on the production server, traffic is redirected to the other environment, so the end user experiences no downtime.
- Standby: whenever there is a system failure, we can immediately roll back and recover safely without disturbing the end user. With Blue-Green deployment, the older version of the application remains available after we switch to the new one, so for recovery we simply swap the environments and direct traffic back to the old version. Blue-Green has proven effective at reducing risk in the application delivery process.
- Immediate rollback: if a new feature is not working properly, we can switch back to the older version of the application by performing a rollback.
- Testing in the production environment: code that works fine locally can become problematic when deployed on larger infrastructure. With Blue-Green Deployment, we can check the performance of the code on production servers without disturbing users.

Disadvantages of Blue-Green Deployment:

Although many teams are adopting Blue-Green Deployment, there are cases where it is not recommended, and it carries risks that can make a deployment more prone to failure:

- Database sync-up: schema changes are complex to decouple. With Blue-Green deployment, the database and data changes must stay synchronized between the Blue and Green environments; with a relational database, this can lead to discrepancies.
- QA/UAT may miss failures: on large infrastructures, QA test cases sometimes fail to detect errors or bugs in a non-live environment.
- Dashboard required: with two identical production environments running different versions of code, it becomes important to be able to monitor the packages and code in each environment at any point in time while the deployment is running.
- Cost: two sets of environments run in parallel at all times, which doubles the cost of running and maintaining production.

Conclusion:

Blue-Green deployment is one of the favoured techniques for deploying applications. Every deployment technique and application has its own pros and cons, so the team should collaborate on choosing the right deployment technique for their application based on the tools and services used to host it. No single approach suits every scenario; do extensive research before settling on any deployment technique.
Introduction to Docker, Docker containers & Docker Hub

Docker is a tool that makes creating, deploying, and running applications easier through the use of containers. Now, what are containers? They make it possible for developers to package an application with all the parts it needs, such as libraries and other dependencies. Docker assembles all of these and presents them as one package. The container gives the developer the assurance that the application will run on just about any Linux machine, no matter how far that machine's customized settings differ from those of the machine on which the code was written and tested.

Who is Docker for:

Docker is aimed at both developers and system administrators, which makes it a part of many DevOps (developers + operations) toolchains. The main benefit Docker carries for developers is that they can concentrate on their core job of writing code without having to bog themselves down with which system it will run on.

How Docker is useful in the IT industry:

The most vital use of the Docker Enterprise container platform is the value it offers a business by drastically bringing down its infrastructure and maintenance costs; it can do the same when migrating existing applications. Best of all, these savings start immediately upon installation, so it saves time as well.

Docker container:

Next, let us understand what a container is in Docker. We can think of it as a standard unit of software whose purpose is to package the code and all its dependencies together. It comes with everything an application needs to run: settings, code, system tools, runtime, and system libraries. The point of building a Docker container in this fashion is to help the application run quickly and dependably from one computing environment to another. A Docker container image has these characteristics:

- Lightweight
- Standalone
- Executable

In this sense, the container lies at the heart of Docker.

Docker containers that run on Docker Engine:

Let us get down to understanding the containers that power Docker Engine.

- Standardization: Docker containers were created according to the industry standard for containers, so that containers are portable.
- Lightweight: since containers share the machine's OS kernel, there is no need for an OS per application. This increases server efficiency and brings down server and licensing costs.
- Security: applications in containers are isolated from each other, and Docker comes with industry-best default isolation capabilities.
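Before listing the individual commands, here is a minimal end-to-end session illustrating the container lifecycle; the image and container names are placeholders:

# Pull an image and start a container from it in the background
docker pull nginx
docker run -d --name web nginx

# List running containers, open a shell inside one, then stop it
docker ps
docker exec -it web bash
docker stop web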
Let us explain a few common Docker commands:

- docker run – runs a command in a new container
- docker start – starts one or more stopped containers
- docker stop – stops one or more running containers
- docker build – builds an image from a Dockerfile
- docker pull – pulls an image or a repository from a registry
- docker push – pushes an image or a repository to a registry
- docker export – exports a container's filesystem as a tar archive
- docker exec – runs a command in a running container
- docker search – searches Docker Hub for images
- docker volume – creates volumes and attaches them to containers to store data
- docker network – attaches a container to as many networks as you like; you can also attach an already running container
- docker attach – attaches to a running container
- docker commit – creates a new image from a container's changes

Two related concepts are worth defining here:

Docker daemon – the Docker daemon (dockerd) listens for Docker API requests and manages Docker objects: networks, volumes, containers, and images. It also communicates with other daemons to manage Docker services.

Docker images – an image is a read-only template with instructions for creating a Docker container. Images are often based on other images, with some degree of customization; an image based on ubuntu might install the Apache web server, your application, and the configuration details the application needs to run.

Understanding Docker Hub Registry

Docker Hub Registry is a cloud-based registry service that allows the user to:

- Link to code repositories
- Build images and test them
- Store manually pushed images
- Link to Docker Cloud to help deploy images to a host

In summary, the Docker Hub Registry offers a centralized resource for discovering container images, managing distribution and change, facilitating collaboration between users and teams, and automating workflows throughout the development pipeline.

Ref URL: https://docs.docker.com/docker-hub/

Create a Docker Hub account at https://hub.docker.com/, then try the following.

Pull a Docker image:
docker pull ubuntu

Pull a Docker image at an older version:
docker pull ubuntu:16.04

Create a custom tag for the Docker image:
docker tag ubuntu:latest admin/ubuntu:demo

Log in to your Docker Hub registry:
docker login

Push the custom image for testing:
docker push admin/ubuntu:demo

Remove all images on the Docker server:
docker image rm -f <image>

Pull your custom image from your Docker Hub account:
docker pull admin/ubuntu:demo
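Building and pushing your own image follows the same pattern. A minimal sketch, assuming a Docker Hub account named admin (hypothetical, as above) and a trivial two-line Dockerfile in the current directory:

# Dockerfile contents (shown as comments):
#   FROM ubuntu:16.04
#   RUN apt-get update && apt-get install -y curl

# Build the image with a repository tag, authenticate, and push it
docker build -t admin/ubuntu:custom .
docker login
docker push admin/ubuntu:custom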
Installing Docker on the Amazon Web Services (AWS) cloud:

Why Amazon Web Services:

AWS is a highly preferred cloud service. It holds a position of primacy in the global cloud services market for the following reasons:

- Market pioneer
- Strong customer trust
- Cost-effectiveness
- Ease and affordability of building a storage system with no need to estimate usage up front
- Suitability for small businesses, since it is ideal for building a business from the bottom up

Advantages of AWS:

- Ease of use
- Agility
- Security
- Reliability
- Services without capacity limits
- Cost-effectiveness
- Flexibility
- 24x7 support

Steps to install Docker on Amazon Linux:

1. You need an AWS account: create one at https://aws.amazon.com and log in to the console.
2. Choose the EC2 service from the console.
3. Click Launch Instance and choose an Amazon Linux AMI EC2 server marked Free Tier Eligible.
4. Choose the free-tier-eligible t2.micro instance type.
5. Configure instance details such as region, subnets, and VPC.
6. Add storage. By default this is 8 GB; it can be modified after launching the instance.
7. Create security groups, check that port 22 is open to allow SSH connections, and add any other inbound ports you need.
8. Review the instance details and click Launch.
9. Create a new key pair, or reuse an existing one; download it and click Launch Instance.
10. Convert the key pair from a .pem file to a .ppk file using PuTTYgen. PuTTYgen and PuTTY can be downloaded from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html.
11. Log in to the EC2 instance using PuTTY and the instance's public IP address: click SSH in the right panel, click Auth, and add the .ppk key pair for the login.
12. When logging in with a new key pair you will get a security alert; click Yes and log in as "ec2-user". To become root, run "sudo su -".
13. Update the packages for security with "sudo yum update -y".
14. Install Docker on Amazon Linux with "sudo yum install docker -y", then check the Docker version with "docker version".
15. Start Docker with "sudo service docker start" and check its status with "sudo service docker status".
16. Download any Docker image using "docker pull", and check that a container is running with "docker ps".
17. To log in to a Docker container, use "docker exec -it --user root <container-id> bash".
18. Check current and stopped containers with "docker ps -a", and the downloaded images with "docker images".

Conclusion:

Docker is a tool that makes creating, deploying, and running applications much easier by packaging them in containers. It is of high value to both developers and system administrators, who can focus on their core work of writing code without worrying about the system it will run on. Docker Enterprise is of immense value to the IT industry, as it brings down maintenance and infrastructure costs; it can be deployed immediately and migrated easily.

DevOps Institute recognizes KnowledgeHut as their Premier Partner

KnowledgeHut, a global leader in the workforce development industry, has been accredited with Premier Partner status by the DevOps Institute. The accreditation reinforces KnowledgeHut's global outreach and scale, and positions the organization for larger engagements.

The DevOps Institute is a worldwide association that helps DevOps professionals advance in their careers. The institute has 200 partners around the world. As part of its Global Education Partner Program, the DevOps Institute has announced three partnership tiers: registered, premier, and elite. Premier partners are organizations that have embraced the curriculum and certification portfolio and have provided insights and feedback into the direction of service offerings, further raising the awareness of DevOps in their respective regions.

KnowledgeHut endeavors to educate the market in India and other geographies around the holistic framework of Skills, Knowledge, Ideas, Learning (SKIL) to advance DevOps and improve organizational efficiency. Find out more about our aligned DevOps courses here.

Installation Guide to Jenkins

Jenkins is a Java-based open-source automation tool with plugins designed for continuous integration. Jenkins is used to continuously build and test software projects, helping developers integrate changes into a project and making it simpler for users to obtain a fresh build. It allows developers to quickly locate and resolve defects in a code base and to test their builds automatically. Jenkins can be modified and extended readily on all operating platforms and various devices, whether OS X, Windows, or Linux. It deploys code immediately and produces test reports, and it can be configured to meet the demands of continuous integration and continuous delivery.

System Requirements for Jenkins Installation

Following are the software and hardware requirements for installing Jenkins.

Minimum hardware requirements:
- 256 MB of RAM
- 1 GB of drive space (although 10 GB is a recommended minimum if running Jenkins as a Docker container)

Recommended hardware configuration for a small team:
- 1 GB+ of RAM
- 50 GB+ of drive space

Installation on Windows

You must first install the JDK; Jenkins supports only JDK 8 at this time. Once Java is running, Jenkins can be installed. Download the recent Jenkins package for Windows (presently version 2.191), and click on the Jenkins exe file to unzip it into a folder.

1. To begin the installation, click "Next."
2. To install Jenkins in another directory, click the "Change..." button. In this instance, we will keep the default choice and click "Next."
3. To begin the installation process, click the "Install" button and wait while the installation is processed.
4. When finished, complete the setup by clicking the "Finish" button.
5. The browser will automatically be redirected to the local Jenkins page, or you can paste the URL http://localhost:8080 into the browser yourself.
6. To unlock Jenkins, copy and paste the password from the file C:\Program Files (x86)\Jenkins\secrets\initialAdminPassword, then click "Continue."
7. You can install either the suggested plugins or plugins you select yourself. To keep it simple, we will install the suggested plugins; wait for the plugin installation to complete.
8. The next step is to create a Jenkins admin user. Enter your information and click "Save and Continue."
9. To finish the Jenkins setup, click "Save and Finish," then click "Start using Jenkins."

Below is the default page of Jenkins.
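Whichever platform you install on, Jenkins requires JDK 8, so it is worth confirming the active Java version before running the installer. A quick check from any command prompt (the exact output format varies by JDK vendor):

# Should report a 1.8.x version for JDK 8, for example:
#   java version "1.8.0_221"
java -version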
Jenkins Installation on a Linux/CentOS 7 system

Make sure that you are signed in as a user with sudo privileges before continuing with this tutorial.

The first step is to install Java, Jenkins being a Java application. To set up the OpenJDK 8 package, execute the following command:

$ sudo yum install java-1.8.0-openjdk-devel

Jenkins does not currently support Java 10 (or Java 11). Make sure that Java 8 is the default Java version when multiple Java versions are installed on your computer.

The next step is to enable the Jenkins repository. To do so, use the following curl command to fetch the repository definition:

$ curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | sudo tee /etc/yum.repos.d/jenkins.repo

Then import the GPG key into your system with:

$ sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

Once the repository is enabled, install the latest stable Jenkins version by typing:

$ sudo yum install jenkins

When the installation completes, start the Jenkins service with:

$ sudo systemctl start jenkins

To verify that it started successfully, check with the command below:

$ systemctl status jenkins

You should see something like this:

Output
jenkins.service - LSB: Jenkins Automation Server
Loaded: loaded (/etc/rc.d/init.d/jenkins; bad; vendor preset: disabled)
Active: active (running) since Thu 2018-09-20 14:58:21 UTC; 15s ago
Docs: man:systemd-sysv-generator(8)
Process: 2367 ExecStart=/etc/rc.d/init.d/jenkins start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/jenkins.service

Finally, enable the Jenkins service to start at system boot:

$ sudo systemctl enable jenkins

Output
jenkins.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig jenkins on

Opening the firewall port

If you are installing Jenkins on a remote CentOS server protected by a firewall, port 8080 must be opened. To open the required port, use the following commands:

$ sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
$ sudo firewall-cmd --reload

Setting up Jenkins

To set up your fresh Jenkins installation, open your browser and type in your domain or IP address followed by port 8080:

http://your_ip_or_domain:8080

A screen will be displayed prompting you to enter the admin password generated during setup. To print the password on your terminal, use the following command:

$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

The alphanumeric password is 32 characters long, as shown below:

Output
3226*****************************

Copy the password from your terminal, paste it into the Administrator password field, and click Continue. On the next screen, you are asked whether you would like to install the suggested plugins or pick specific ones; to begin the installation instantly, just click the Install suggested plugins box. When the installation is finished, you are prompted to set up the first administrative user: fill in all the necessary data and click Save and Continue. On the next page, the URL for the Jenkins instance is requested; an automatically generated URL will already be filled into the URL field. To finish the configuration, click the Save and Finish button to confirm the setup. Finally, click the Start using Jenkins button; the admin user created in one of the previous steps can now log in to the Jenkins dashboard. When you have reached this point, you have successfully installed Jenkins on your CentOS system.
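For convenience, the CentOS installation steps above can be run as one sequence; this is a minimal consolidation of the same commands, with nothing new added:

# Install Java 8, enable the Jenkins repository, and install Jenkins
sudo yum install -y java-1.8.0-openjdk-devel
curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | sudo tee /etc/yum.repos.d/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install -y jenkins

# Start Jenkins now and on every boot
sudo systemctl start jenkins
sudo systemctl enable jenkins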
Jenkins Installation on Mac

Prerequisites:
- A Mac machine with Mac OS X Yosemite or higher, with admin access
- A Java Development Kit installed on the machine
- Access to a remote repository such as Git or SVN

Download the Jenkins installer .pkg file from the official Jenkins website and go through the setup wizard. The Jenkins setup wizard creates a separate Jenkins user on your system, so we also need to make some changes in the 'Users & Groups' section. Follow the steps below:

1. Open 'System Preferences -> Users & Groups'.
2. Click on the lock icon in the bottom left corner, which reads 'Click the lock to make changes', and enter your login password.
3. Under the 'Other Users' section you may see a user without any name but with admin rights. This is our Jenkins user; let's rename it.
4. Right-click the unnamed user and select Advanced Options to see all the details. Set the 'Full name' to Jenkins and press OK.
5. Click on 'Reset Password', enter a new password, and make sure you remember it. Our Jenkins user is now almost ready; it is just like any other Mac user with admin rights.
6. Click the lock again to save the changes, then restart your Mac and log in as the Jenkins user with the password you just set.

On localhost, Jenkins listens on port 8080. Open your browser, go to localhost:8080, and complete the initial set-up, which consists of installing some plugins and creating an account for security purposes.

Setting Jenkins up as a launch agent

By default, Jenkins runs as a daemon: a non-interactive background process that operates system-wide and is not linked to a particular user. Much of CI involves running simulators and other GUI apps, so another option is required. To resolve this, you can run Jenkins as a launch agent, which operates behind the scenes on behalf of a user. To change how the Jenkins process is started, you need to move its settings file so that it is loaded differently at boot.

Enter the command below to unload Jenkins as a daemon:

sudo launchctl unload /Library/LaunchDaemons/org.jenkins-ci.plist

Next, move the .plist file, which defines how Jenkins runs, into the LaunchAgents folder:

sudo mv /Library/LaunchDaemons/org.jenkins-ci.plist /Library/LaunchAgents/

Start Jenkins again, and it will now run as a launch agent.
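The article does not spell out the command for that final restart. Assuming the standard launchctl workflow (verify the plist path on your machine), it would look like this:

# Load the relocated plist as a launch agent, then confirm it is registered
launchctl load /Library/LaunchAgents/org.jenkins-ci.plist
launchctl list | grep jenkins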

How to Install Docker on Windows, Mac, & Linux: A Step-By-Step Guide

Docker is intended to benefit both developers and system administrators, which makes it a component of many DevOps (developers + operations) toolchains. It means developers can concentrate on writing code without worrying about the system it will eventually run on, and gives them the opportunity to use any of the thousands of programs already designed to run in a Docker container as part of their application. For the operations team, Docker offers flexibility and reduces the number of systems required, thanks to its small footprint and lower overhead.

Let's now dive into the installation steps for Docker on different platforms.

Install Docker on Windows

Docker Desktop for Windows is the community version of Docker for Microsoft Windows. Download it from Docker Hub.

System Requirements

The software and hardware requirements needed to operate Client Hyper-V on Windows 10 effectively are:

Software requirements:
- Windows 10 64-bit: Pro, Enterprise, or Education
- The Windows Hyper-V and Containers features must be enabled

Hardware requirements:
- A 64-bit processor with second-level address translation (SLAT); hardware-level virtualization support for Client Hyper-V must be enabled in the BIOS settings
- Minimum 4 GB RAM

Microsoft Hyper-V is required to run Docker Desktop. The Docker Desktop installer for Windows enables Hyper-V and restarts your computer if needed. Note that VirtualBox no longer operates when Hyper-V is enabled; all VirtualBox VM images are kept, but Docker Toolbox VMs (including the default one generated during Toolbox installation) will no longer start, and Docker Desktop cannot use these VirtualBox VMs side by side. You can still use docker-machine to manage remote VMs.

What is included in the installation?

The Docker Desktop installation includes Docker Engine, the Docker CLI, Docker Compose, Docker Machine, and Kitematic. Docker Desktop containers and images are shared among all user accounts on the machine where it is installed; all Windows accounts build and run containers using the same VM. Nested virtualization setups, such as running Docker Desktop inside VMware or Parallels, might work; see "Running Docker Desktop in nested virtualization scenarios" in the documentation for more details.

Installation steps

1. To install Docker Desktop on Windows, double-click Docker Desktop Installer.exe to run the installer. If you have not previously downloaded the installer (Docker Desktop Installer.exe), it can be obtained from Docker Hub. It typically lands in your Downloads folder, or it can be launched from the recent-downloads bar at the bottom of your web browser.
2. Follow the installation wizard directions to accept the license, authorize the installer, and proceed with the installation. If prompted, authorize with your system password during installation; privileged access is needed to install the networking components, the links to the Docker applications, and the Hyper-V VM management.
3. Click Finish in the setup window and launch the Docker Desktop application.
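Once Docker Desktop reports that it is running, a quick smoke test from any terminal confirms the installation; hello-world is Docker's standard test image:

# Show client and server versions; both should respond
docker version

# Pull and run the official test image
docker run hello-world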
Start Docker Desktop

Docker Desktop does not start automatically after installation. To start it, search for Docker and select Docker Desktop in the search results. When the whale icon in the status bar stays steady, Docker Desktop is up and running and accessible from any terminal window. After the app is installed, you also get a pop-up message with the next steps and a link to the documentation. When initialization is done, click the whale icon in the notifications area and pick About Docker to check that you have the latest version.

Install Docker on Mac

The very first step is to download Docker Desktop for Mac. Get it from the downloadable link: Download from Docker Hub.

System Requirements

Docker Desktop for Mac starts only when all of these requirements are met:

- Mac hardware must be a 2010 model or newer, with Intel hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check for this support by running the following command on your machine: sysctl kern.hv_support
- macOS Sierra 10.12 and newer versions of macOS are supported; upgrading to the newest version of macOS is recommended.
- VirtualBox versions before 4.3.30 must not be installed, as they are incompatible with Docker Desktop on Mac. It is fine to have a newer VirtualBox version installed.

Installation steps

1. Double-click Docker.dmg to open the installer, and drag the whale (Moby) to the Applications folder.
2. In the Applications folder, double-click Docker.app to launch Docker.
3. After starting Docker.app, you are asked to authorize it with your system password; privileged access is required to install the Docker app's links and networking components.
4. The whale in the top status bar shows that Docker is running and accessible from a terminal.
5. If you have just installed the app, you will also see a success message with the next steps and a link to the documentation. Click the whale in the status bar to dismiss this pop-up.
6. Click the whale (whale menu) to get Preferences and other options, and select About Docker to check that you have the latest version.

Notes: the Getting Started guide provides an overview of Docker Desktop for Mac, basic Docker command examples, how to get help or give feedback, and links to all topics in the Docker Desktop for Mac guide. The Troubleshooting guide describes common problems and workarounds, how to run and submit diagnostics, and how to file issues.
Install Docker on Linux

Let's use Ubuntu as the example for installing Docker. If you don't already have one, you can use Oracle VirtualBox to set up a virtual Linux instance; a plain Ubuntu server installed on VirtualBox, with an OS user named demo that has full root access, is all that is needed.

Step 1 − Before installing Docker, first make sure you have the correct Linux kernel version running. Docker is only intended for Linux kernel version 3.8 or higher. We can check this with the command below; uname returns system information about the Linux system, including the kernel name, kernel release, and kernel version.

uname -a

The -a option ensures that the full system information is returned.

Step 2 − Update the OS with the latest packages, which are downloaded from the internet, via the following command (sudo ensures the command runs with root access, and the update option refreshes all package lists on the Linux system):

sudo apt-get update

Step 3 − The next step is to install the certificates needed later to download the required Docker packages from the Docker site. The following command can be used:

sudo apt-get install apt-transport-https ca-certificates

Step 4 − The next step is to add the fresh GPG key, which guarantees that the required Docker packages are all signed. The command downloads the key with ID 58118E89F3A912897C070ADBF76221572C52609D from hkp://ha.pool.sks-keyservers.net:80 and adds it to the adv keychain; note that this specific key is needed to download the necessary Docker packages:

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Step 5 − Next, depending on the version of Ubuntu you hold, add the appropriate entry to the apt package manager's docker.list so that it can detect and download the Docker packages from the Docker site:

- Precise 12.04 (LTS): deb https://apt.dockerproject.org/repo ubuntu-precise main
- Trusty 14.04 (LTS): deb https://apt.dockerproject.org/repo ubuntu-trusty main
- Wily 15.10: deb https://apt.dockerproject.org/repo ubuntu-wily main
- Xenial 16.04 (LTS): deb https://apt.dockerproject.org/repo ubuntu-xenial main

echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list

Step 6 − Next, update the packages on the Ubuntu system with the apt-get update command.

Step 7 − If we want to make sure the package manager points at the correct repository, we can issue the apt-cache command:

apt-cache policy docker-engine

Step 8 − Run the apt-get update command again to guarantee that all local system packages are up to date.

Step 9 − For Ubuntu Trusty, Wily, and Xenial, the linux-image-extra-* kernel packages are required; they allow the use of the aufs storage driver, which newer versions of Docker use. Install them with the following command:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Step 10 − Installing Docker is the final step, and this can be done with the following command:

sudo apt-get install -y docker-engine

Here, apt-get uses the install option to download and install Docker from the Docker site; docker-engine is the Docker Corporation's official package for Ubuntu-based systems.

The running Docker version can be checked with the command below:

docker version
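Putting the steps together for Ubuntu Trusty 14.04, the whole installation reduces to the following sequence. This consolidates only the commands already shown above; note that the apt.dockerproject.org repository reflects the Docker releases this article covers, and newer releases use a different repository:

# Prerequisites and the Docker GPG key
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 \
    --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

# Register the repository, refresh, and install the engine
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install -y docker-engine

# Confirm the installation
docker version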

11 Top Features of Docker That You Must Know

Docker is an open platform to develop, ship, and run application containers on a common operating system. It enables you to separate applications from infrastructure so that software is delivered quickly. Infrastructure can be managed by Docker in the same way as applications are. The delay between writing code and running it in production can be significantly reduced with the help of Docker's methodologies for quick shipping, testing, and deployment of code.

Features of Docker:

Docker provides various features, some of which are listed and discussed below.

- Faster and easier configuration
- Application isolation
- Increase in productivity
- Swarm
- Services
- Routing mesh
- Security management
- Rapid scaling of systems
- Better software delivery
- Software-defined networking
- The ability to reduce the size

1. Faster and easier configuration: one of the key features of Docker is that it helps you configure the system faster and more easily. Code can therefore be deployed in less time and with less effort. Because Docker is used with a wide variety of environments, the infrastructure is not tied to the environment of the application.

2. Application isolation: Docker provides containers that run applications in an isolated environment. Since each container is independent, Docker can execute any kind of application.

3. Increase in productivity: Docker increases productivity by easing technical configuration and speeding up application deployment. Besides providing an isolated environment to execute applications, it reduces resource usage as well.

4. Swarm: Swarm is a clustering and scheduling tool for Docker containers. At the front end it uses the Docker API, which lets us control it with the usual tools. It is a self-organizing group of engines that enables pluggable back ends.

5. Services: services are lists of tasks that specify the state of containers inside a cluster. Each task in a service describes one instance of a container that should be running, and Swarm schedules them across the nodes.

6. Routing mesh: the routing mesh routes incoming requests for a published port to an active container in the swarm, regardless of which node it runs on.

7. Security management: Docker saves secrets into the swarm and lets you choose which services get access to which secrets, with engine commands such as secret inspect and secret create.

8. Rapid scaling of systems: containers require less computing hardware and get more work done. They allow data-centre operators to cram more workload into less hardware, which means shared hardware and lower costs.

9. Better software delivery: software delivery with containers is more efficient. Containers are portable and self-contained, and include an isolated disk volume that travels with the container as it develops and is deployed to various environments.

10. Software-defined networking: Docker supports software-defined networking. The Docker CLI and Engine let operators define isolated networks for containers without touching a single router. Developers and operators can design systems with complex network topologies and define the networks in configuration files. Since an application's containers can run in an isolated virtual network, with controlled ingress and egress paths, this acts as a security benefit as well; a minimal example follows this list.

11. The ability to reduce the size: since containers need only a small footprint of the OS, Docker can reduce the size of a deployment.
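Feature 10 above is easy to see in practice. A minimal sketch of an isolated, software-defined network; the network and container names are placeholders:

# Create an isolated bridge network and attach two containers to it
docker network create --driver bridge app-net
docker run -d --name api --network app-net nginx
docker run -d --name worker --network app-net nginx

# Inspect the network to see both containers attached; containers on
# app-net can discover each other by name, while containers outside it
# have no route in
docker network inspect app-net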
Who is Docker for?

Docker as a tool benefits both developers and system administrators, and hence is a part of various DevOps (developers + operations) toolchains. It helps developers focus on writing code without worrying about the system it will run on. Moreover, they can make use of one of the thousands of programs already designed to run in a Docker container as part of their applications and get a head start. For operations, Docker provides flexibility and reduces the number of systems needed, due to its lower overhead and small footprint.

To Sum Up…

We have discussed the top 11 Docker features that help it stand out from the crowd and give it its huge popularity. Docker is popular because it has revolutionized development in the software industry, creating vast economies of scale. Containers and Docker hold the potential to open up new opportunities for your enterprise.