What is DevOps

  • by MD Zaid Imam
  • 30th Nov, 2018
  • Last updated on 16th Apr, 2021
  • 9 mins read

You have landed here, which means you want to know more about DevOps, and hey, you can admit it! The business community has been taking this trend to the next level, not because it looks fancy but because the process has proven its worth: the growth and adoption of this disruptive model are increasing companies' productivity. So here we will get an idea of how this model works and how we can enable it across an organization. According to DevOps.com, between 2015 and 2016 the adoption rate of DevOps increased significantly, with a rise of 9 per cent in its usage rate. You may have a look at the DevOps Foundation training course to know more about the benefits of learning DevOps.

1. What is DevOps

DevOps is a practice culture, a union of people, process, and tools, that enables faster delivery. This culture helps automate processes and narrow the gap between development and IT. DevOps is an abstract notion that focuses on key principles like speed, rapid delivery, scalability, security, collaboration, monitoring, etc.

A definition from Gartner says:

“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology — especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”

2. History of DevOps

Let's talk about a bit of cloud computing history. The term “cloud computing” was coined in early 1996 at Compaq Computer, but it took a while for the platform to become easily accessible, even though they claimed it was a $2 billion-a-year market. In August 2006 Amazon introduced cloud infrastructure that was easily accessible, and it became a trend; Google followed in April 2008, Microsoft in early 2010, and IBM in April 2011. This showed that all the giants believed strongly in this revolution and saw potential in the technology.

In the same era, DevOps was pitched at a Toronto conference in 2008 by Patrick Debois and Andrew Shafer. They proposed that a better approach could be adopted to resolve the long-standing conflicts between development and operations. The idea boomed again when two Flickr employees delivered a seminar on how they were able to push 10+ deployments a day. They came up with a proven model to resolve the conflicts between dev and operations: build, test and deploy should be one integrated development and operations process.

3. Why the market has adopted DevOps so aggressively

Louis Columbus mentioned in Forbes that by the end of 2020, 83% of workloads will run on the cloud, with AWS and Google as the major market contributors. The new era is working more on AI, ML, crypto, big data, etc., which plays a key role in cloud computing adoption today; at the same time, IT professionals say that security is their biggest concern in adopting cloud computing. Moreover, the cloud helped many startups grow in their initial stages and later become leaders in their marketplaces, which has given confidence to fresh ideas.


This entire cloud enablement has given teams more confidence to adopt the DevOps culture, as cloud expansion allows them to experiment more with less risk.

4. Why is DevOps used?

  • To reduce the day-to-day manual work done by the IT team
  • To avoid manual errors while designing infrastructure
  • To enable a smooth platform for technology or cloud migration
  • To gain amazing version control capabilities
  • To handle resources better, whether cloud infra or manpower
  • To pitch customers a realistic feature rollout commitment
  • To adopt a better infra scaling process, even when you receive 4X traffic
  • To build a stable infrastructure

5. Business need and value of DevOps

Let's understand this through the story of Netflix, a leading online video streaming platform. In 2000, a company called Blockbuster LLC had the opportunity to buy Netflix for $50 million. Netflix was then only a DVD-by-mail service, but in 2016 Netflix did $8.83 billion of business, tremendous growth in this vertical. Any idea how this happened? It started with an incident at Netflix where, due to a database corruption, DVD shipping was disrupted for three days, which forced management to move from the relational systems in their data centers to the cloud, because the incident had made a massive impact on core values. The shift happened from vertical to horizontal scaling, with AWS later providing the cloud service; I have even read that in the early stages they sat with the AWS team and worked together to scale the infrastructure. Today, Netflix serves roughly 86,000,000 members across 180 countries with about 150,000,000 hours of video content.

6. Goal Of DevOps

  • Control quality and increase the frequency of deployments
  • Allow risk-free experimentation
  • Improve mean time to recovery and backup
  • Handle release failures without losing live data
  • Avoid unplanned work and technical failure
  • Achieve a compliant process and control over the audit trail
  • Alert and monitor system failures at an early stage
  • Maintain SLAs in a uniform fashion
  • Enable control for the business team

7. How Does DevOps work

The DevOps model usually keeps the development and operations teams tightly coupled; sometimes they are merged outright, and together they roll out the entire release cycle. Sometimes development, operations, and the security & network teams are all involved, and this has slowly been coined DevSecOps. The integration of these teams ensures they can crack development, testing, deployment, infrastructure provisioning, monitoring, network firewalling, and infrastructure accessibility and accountability. This helps them build a clean application development lifecycle to deliver a quality product.

8. DevOps workflow/Lifecycle


The DevOps workflow ensures that we spend time on the right things, meaning the right time is invested in building the product and infrastructure. How it enables this can be analyzed in the diagrams below. At first glance the DevOps process looks like an extended version of agile methodology, but that doesn't mean it cannot fit other SDLC methodologies; there is enough scope in other SDLC processes as well. Once we merge the process and tools workflow diagrams, they showcase a general DevOps environment. The team keeps pushing releases while, by enabling automation and tools, we try to maintain both quality and speed.

DevOps Workflow (Process)

DevOps Workflow (Tool)

9. DevOps values

I would like to split DevOps values into two groups: business values and organizational values.

  • Business values are more customer-centric:
    • How fast do we recover if there is a failure?
    • How can we pitch the exact MRR to a customer and acquire more customers?
    • How fast can we deliver the product to customers?
    • How do we roll out beta access quickly when there is an on-demand requirement?
  • Organizational values:
    • Building culture
    • Enabling communication and collaboration
    • Optimizing and automating the whole system
    • Enabling feedback loops
    • Decreasing silos
    • Metrics and measurement

10. Principle of DevOps

Automated: Automate as much as you can, in a linear and agile manner, so that you can build an end-to-end automated pipeline for the software development life cycle that effectively addresses quality, rework, manual work and cost. And it's not only about the delivery cycle; it is also about migration from one technology to another, from one cloud to another, etc.

Collaborative: The goal of this culture is to keep a hold on both development and operations. Keep an eye out and fix the gaps so things keep moving in an agile way, which needs a good amount of communication and coordination. By encouraging a collaborative environment, an organization gets ample ideas, which helps resolve issues much faster. The beauty of collaboration is that it catches unplanned and manual work at an early stage, which ends up giving a quality build and process.

Customer-centric approach: A DevOps team always reacts like a startup and must keep a finger on the pulse to measure customer demand. The metrics they generate give the business team insight into usage trends and burn rate. But of course, to find the signal in the noise, you should stay focused and collect only those metrics which really matter.

Performance orientation: Performance is a principle and a discipline that gives the team insight into the implications of bad performance. Having metrics and reports handy before moving to production gives confidence to both technology and business. This creates an opportunity to plan how to scale the infrastructure, how to handle a huge spike or high usage, and how to track infrastructure utilization.

Quality indicators per application: Another predefined principle is to set measurable quality indicators with predefined targets. Assigning a quality gate to these indicators, covering fitness for purpose and security, gives an opportunity to deliver a complete, quality application.

11. DevOps Key practices:

i) CI/CD 

When we say “continuous”, it doesn't translate to “always running”, but it does mean “always ready to run”.

Continuous integration is a development philosophy and set of practices that drives teams to check code into a version control system as often as possible. To keep your build clean and QA-ready, each developer's changes need to be validated by running automated tests against the build, for example with JUnit or iTest. The goal of CI is to put in place a consistent, automated way to build and test applications, which results in better collaboration between teams and, eventually, a better-quality product.

Continuous delivery is an adjunct of CI which makes sure that we can release new changes to customers quickly and in a sustainable way.

Typical CD involves the steps below (a minimal pipeline sketch follows the list):

  • Pull code from a version control system like Bitbucket and execute the build.
  • Execute any required infrastructure steps (command line or script) to stand up or tear down cloud infrastructure.
  • Move the build to the right compute environment.
  • Handle all of the configuration generation process.
  • Push application components to their appropriate services, such as web servers, API services, and database services.
  • Execute any steps required to restart services or call service endpoints that are needed by new code pushes.
  • Execute continuous tests and roll back environments if tests fail.
  • Provide log data and alerts on the state of the delivery.
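
As a concrete illustration, here is a minimal declarative Jenkins pipeline sketch covering build, test and deploy. The build commands and the deploy.sh script are hypothetical placeholders; substitute your project's own build tool and deployment steps.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and package; assumes a Maven project
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                // Run the automated test suite against the fresh build
                sh 'mvn test'
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                // Hypothetical script that ships the artifact and restarts services
                sh './scripts/deploy.sh staging'
            }
        }
    }
    post {
        failure {
            // Hook alerting (Slack, email) here so failures surface early
            echo 'Build or deployment failed'
        }
    }
}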

The table below will help us better understand the effort we need to put in and what we gain by putting CI/CD in place:

Continuous integration

Effort required:

a) Your team will need to write automated tests for each new feature, improvement or bug fix.

b) You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed.

c) Developers need to merge their changes as often as possible, at least once a day.

Gain:

a) Control over regressions, which are captured at an early stage by automated testing.

b) Less context switching, as developers are alerted as soon as they break the build and can work on fixing it before they move to another task.

c) Building the release is easy, as all integration issues have been solved early.

d) Testing costs are reduced drastically; your CI server can run hundreds of tests in a matter of seconds.

e) Your QA team spends less time testing and can focus on significant improvements to the quality culture.

Continuous delivery

Effort required:

a) You need a strong foundation in continuous integration, and your test suite needs to cover enough of your codebase.

b) Deployments need to be automated. The trigger is still manual, but once a deployment is started there shouldn't be a need for human intervention.

c) Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production.

Gain:

a) The complexity of deploying software is taken away. Your team doesn't have to spend days preparing for a release anymore.

b) You can release more often, thus accelerating the feedback loop with your customers.

c) There is much less pressure on decisions for small changes, hence encouraging faster iteration.

ii) Maintaining infra in a secure and compliant way

Keeping infrastructure secure and compliant is also a responsibility of DevOps; some organizations nowadays pitch this as SecOps. General, traditional security methodologies and rules fail when you have a multi-layer or multi-cloud system running your product, and they really fail when you are moving with a continuous delivery process. So the job is to ensure that the team has clear, neat visibility into the risks and vulnerabilities which may cause a big problem. Below are basic rules to avoid small loopholes (specific to GCP); a few illustrative gcloud commands follow the levels.

Level 1:

- Define your resource hierarchy.

- Create an Organization node and define the project structure.

- Automation of project creation which helps to achieve uniformity and testability.

Level 2:

- Manage your Google identities.

- Synchronize your existing identity platform.

- Have single-sign-on (SSO).

- Don't grant access user by user; instead, use resource-level permissions.

- Be specific while granting resource access, including the action type (read, write & admin).

- Always use service accounts to access metadata: they authenticate with keys instead of passwords, and GCP rotates the service account keys for code running on GCP.

Level 3:

- Add VPC for network definition and create firewall rules to manage traffic.

- Define a specific rule to control external access and avoid having unwanted port opens.

- Create a centralized network control template which can be applied across the project.

Level 4:

- Enable centralized logging and monitoring (Preferred Stackdriver).

- Enable audit logs, which help you collect the activity of Admin, System Event and Data Access in terms of who did what, where, and when.

Level 5:

- Enable Coldline data storage if there is a need to keep a copy for disaster recovery.

- For further reference on placing security standards in AWS, there is an article I posted a few months back.
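
As a hedged sketch of Levels 2 and 3, the gcloud commands below create a dedicated service account, grant it a narrow resource-level role, and open only internal HTTPS traffic on a VPC. The project, account and network names (my-project, ci-deployer, my-vpc) are hypothetical.

# Level 2: a dedicated service account instead of shared user credentials
gcloud iam service-accounts create ci-deployer --display-name="CI deploy account"

# Level 2: resource-level permission with a specific action scope
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:ci-deployer@my-project.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"

# Level 3: allow only internal HTTPS on the VPC; no unwanted open ports
gcloud compute firewall-rules create allow-internal-https \
    --network=my-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:443 --source-ranges=10.0.0.0/8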

12. DevOps Myths or What DevOps is not?

Before I mention the myths, I would clarify the biggest myth that every early-stage learner carries: that “DevOps practice can be rolled out in a day and output will be available from day one”. It is too early to reach that conclusion; as the definition of DevOps says, it's a culture and a process, and that cannot be built in a day. But of course, you will get the opportunity to overcome your mistakes at an early stage. Let's discuss a few more myths:

  1. It's not only about the tools (tools are one component of the whole DevOps practice)
  2. Dev and Ops teams should use the same set of tools (how to overcome: push them to integrate both toolchains)
  3. Only startups can follow this practice (Azure has published an article on DevOps best practices which says it can be applied anywhere)
  4. Joining DevOps conferences and collecting fancy stickers (good that you join, but don't pretend that alone earns you the DevOps tag)
  5. Pushing builds to production every 5 minutes (this is not what continuous delivery means)
  6. DevOps doesn't fit the existing system (overcome: you may need to find the right approach and make an attempt)

13. Benefits of DevOps

Business Benefits

a) Horizontal and vertical growth: When I say “horizontal and vertical growth”, I'm plotting customer satisfaction on the X-axis, business on the Y2-axis and time on the Y-axis. Now, the question is how DevOps helps growth along two axes, and my answer is the quick turnaround time for minor and major issues. Once we adopt DevOps, we scale and build in such a fashion that the graph shows a rapid jump in less time.


b) Improving ROI of data: Having DevOps in an organization ensures that we can extract a decent ROI from data at an early stage, more quickly. The software industry today runs on data, and to have control over it, a team needs end-to-end control of that data. DevOps helps the team crunch data in various ways by automating small jobs; with automation we can segregate and qualify data, which can then be populated in a dashboard or presented offline to the customer.

Technical Benefits

c) Scalability & quality: When a business starts reaching more users, we start looking to increase infrastructure and bandwidth. But this pops up two questions: whether we are scaling our infra the right way, and whether, with many people pushing changes (code commits/builds), quality stays the same or better than before. Both questions are now owned by the DevOps team. If the business pitches that we might hit 2000+ clients generating billions of requests, DevOps takes the responsibility and says yes, we can scale the infra at any point in time. And if at the same time the internal release team says it wants to deliver 10 features in the next 10 days independently, DevOps says quality can be maintained.

Culture Benefits

d) Agility & velocity: The key parameter in adopting DevOps is to improve the velocity of product development. DevOps enables agility, and when both are in sync, we can easily observe the velocity. End users' expectations are always high and, at the same time, delivery timespans are short. To achieve this, we have to ensure that we can roll out new features to customers at a much higher frequency; otherwise, your competitors may win the market.

e) Enabling transparency: The practice of total transparency is a key part of the DevOps culture. Sharing knowledge across the team gives you an opportunity to work faster and stay aligned with the goal. Transparency encourages an increasingly well-rounded team with a heightened shared understanding.

14. How to adopt a DevOps model

The ideal way is to pick a small part of a project or product, though sometimes we only start adopting DevOps when we hit a bottleneck. Wherever you start, a few things need to be taken care of: the goal should be clear and the team in sync to chase it; find the loopholes that turn into downtime; decide how testing (stress, performance, load) will avoid production glitches; and, at the same time, enable an automated deployment process. All of this can begin as a basic plan and later be elaborated in a detailed format. While adopting a DevOps model, make sure the team is always looking at metrics, so they can justify the numbers and steer their assumptions toward the goal. If you want a roadmap for DevOps adoption, you really need to find the gaps up front, along with the typical problems you face every day that hold up your release process or waste your team's time.

15. DevOps automation tools

Jenkins: Jenkins is an open source automation server which is used to automate software builds and deliver or deploy them. It can be installed through native system packages, Docker, or run standalone on any machine with a Java Runtime Environment (JRE) installed. In short, Jenkins enables the continuous integration which helps accelerate development. There are ample plugins available which enable integration at various DevOps stages, for example Git, the Maven 2 project, Amazon EC2, and HTML Publisher. More in-depth information can be found here in our training material on Jenkins, and if you are inquisitive about what sort of Jenkins questions are asked in job interviews, feel free to view our set of 22 Jenkins interview questions.

Ansible: An open-source IT automation engine which pushes the drudgery out of DevOps day-to-day life. Usually, Ansible helps with three day-to-day tasks: provisioning, configuration management and application deployment. The beauty is that it can automate traditional servers, virtualization platforms, or the cloud. It is built on playbooks which can be applied across an extensive variety of systems for deploying your app. To know more, you may have a look at our Ansible training material here or go through our set of 20 interview questions on Ansible.
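
To give a feel for playbooks, here is a minimal sketch that installs and starts nginx on a host group; the group name (webservers) and the apt module assume Debian/Ubuntu targets.

# site.yml: a minimal playbook sketch
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes

# Run it against an inventory file:
#   ansible-playbook -i inventory.ini site.yml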

Chef: An open source configuration management system that works on a master-client model. It has a transparent design and works from instructions which need to be defined properly. Before you plan to use this tool, make sure you have a proper Git practice going and some idea of Ruby, as Chef is built entirely on Ruby. The industry view is that it is good for development-focused environments and mature enterprise architectures. Our comprehensively detailed training course on Chef will give you more insight into this tool.
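
Since Chef instructions are Ruby, a recipe reads like code. A minimal sketch, with hypothetical package and service names:

# recipes/default.rb: install nginx and keep the service running
package 'nginx'

service 'nginx' do
  action [:enable, :start]
end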

Puppet: Puppet also works in a master-client setup and is model-driven. It is built on Ruby, but its configuration uses its own declarative language, somewhat close to JSON. Puppet gives you full-fledged configuration management and helps admins (as part of DevOps) add stability and maturity to config management. A more detailed explanation of Puppet and its functionality can be found in our training material.
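
For comparison, the same install-and-run intent in Puppet's declarative manifest language, again with hypothetical names:

# site.pp: declare the desired state; Puppet converges the node to match
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # order: install before managing the service
}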

Docker: A tool designed to give developers and administrators the flexibility to reduce the number of systems required, because Docker does not create a complete virtual operating system; instead, applications share the same Linux kernel as the host. So we can say we use Docker to create, deploy, and run applications using containers. As per stats published by Docker, over 3.5 million applications have been placed in containers using Docker, and 37 billion containerized applications have been downloaded. Specifically, Docker in CI/CD provides the opportunity to mirror a live server exactly and to run multiple dev environments from the same host with different configs and OS bases. You may visit our training course on Docker to get more information.
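
A minimal Dockerfile sketch for a small Python service; the file names and exposed port are hypothetical:

# Dockerfile: build a small image that shares the host kernel
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

# Build and run:
#   docker build -t my-app:1.0 .
#   docker run -d -p 8080:8080 my-app:1.0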

Kubernetes: A platform developed to manage containerized applications, providing high availability and scalability. According to usage we can scale the system down or up, perform rolling updates, roll back when needed, and switch traffic between two different versions of an application. We can have multiple instances with Kubernetes installed and operate them as a Kubernetes cluster. After that, you get an API endpoint for the cluster, configure kubectl, and you are ready to serve. Read our all-inclusive course on Kubernetes to gather more information on the same.
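
The rolling-update and rollback workflow mentioned above looks roughly like this with kubectl; the deployment name and image tag (my-app, my-app:2.0) are hypothetical:

# Start a rolling update to a new image version
kubectl set image deployment/my-app my-app=my-app:2.0

# Watch the rollout progress
kubectl rollout status deployment/my-app

# Roll back to the previous version if the release misbehaves
kubectl rollout undo deployment/my-app

# Scale with demand
kubectl scale deployment/my-app --replicas=5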

Docker and Kubernetes, although both widely used as DevOps automation tools, have notable differences in their setup, installation and attributes, clearly laid out in our blog on the differences between Docker and Kubernetes.

Alerting:

Pingdom: Pingdom is a platform which enables monitoring of the availability, performance, transactions (website hyperlinks) and incidents of your websites, servers or web applications. The beauty is that if you use a collaboration tool like Slack or Flock, you can integrate via a webhook (pretty simple, no code required) and easily get notified at any time. Pingdom also provides an API, so you can build your own customized dashboard (recently introduced), and the documentation is detailed and self-explanatory.
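
For instance, a Slack incoming webhook accepts a plain JSON POST, so an alert forwarder can be as small as one curl call; the hooks.slack.com path shown is a placeholder for your own webhook URL:

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "ALERT: checkout-service is DOWN (Pingdom check failed)"}' \
  https://hooks.slack.com/services/T0000/B0000/XXXXXXXX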


Nagios: An open source monitoring tool for computer networks. We can monitor servers, applications, incidents, etc., and you can configure email, SMS and Slack notifications, and even phone calls. Nagios is licensed under GNU GPLv2. Some major components which can be monitored with Nagios:

  • Once we install Nagios, we get a dashboard to monitor network services like SMTP, HTTP, SNMP, FTP, SSH, POP, etc., and can view the current network status, problem history, log files, notifications triggered by the system, and so on.
  • We can monitor server resources like disk drives, memory, processors, server load, system logs, etc.



Stackdriver: 

Stackdriver is again a monitoring tool, giving visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver Monitoring collects events and metadata from Google Cloud Platform and Amazon Web Services (AWS), consumes the data, and generates insights via dashboards, charts, and alerts. For alerting, we can integrate collaboration tools like Slack, PagerDuty, HipChat, Campfire, and more.
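
Stackdriver log entries can also be pulled from the command line with the standard gcloud CLI; the severity filter shown is an ordinary Logging filter expression:

# Fetch the ten most recent error-level log entries as JSON
gcloud logging read 'severity>=ERROR' --limit=10 --format=json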


Here is a sample log entry, with its fields grouped so we can see which parameters Stackdriver actually collects:

The entry combines log information, user details and authorization info, request type and caller IP, resource and operation details, and timestamp and status details:

{
  insertId:
  logName:
  operation: {
    first:
    id:
    producer:
  }
  protoPayload: {
    @type:
    authenticationInfo: {
      principalEmail:
    }
    authorizationInfo: [
      0: {
        granted:
        permission:
      }
    ]
    methodName:
    request: {
      @type:
    }
    requestMetadata: {
      callerIp:
      callerSuppliedUserAgent:
    }
    resourceName:
    response: {
      @type:
      id:
      insertTime:
      name:
      operationType:
      progress:
      selfLink:
      status:
      targetId:
      targetLink:
      user:
      zone:
    }
    serviceName:
  }
  receiveTimestamp:
  resource: {
    labels: {
      instance_id:
      project_id:
      zone:
    }
    type:
  }
  severity:
  timestamp:
}

Monitoring:

Grafana: An open source visualization tool that can be used on top of different data stores like InfluxDB, Elasticsearch and Logz.io. We can create comprehensive charts with smart axis formats (such as lines and points) thanks to Grafana's fast, client-side rendering, even over long time ranges, which uses Flot as the default option. There are three levels of access, Viewer, Editor and Admin, and we can even enable Google OAuth for tighter access control. A detailed information guide can be found here.


Elasticsearch: It's an open-source, real-time, distributed, RESTful search and analytics engine. It collects unstructured data and stores it in an optimized format ready for language-based search. The beauty of Elasticsearch lies in its scalability, speed, document-oriented design, and schema-free nature. It scales horizontally to handle enormous volumes of events per second, while automatically managing how indices and queries are distributed across the cluster for smooth operations.
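Because the engine is RESTful, indexing and searching are plain HTTP calls. A minimal sketch, assuming a recent Elasticsearch running locally on port 9200 and an index named app-logs (both are placeholders):

  # index a document
  curl -X POST "localhost:9200/app-logs/_doc" -H 'Content-Type: application/json' -d'
  {"message": "connection timeout on checkout service", "severity": "error"}'

  # full-text search for entries mentioning "timeout"
  curl -X GET "localhost:9200/app-logs/_search?pretty" -H 'Content-Type: application/json' -d'
  {"query": {"match": {"message": "timeout"}}}'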

Cost Optimization

reOptimize.io: Once we run ample servers, we usually end up burning a good amount of money, not intentionally, but simply because we lack clear visibility into spend. reOptimize helps here by providing detailed insight into cloud expenses. The integration can be done in 3-4 simple steps, but before that you might need to look into the prerequisites, which can be accessed here. Just a heads-up that it only needs read access, and the permissions docs can be found here.


16. DevOps vs Agile

  • DevOps culture can be enabled in the software industry to deliver reliable builds, whereas Agile is a generic culture that can be adopted in any department.
  • DevOps' key focus is end-to-end involvement in the delivery process, whereas Agile helps the management team push frequent releases.
  • DevOps enables quality builds with rapid delivery, whereas Agile keeps the team aware of frequent changes in each release and feature.
  • DevOps has no such scheduled metric and works to avoid unscheduled disruptions, whereas Agile sprints work within the immediate future, with a sprint life cycle varying between 7 and 30 days.
  • Team size also differs: DevOps works on collaboration and bridges a large set of teams, whereas in Agile the team can be minimal, even a single resource.

17. Future of DevOps

The industry is moving more to the cloud, which adds a few more responsibilities for DevOps. The immediate hot topic could be DevSecOps, because more automation leads to more connectivity, which means more exposure. AI and ML are data-centric and learning-based, which gives DevOps an opportunity to lend a hand in training ML models by correlating code, test results, user behavior, and production quality and performance. There is also an opportunity to break the stereotype that DevOps can only be adopted by startups; within the next 2-3 years it will surely become general practice in the enterprise.

18. Importance of DevOps training certification

Certifications work like an add-on, and an add-on always gives crisp, cumulative returns: adding a professional certificate to a resume gives it extra value. During the certification process, professionals help you understand DevOps culture in detail, and that deep dive gives an individual a clear picture. While choosing a certification, you should look into the vendor's reputation, the academic body granting approval, transparency, session hours, and several other components.

19. Conclusion: I have been working closely with and observing DevOps teams for a year or so, and I find that we learn something new every day. The deeper we dive, the more we see that a lot can be done, and achieved, in a couple of different ways. As the industry grows, the responsibility of DevOps seems to increase, which creates opportunities for professionals but also keeps setting a new bar for quality. Now that you know all about DevOps, feel free to read our blog on how you can become a DevOps Engineer.


MD Zaid Imam

Project Manager

Md Zaid Imam is currently serving as Project Manager at Radware. With a zeal for project management and business analytics, Zaid likes to explore UI/UX, backend development, and DevOps. Playing a crucial role at his current job, Zaid has helped his team deliver world-class product features that cater to current industry demands in the bot mitigation arena (processing 50 billion API calls per month). Zaid is a regular contributor on Hashnode.


Website : https://zaid.hashnode.dev
