If you have landed here, it means you want to know more about DevOps, and hey, you can admit it! The business community has been taking this trend to the next level, not because it looks fancy, but because the process has proven itself. The growth and adoption of this disruptive model are increasing companies' productivity. Here, we will get an idea of how this model works and how we can enable it across an organization. According to DevOps.com, between 2015 and 2016 the adoption of DevOps increased significantly, with a rise of 9 per cent in its usage rate. You may have a look at the DevOps Foundation training course to learn more about the benefits of learning DevOps.
1. What is DevOps?
DevOps is a practice and culture, a union of people, processes, and tools, that enables faster delivery. This culture helps to automate processes and narrow the gap between development and IT operations. DevOps is an abstract concept that focuses on key principles such as speed, rapid delivery, scalability, security, collaboration, and monitoring.
A definition from Gartner says:
“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology— especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”
2. History of DevOps
Let's first talk a bit about the history of cloud computing. The term “cloud computing” was coined in early 1996 at Compaq Computer, but it took a while for the platform to become easily accessible, even though Compaq claimed it was a $2 billion-per-year market. In August 2006, Amazon introduced cloud infrastructure that was easily accessible, and it became a trend: Google followed in April 2008, Microsoft in early 2010, and IBM in April 2011. This showed that all the giants strongly believed in this revolution and saw potential in the technology.
In the same era, the DevOps process was pitched at a conference in Toronto in 2008 by Patrick Debois and Andrew Shafer. They proposed that a better approach could be adopted to resolve the long-standing conflicts between development and operations. The idea took off again when two Flickr employees delivered a seminar on how they were able to make more than 10 deployments in a day. They came up with a proven model to resolve the conflicts between development and operations, in which build, test, and deploy form one integrated development and operations process.
3. Why has the market adopted DevOps so aggressively?
Louis Columbus mentioned in Forbes that by the end of 2020, 83% of workloads will be on the cloud, with AWS and Google as the major market contributors. The new era is working more on AI, ML, crypto, big data, and so on, which plays a key role in cloud adoption today; at the same time, IT professionals say that security is their biggest concern in adopting cloud computing. Moreover, the cloud has helped many startups grow in their initial stages and later become leaders in their marketplaces, which has given confidence to fresh ideas.
This entire cloud enablement has given teams more confidence to adopt a DevOps culture, as cloud expansion allows them to experiment more with less risk.
4. Why is DevOps used?
- Reduces the day-to-day manual work done by the IT team and avoids manual errors while designing infrastructure.
- Enables a smooth platform for technology or cloud migration.
- Gives amazing version control capabilities.
- Offers a better way to handle resources, whether cloud infrastructure or manpower.
- Gives an opportunity to pitch customers a realistic feature rollout commitment.
- Enables a better infrastructure scaling process, even when you receive 4X traffic.
- Enables the opportunity to build a stable infrastructure.
5. Business need and value of DevOps
Let's understand this through the story of the leading online video streaming platform Netflix. In 2000, a company called Blockbuster LLC had the opportunity to buy Netflix for $50 million. Netflix was then working only on a DVD-by-mail service, but in 2016 it made $8.83 billion in revenue, tremendous growth in this vertical. Any idea how this happened? It started with an incident at the Netflix office: due to a database corruption, DVD shipping was disrupted for three days, which forced management to move from the relational systems in their data centers to the cloud, because the incident had a massive impact on core values. The shift happened from vertical to horizontal scaling, with AWS later providing the cloud service; I have even read that in the early stages Netflix worked together with the AWS team to scale the infrastructure. Today, Netflix serves roughly 150,000,000 hours of video content to around 86,000,000 members across 180 countries.
6. Goal of DevOps
- Control quality and increase the frequency of deployments.
- Enable risk-free experimentation.
- Improve mean time to recovery and backup.
- Handle release failures without losing live data.
- Avoid unplanned work and technical failure.
- Achieve a compliant process and control over the audit trail.
- Alert on and monitor system failures at an early stage.
- Maintain SLAs in a uniform fashion.
- Enable control for the business team.
7. How Does DevOps work
The DevOps model usually keeps the development and operations teams tightly coupled; sometimes they are merged, and together they roll out the entire release cycle. Sometimes the security and network teams are also involved alongside development and operations, and this has slowly been coined DevSecOps. The integration of these teams makes sure they can handle development, testing, deployment, infrastructure provisioning, monitoring, network firewalling, and infrastructure accessibility and accountability. This helps them build a clean application development lifecycle and deliver a quality product.
8. DevOps workflow/Lifecycle
DevOps Workflow (Process)
The DevOps workflow ensures that we spend time on the right things; in other words, the right time is invested in building the product and infrastructure. How it enables this, we can analyze in the diagram below. Looking at the diagram, the DevOps process seems like an extended version of agile methodology, but that doesn't mean it cannot fit other SDLC methodologies; there is enough scope in other SDLC processes as well. Once we merge the process and tool workflow diagrams, they showcase a general DevOps environment. The team keeps pushing releases, and at the same time, by enabling automation and tools, we try to maintain quality and speed.
DevOps Workflow (Tool)
9. DevOps values
I would like to split DevOps values into two groups:
- Business values
- Organizational values
Business values are customer-centric:
- How fast do we recover when there is a failure?
- How can we pitch the exact MRR to a customer and acquire more customers?
- How fast can we deliver the product to customers?
- How can we roll out beta access as soon as possible when there is an on-demand requirement?
Organizational values:
- Building culture
- Enabling communication and collaboration
- Optimizing and automating the whole system
- Enabling feedback loops
- Decreasing silos
- Metrics and measurement
10. Principle of DevOps
Automated: Automate as much as you can, in a linear and agile manner, so you can build an end-to-end automated pipeline for the software development life cycle that effectively addresses quality, rework, manual work, and cost. And it's not only about the cycle; it is also about migration from one technology to another, one cloud to another, and so on.
Collaborative: The goal of this culture is to keep a hold on both development and operations. Keep an eye on the gaps and fix them to keep things moving in an agile way, which needs a good amount of communication and coordination. By encouraging a collaborative environment, an organization gets ample ideas, which helps resolve issues much faster. The beauty of collaboration is that it handles unplanned and manual work at an early stage, which ends up giving a quality build and process.
Customer-centric approach: A DevOps team should always act like a startup and keep a finger on the pulse to measure customer demand. The metrics the team generates give the business team insight into usage trends and burn rate. But of course, to find the signal in the noise, you should stay focused and collect only those metrics that really matter.
Performance orientation: Performance is a principle and a discipline that gives the team insight into the implications of bad performance. Having metrics and reports handy before moving to production gives confidence to both technology and business. It also offers an opportunity to plan how to scale the infrastructure, how to handle a huge spike or high usage, and how to utilize the infrastructure.
Quality indicators per application: Another predefined principle is to set measurable quality indicators with predefined targets and to assign quality gates to them; covering fitness for purpose and security gives an opportunity to deliver a complete, quality application.
11. DevOps key practices
When we say “continuous,” it doesn't translate to “always running,” but it does mean “always ready to run.”
i) Continuous integration and continuous delivery
Continuous integration (CI) is a development philosophy and set of practices that drives teams to check code into a version control system as often as possible. To keep your build clean and QA-ready, each developer's changes need to be validated by running automated tests against the build, for example with JUnit or iTest. The goal of CI is to put in place a consistent, automated way to build and test applications, which results in better collaboration between teams and, eventually, a better-quality product.
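As a rough illustration of the CI gate just described, the sketch below simulates a server that runs each commit's automated tests and only merges green commits to the main branch. All names here are illustrative stand-ins, not a real CI server's API.

```python
# Toy CI gate: every pushed commit has its test suite run; only commits
# whose tests all pass are merged into the main branch.

def run_tests(commit):
    """Run the commit's automated tests; return True if all pass."""
    return all(test() for test in commit["tests"])

def integrate(main_branch, commits):
    """Merge only the commits whose test suite is green."""
    broken = []
    for commit in commits:
        if run_tests(commit):
            main_branch.append(commit["id"])   # safe to merge
        else:
            broken.append(commit["id"])        # alert the developer instead
    return main_branch, broken

main, failed = integrate(
    [],
    [
        {"id": "c1", "tests": [lambda: 1 + 1 == 2]},            # passes
        {"id": "c2", "tests": [lambda: True, lambda: False]},   # one failing test
    ],
)
print(main)    # ['c1']
print(failed)  # ['c2']
```

The point is the feedback loop: a broken commit never reaches the shared branch, so developers are alerted while the context is still fresh.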
Continuous delivery (CD) is an extension of CI which makes sure that we can release new changes to customers quickly and in a sustainable way.
A typical CD setup involves the steps below:
- Pull code from a version control system like Bitbucket and execute the build.
- Execute any required infrastructure steps, via command line or script, to stand up or tear down cloud infrastructure.
- Move the build to the right compute environment.
- Handle the whole configuration generation process.
- Push application components to their appropriate services, such as web servers, API services, and database services.
- Execute any steps required to restart services or call service endpoints needed for new code pushes.
- Execute continuous tests, and roll back the environment if tests fail.
- Provide log data and alerts on the state of the delivery.
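The steps above can be sketched as a small pipeline runner: stages run in order, every outcome is logged (the "log data and alerts" step), and a failing continuous test triggers a rollback. The stage names and functions are hypothetical placeholders, not a real CD tool's API.

```python
# Toy CD pipeline: run stages in order, log each result, roll back on failure.

class Pipeline:
    def __init__(self):
        self.log = []  # delivery log / alert data

    def run(self, stages, rollback):
        for name, stage in stages:
            ok = stage()
            self.log.append((name, "ok" if ok else "failed"))
            if not ok:
                rollback()  # tear the environment back down
                self.log.append(("rollback", "ok"))
                return False
        return True

deployed = {"version": "v1"}

def deploy():      # stands in for the build + push-to-environment steps
    deployed["version"] = "v2"
    return True

def smoke_test():  # continuous tests against the new version
    return False   # pretend the new build fails its checks

def rollback():
    deployed["version"] = "v1"

p = Pipeline()
ok = p.run([("deploy", deploy), ("test", smoke_test)], rollback)
print(ok, deployed["version"])  # False v1
```

Because the failed test rolled the environment back, customers never see the broken version, which is exactly the safety net CD is meant to provide.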
The breakdown below will help us better understand the effort we need to put in, and what we gain, by putting CI/CD in place:

Effort required for continuous integration:
- Your team will need to write automated tests for each new feature, improvement, or bug fix.
- You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed.
- Developers need to merge their changes as often as possible, at least once a day.

Gains from continuous integration:
- Regressions are captured at an early stage by automated testing.
- Less context switching, as developers are alerted as soon as they break the build and can work on fixing it before moving to another task.
- Building the release is easy, as all integration issues have been solved early.
- Testing costs are reduced drastically: your CI server can run hundreds of tests in a matter of seconds.
- Your QA team spends less time testing and can focus on significant improvements to the quality culture.

Effort required for continuous delivery:
- You need a strong foundation in continuous integration, and your test suite needs to cover enough of your codebase.
- Deployments need to be automated. The trigger is still manual, but once a deployment is started there shouldn't be a need for human intervention.
- Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production.

Gains from continuous delivery:
- The complexity of deploying software has been taken away. Your team doesn't have to spend days preparing for a release anymore.
- You can release more often, thus accelerating the feedback loop with your customers.
- There is much less pressure on decisions for small changes, hence encouraging iterating faster.
ii) Maintaining infrastructure in a secure and compliant way
Keeping infrastructure secure and compliant is also a DevOps responsibility, and some organizations nowadays pitch it as SecOps. General, traditional security methodologies and rules start to fail when you have a multi-layer or multi-cloud system running for your product organization, and they sometimes fail badly when you are moving with a continuous delivery process. So the job is to ensure that the team has clear, neat visibility into the risks and vulnerabilities that may cause a big problem. Below are some basic rules to avoid small loopholes (specific to GCP):
- Define your resource hierarchy.
- Create an organization node and define the project structure.
- Automation of project creation which helps to achieve uniformity and testability.
- Manage your Google identities.
- Synchronize your existing identity platform.
- Have single-sign-on (SSO).
- Instead of adding multiple users broadly, grant resource-level permissions.
- Be specific while providing resource access, including the action type (read, write, or admin).
- Always use service accounts to access metadata, because authentication then uses keys instead of passwords, and GCP rotates the service account keys for code running on GCP.
- Add VPC for network definition and create firewall rules to manage traffic.
- Define a specific rule to control external access and avoid having unwanted port opens.
- Create a centralized network control template which can be applied across the project.
- Enable centralized logging and monitoring (Preferred Stackdriver).
- Enable audit logs, which help you collect the activity of admin actions, system events, and data access, in terms of who did what, where, and when.
- Enable Coldline data storage if there is a need to keep a copy for disaster recovery.
- For further reference on placing security standards in AWS, there is an article I posted a few months back.
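To make the "avoid having unwanted ports open" rule concrete, here is a small audit sketch that flags firewall rules exposed to the whole internet on a port outside an allow-list. The rule shape is a simplified, hypothetical stand-in for what a firewall-rules listing would return, not the real GCP API schema, and the allow-list is an assumption for illustration.

```python
# Toy firewall audit: flag rules open to 0.0.0.0/0 on non-allow-listed ports.

ALLOWED_PUBLIC_PORTS = {80, 443}  # assumption: only web traffic may be public

def risky_rules(rules):
    flagged = []
    for rule in rules:
        public = "0.0.0.0/0" in rule["source_ranges"]
        bad_ports = set(rule["ports"]) - ALLOWED_PUBLIC_PORTS
        if public and bad_ports:
            flagged.append((rule["name"], sorted(bad_ports)))
    return flagged

rules = [
    {"name": "allow-https", "source_ranges": ["0.0.0.0/0"], "ports": [443]},
    {"name": "allow-ssh-world", "source_ranges": ["0.0.0.0/0"], "ports": [22]},
    {"name": "internal-db", "source_ranges": ["10.0.0.0/8"], "ports": [5432]},
]
print(risky_rules(rules))  # [('allow-ssh-world', [22])]
```

Running a check like this from a scheduled job is one way to turn the checklist above into continuous, automated enforcement rather than a one-time review.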
12. DevOps Myths or What DevOps is not?
Before I list the myths, I would like to clear up the biggest one, which every early-stage learner carries: “A DevOps practice can be rolled out in a day, and output will be available from day one.” It is too early to reach that conclusion, as the definition of DevOps always says it is a culture and a process, which cannot be built in a day. But of course, you will get the opportunity to overcome your mistakes at an early stage. Let's discuss a few more myths:
- It's not only about the tools. (Tools are just one component of the whole DevOps practice.)
- The dev and ops teams must use the same set of tools. (How to overcome: push them to integrate both.)
- Only startups can follow this practice. (Azure has published an article on DevOps best practices which says it can be applied anywhere.)
- Joining DevOps conferences and collecting fancy stickers makes you DevOps. (It's good that you join, but don't pretend that you now carry the DevOps tag.)
- Pushing builds to production every 5 minutes. (This is not what continuous delivery means.)
- DevOps doesn't fit the existing system. (How to overcome: you may need to find the right approach and make an attempt.)
13. Benefits of DevOps
a) Horizontal and vertical growth: When I use “horizontal and vertical growth,” I am plotting time on the X-axis, customer satisfaction on the Y-axis, and business on a secondary Y-axis. The question is how DevOps drives growth along both axes, and my answer is the quick turnaround time for minor and major issues. Once we adopt DevOps, we scale and build in such a fashion that the graph shows a rapid jump in less time.
b) Improving the ROI of data: Having DevOps in an organization ensures that we can derive a decent ROI from data at an early stage, more quickly. The software industry today runs on data, so a team should have end-to-end control over it. DevOps helps the team crunch data in various ways by automating small jobs. Through automation, we can segregate and justify data, which can then be populated in a dashboard or presented offline to the customer.
c) Scalability and quality: When a business starts reaching more users, we start looking to increase infrastructure and bandwidth. But this raises two questions: whether we are scaling our infrastructure the right way, and, with many people pushing changes (code commits and builds), whether quality stays the same or improves. Both questions are now owned by the DevOps team. If the business pitches that we might hit 2000+ clients generating billions of requests and we must be ready to handle it, DevOps takes that responsibility and says yes, we can scale the infrastructure at any point in time. And if at the same time the internal release team says it wants to deliver 10 features independently in the next 10 days, DevOps says quality can be maintained.
d) Agility and velocity: The key reason for adopting DevOps is to improve the velocity of product development. DevOps enables agility, and when both are in sync, we can easily observe the velocity. The expectations of end users are always high, and at the same time, the deliverable time span is short. To achieve this, we have to ensure that we can roll out new features to customers at a much higher frequency; otherwise, competitors may win the market.
e) Enabling transparency: A practice of total transparency is a key element of the DevOps culture. Sharing knowledge across the team gives you an opportunity to work faster and stay aligned with the goal. Transparency encourages an increasingly well-rounded team with a heightened understanding.
14. How to adopt a DevOps model
The ideal way is to pick a small part of a project or product, though sometimes we start adopting DevOps only when we hit a bottleneck. Wherever you start, a few things need to be taken care of: the goal should be clean and the team in sync to chase it; the loopholes that lead to downtime should be identified; testing (stress, performance, load) should be in place to avoid production glitches; and an automated deployment process should be enabled at the same time. All this can start with a basic plan and later be elaborated in a detailed format. While adopting a DevOps model, make sure the team is always looking into metrics, so it can justify the numbers and steer assumptions toward the goal. If you want a roadmap for DevOps adoption, you really need to find the gaps up front, along with the typical problems you face every day that hold up your release process or waste your team's time.
15. DevOps automation tools
Jenkins: Jenkins is an open-source automation server which is used to automate the software build and to deliver or deploy it. It can be installed through native system packages or Docker, or even run standalone on any machine with a Java Runtime Environment (JRE) installed. In short, Jenkins enables continuous integration, which helps to accelerate development. There are ample plugins available which enable integration with various DevOps stages, for example Git, the Maven 2 project, Amazon EC2, and HTML Publisher. More in-depth information can be found in our training material on Jenkins, and if you are curious about the sort of Jenkins questions asked in job interviews, feel free to view our set of 22 Jenkins interview questions.
Ansible: An open-source platform which automates the IT engine and takes the drudgery out of day-to-day DevOps life. Usually, Ansible helps with three everyday tasks: provisioning, configuration management, and application deployment. The beauty is that it can automate traditional servers, virtualization platforms, and the cloud alike. It is built on playbooks, which can be applied to an extensive variety of systems for deploying your app. To know more, have a look at our Ansible training material or go through our set of 20 interview questions on Ansible.
Chef: An open-source configuration management system which works on a master-client model. It has a transparent design and works based on instructions which need to be defined properly. Before you plan to use this tool, make sure that you have a proper Git practice in place and some familiarity with Ruby, as Chef is completely built on Ruby. The industry says it is good to have for development-focused environments and mature enterprise architectures. Our comprehensively detailed training course on Chef will give you more insight into this tool.
Puppet: Puppet also works on a master-client setup and is model-driven. It is built on Ruby, but you can customize it with a scripting language somewhat close to JSON. Puppet helps you take control of full-fledged configuration management, and it helps admins (as part of DevOps) add stability and maturity to configuration management. A more detailed explanation of Puppet and its functionality can be found in our training material.
Docker: A tool designed to help developers and administrators reduce the number of systems they need, since Docker doesn't create a complete virtual operating system; instead, it allows applications to use the same Linux kernel as the host. So we can say we use Docker to create, deploy, and run applications using containers. As per stats shared by Docker, over 3.5 million applications have been placed in containers using Docker, and 37 billion containerized applications have been downloaded. Specifically, Docker in CI/CD provides the opportunity to have an environment exactly like the live server and to run multiple dev infrastructures from the same host with different configs and OSes. You may visit our training course on Docker for more information.
Kubernetes: A platform developed to manage containerized applications, providing high availability and scalability. According to usage, we can downgrade or upgrade the system, perform rolling updates, roll back, and switch traffic between two different versions of the application. We can have multiple instances with Kubernetes installed and operate them as a Kubernetes cluster. After that, you get an API endpoint for the cluster, configure kubectl, and you are ready to serve. Read our all-inclusive course on Kubernetes to gather more information.
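The rolling-update behaviour mentioned above can be pictured with a tiny simulation: replicas are replaced one at a time, so some instances always keep serving traffic during the upgrade. This mimics what a Kubernetes Deployment does conceptually; it is not the Kubernetes API itself.

```python
# Conceptual sketch of a rolling update: replace one replica at a time.

def rolling_update(replicas, new_version):
    history = []
    for i in range(len(replicas)):
        replicas[i] = new_version        # replace one pod at a time
        history.append(list(replicas))   # cluster state after each step
    return history

states = rolling_update(["v1", "v1", "v1"], "v2")
for state in states:
    print(state)
# ['v2', 'v1', 'v1'] then ['v2', 'v2', 'v1'] then ['v2', 'v2', 'v2']
```

Because at least two of the three replicas stay up at every step, the application remains available throughout the upgrade, and reversing the same walk gives you the rollback path.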
Docker and Kubernetes, although both widely used as DevOps automation tools, have notable differences in their setup, installation, and attributes, clearly laid out in our blog on the differences between Docker and Kubernetes.
Pingdom: Pingdom is a platform which enables monitoring to check the availability, performance, transactions (website hyperlinks), and incidents of your websites, servers, or web applications. The beauty is that if you use a collaboration tool like Slack or Flock, you can integrate it using a webhook (pretty simple, no code required) and easily get notified at any time. Pingdom also provides an API, so you can build your own customized dashboard (recently launched), and the documentation has enough detail to be self-explanatory.
Nagios: An open-source monitoring tool to monitor computer networks. We can monitor servers, applications, incidents, and so on, and we can configure email, SMS, and Slack notifications, and even phone calls. Nagios is licensed under the GNU GPLv2. Listed below are the major components that can be monitored with Nagios:
- Once we install Nagios, we get a dashboard to monitor network services like SMTP, HTTP, SNMP, FTP, SSH, and POP, and we can view the current network status, problem history, log files, notifications triggered by the system, and so on.
- We can monitor server resources like disk drives, memory, processors, server load, system logs, etc.
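Nagios checks are small plugins that report status through exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). Below is a minimal sketch of a disk-usage check with warning and critical thresholds; the threshold values and the sample usage figure are illustrative assumptions.

```python
# Nagios-style check: map a metric against warn/crit thresholds
# and return the conventional plugin status code plus a message.

OK, WARNING, CRITICAL = 0, 1, 2

def check_disk(used_percent, warn=80, crit=90):
    """Return a Nagios-style (status, message) pair."""
    if used_percent >= crit:
        return CRITICAL, f"DISK CRITICAL - {used_percent}% used"
    if used_percent >= warn:
        return WARNING, f"DISK WARNING - {used_percent}% used"
    return OK, f"DISK OK - {used_percent}% used"

status, message = check_disk(85)
print(status, message)  # 1 DISK WARNING - 85% used
```

A real plugin would read the actual disk usage and call `sys.exit(status)` so Nagios can pick the result up, but the threshold logic is the heart of every check.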
Stackdriver: Another monitoring tool, which gives visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver monitoring collects events and metadata from Google Cloud Platform and Amazon Web Services (AWS). It consumes this data and generates insights via dashboards, charts, and alerts, and for alerting we can integrate collaboration tools like Slack, PagerDuty, HipChat, Campfire, and more.
Below is a sample log entry showing the parameters Stackdriver collects; I have separated them into groups to make it easier to see what it actually logs:
- Log information
- User details and authorization info
- Request type and caller IP
- Resource and operation details
- Timestamp and status details
Grafana: An open-source visualization tool which can be used on top of different data stores like InfluxDB, Elasticsearch, and Logz.io. We can create comprehensive charts with smart axis formats (such as lines and points) thanks to Grafana's fast, client-side rendering, even over long ranges of time, which uses Flot as the default renderer. There are three levels of access, Viewer, Editor, and Admin, and we can even enable Google authentication for good access control. A detailed information guide can be found here.
Elasticsearch: An open-source, real-time, distributed, RESTful search and analytics engine. It collects unstructured data and stores it in a curated format, optimized and ready for language-based search. The beauty of Elasticsearch lies in its scalability, speed, document orientation, and schema-free nature. It scales horizontally to handle an enormous number of events per second, while automatically managing how indices and queries are distributed across the cluster for smooth operation.
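The speed of language-based search in engines like Elasticsearch rests on an inverted index: a map from each term to the documents that contain it. The toy sketch below illustrates the idea only; it is nothing like Elasticsearch's actual implementation, and the sample documents are made up.

```python
# Toy inverted index: term -> set of document ids containing that term.
from collections import defaultdict

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, term):
    """Look the term up directly instead of scanning every document."""
    return sorted(index.get(term.lower(), set()))

docs = {
    1: "DevOps enables faster delivery",
    2: "Elasticsearch enables realtime search",
}
index = build_index(docs)
print(search(index, "enables"))  # [1, 2]
print(search(index, "search"))   # [2]
```

Because a query is a dictionary lookup rather than a scan of every document, search stays fast even as the document count grows, which is the property the paragraph above describes.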
reOptimize.io: Once we run a good number of servers, we usually end up burning a fair amount of money, not intentionally, but for lack of clear visibility. reOptimize helps here by providing detailed insight into cloud expenses. The integration can be done in 3-4 simple steps, but before that you might need to look into the prerequisites, which can be accessed here. Just a heads-up: they only need read access for all of this, and the permissions docs can be found here.
16. DevOps vs Agile
| DevOps | Agile |
| --- | --- |
| A culture enabled in the software industry to deliver reliable builds. | A generic culture which can be deployed in any department. |
| The key focus area is end-to-end involvement in the process. | Helps the management team push frequent releases. |
| Enables quality builds with rapid delivery. | Keeps the team aware of frequent changes in any release or feature. |
| Has no scheduled sprint metric; works to avoid unscheduled disruptions. | Sprints work within the immediate future; a sprint life cycle varies between 7 and 30 days. |
| Works on collaboration and bridges a big set of teams. | Team size can differ and can be as minimal as a single resource. |
17. Future of DevOps
The industry is moving more and more to the cloud, which brings a few more responsibilities to DevOps. The immediate hot topic could be DevSecOps, because more automation leads to more connectivity, which means more exposure. AI and ML are data-centric and learning-based, which gives DevOps an opportunity to lend a hand in training ML models and in uniquely analyzing and correlating code, test results, user behavior, and production quality and performance. There is also an opportunity to break the stereotype that DevOps can only be adopted by startups; in the next 2-3 years, it will surely become a general practice in the enterprise.
18. Importance of DevOps training certification
Certifications work like an add-on, and an add-on always gives crisp, cumulative results. Adding a professional certificate to a resume gives it extra value. During the certification process, professionals help you understand DevOps in detail, and the deep dive into DevOps culture helps an individual get a clear picture. While choosing a certification, you should look into the vendor's reputation, the academic body giving the approval, the transparency, the session hours, and many more components.
19. Conclusion: I have been working closely with and observing DevOps teams for a year or so, and I find that every day we learn something new. The deeper we dive, the more we see that a lot can be done, and achieved in a couple of different ways. As the industry grows, the responsibility of DevOps seems to increase, which creates opportunities for professionals but always sets a new bar for quality. Now that you know all about DevOps, feel free to read our blog on how you can become a DevOps Engineer.