Why Stop Inventing New DevOps Combinations?

DevOps - What's in a name?

The term DevOps is well known by now. It was initially introduced by Patrick Debois, a Belgian IT consultant who organized an agile-oriented event in October 2009 and named it DevOpsDays, targeting not only developers but also systems administrators, managers, and toolsmiths from all over the world. After the conference, the conversations continued on Twitter under the hashtag #DevOps.

If you want to know more about the origin of DevOps, you can check the video below, which gives a lot of background on why Patrick Debois initially started the DevOpsDays conference:


DevOps and the rise of combinations and derivatives

With the increasing popularity of DevOps, more and more people have started to give their own definition of DevOps. The definitions that go around can differ, depending on which aspect(s) of DevOps you want to focus on.

In a previous article, I wrote about how to explain DevOps in 5 letters, CALMS or CALMR, i.e. the CALMS framework for DevOps.
[Figure: the CALMS model]

Some other definitions tend to focus primarily on the automation aspect, omitting the agile foundation. As a consequence, you get the first combination of DevOps, named BizDevOps or BusDevOps. There are different interpretations of what BizDevOps actually means.

“BizDevOps, also known as DevOps 2.0, is an approach to software development that encourages developers, operations staff and business teams to work together so the organization can develop software more quickly, be more responsive to user demand and ultimately maximize revenue.”

At the same time, it is the most disputable definition, because it assumes that DevOps is mainly a technology-driven initiative that hardly involves business people. But as mentioned in my previous article, the foundation of DevOps is culture, which goes back to the agile principles. And we all know that agile without business is only symptomatic. So DevOps without business is as symptomatic as agile without business.

According to a DZone article, DevOps focuses on a single application or system, whereas BizDevOps focuses on the entire enterprise, with all its complex processes and the mixture of applications and systems that support those processes.

According to this article, BizDevOps provides an answer to dealing with:

[Figure: the concerns that BizDevOps addresses]
OK, fair point, but these aspects could just as well be tackled by defining proper value streams and Agile Release Trains to deal with all the links and dependencies between these systems and applications. I don't see the need to come up with a different term.

I guess you understand by now that I am not a big fan of the term BizDevOps and the confusion it creates. But it can get worse. It was most likely some clever tool vendors who came up with the term DevSecOps. And if the tool vendors didn't invent it, they were at least clever enough to jump on the bandwagon to support the need for more security awareness in DevOps.

Nowadays, even large tool vendors use the term DevSecOps instead of DevOps.

Here's my opinion on this: security should be an integral part of DevOps. It should be part of the culture:

Don't only think about what something should do functionally, but also about what can go wrong (think of abuse or misuse cases). Security is also part of the automation: all security-related tests should be automated as much as possible. Think of scanning for vulnerabilities in your own source code and in the external libraries you use, scanning your container images for vulnerabilities, or even, to some extent, automated penetration testing. Security is also part of the Lean principles: when a security test in your build pipeline fails (e.g. a scan of your source code discovers a critical vulnerability), you stop the line, as sketched in the example below.
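To make that stop-the-line idea concrete, here is a minimal sketch of a pipeline gate in Python. It assumes a vulnerability scanner has already written its findings to a JSON report; the report layout (a list of objects with id, severity, and package fields) and the file name are illustrative assumptions, not the schema of any particular tool.

```python
#!/usr/bin/env python3
"""Minimal stop-the-line gate for a build pipeline (illustrative sketch)."""
import json
import sys

# Severities that stop the line; tune this to your own risk appetite.
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def gate(report_path: str) -> int:
    # Assumed report layout: a JSON list of findings, e.g.
    # [{"id": "CVE-2021-44228", "severity": "CRITICAL", "package": "log4j-core"}]
    with open(report_path) as f:
        findings = json.load(f)

    blockers = [v for v in findings if v.get("severity") in BLOCKING_SEVERITIES]
    for v in blockers:
        print(f"BLOCKING: {v['id']} ({v['severity']}) in {v.get('package', '?')}")

    # A non-zero exit code fails the CI stage: the line stops here.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

Wired into a pipeline stage right after the scan step, a gate like this makes the Lean principle executable: no human decision is needed to halt a build that carries a known critical vulnerability.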

So again, there is no reason why the term DevSecOps should exist at all.

Now that we have business and security covered, we can go on and see who else could feel denied, or at least ignored. Maybe DBAs? Or anyone else involved in data management? Maybe that is the reason why we also have DevDataOps nowadays.

I could go on like this for a while. But you get the point by now: it is useless.

Maybe the DAD is right!

I recently read an interesting article on Disciplined Agile Delivery, the information portal of Mark Lines and Scott Ambler about their Disciplined Agile Delivery framework, or DAD for short. DAD is not, as they call it, an agile methodology, but a process decision framework. DAD is the kernel of a layered model, like an onion, that they call Disciplined Agile and that consists of the following layers:

[Figure: the Disciplined Agile layered model]
Let’s explore each layer of the Disciplined Agile framework shown in the diagram.

1. Disciplined Agile Delivery (DAD)

The Disciplined Agile Delivery (DAD) layer covers initial modeling and planning, forming the team, securing funding, continuous architecture, continuous testing, continuous development, and governance all the way through the lifecycle. The DAD framework supports multiple delivery lifecycles: a basic/agile lifecycle based on Scrum, a lean lifecycle based on Kanban, and a modern agile lifecycle for continuous delivery. This layer addresses all aspects of solution delivery.

2. Disciplined DevOps

Disciplined DevOps streamlines IT solution development and IT operations activities, and supports organizational IT activities, to deliver more effective outcomes to the organization.

3. Disciplined Agile IT (DAIT)

The DAIT layer helps to understand how to apply agile and lean strategies to IT organizations. It comprises all IT-level activities, such as enterprise architecture, data management, portfolio management, IT governance, and other capabilities.

4. Disciplined Agile Enterprise (DAE)

A Disciplined Agile Enterprise can anticipate and respond quickly to changes in the marketplace by facilitating change through its organizational culture and structure. This layer applies to organizations that have a learning mindset in their mainstream business and that rely on underlying lean and agile processes to drive innovation.

The second layer, Disciplined DevOps, deals exactly with what I mentioned before: the different derivatives and combinations of DevOps. The authors start by answering the question of why it is so difficult to come to a common definition of DevOps:

Specialized IT practitioners
Many IT professionals still tend to specialize and choose a focus, like DBA, enterprise architect, operations engineer, or whatever. Each discipline focuses on its own aspect of DevOps.

Agilists are focused on continuous delivery
Because of their focus on releasing daily or even several times a day, many discussions deal with bringing new features to production faster and more frequently, without paying attention to all aspects of DevOps.

Operations professionals are often frustrated
Systems administrators are crunched between the push of development teams to deliver faster and more frequently, and the typically stringent service management processes they have to deal with, which are not yet adapted to the need for more frequent changes.

Tool vendors have limited offerings
A fool with a tool is still a fool… DevOps tool vendors only focus on the DevOps aspects that their tools cover.

Service vendors have limited offerings
Similar to tool vendors, service vendors only focus on the DevOps aspects that their services can currently cover.

Tool vendors treat DevOps as a marketing buzzword
Surfing the waves of the hype, vendors might be tempted to rebrand their existing toolset to something DevOps-ish, because it sounds better in a sales pitch. Sounds like window dressing…

The DevOps = Cloud vision

Apparently, some people think that implementing DevOps in your organization can only succeed if you move to a cloud-based platform. Although cloud-native development practices are a facilitator for implementing DevOps, they are not a requirement. And moving to a cloud platform definitely isn't a requirement either.

All these reasons explain why people come up with DevOps combinations that answer only part of the problem.

Disciplined DevOps mentions the following visions:

1. BizDevOps

BizDevOps is a basic DevOps vision that explicitly brings the customers into the picture. BizDevOps is also called BusDevOps. DevOps is not just for IT delivery teams; it is potentially applicable to any team supporting an incremental delivery lifecycle. The BizDevOps workflow includes business operations: the activities of delivering products and services to the organization's customers. BusDevOps seeks to streamline the entire value stream, not just the IT portion of it. Its workflow is depicted in the diagram below.

[Figure: BizDevOps workflow]


2. DevSecOps

Another common improvement over the basic DevOps vision is something called DevSecOps. The aim behind this vision is to ensure security by understanding the various security issues, adopting the latest security practices, and finding and addressing the highest-priority security gaps [DevSecOps]. This vision includes collaborative security engineers, exploit testing, real-time security monitoring, and building “rugged software” that has built-in security controls. The workflow of DevSecOps is shown in the figure below.

[Figure: DevSecOps workflow]
3. DevDataOps

The aim behind DevDataOps is to maintain a balance between the needs of data management, which consist of providing timely and accurate information to the organization, and the need of DevOps to respond quickly to the marketplace. Supporting data management activities include the definition, support, and evolution of data and information standards and guidelines; the creation, support, evolution, and operation of data sources of record within your organization; and the creation, support, evolution, and operation of data warehouse (DW)/business intelligence (BI) solutions. The following figure depicts the workflow of DevDataOps.

[Figure: DevDataOps workflow]

Or should we just stick to the term DevOps?

Even though the message of Scott Ambler and Mark Lines is perfectly reasonable, not everybody might like the term Disciplined DevOps. It fits their framework like a glove: everything boils down to Disciplined.

If you don’t want to be framed into the Disciplined Agile/DevOps framework (pun intended), you may as well stick to the term DevOps and make sure that you cover all the aspects, including business, security, data, release management, and support.


Koen Vastmans

Blog Author

I am an IT professional who has been working at a major Belgian bank for over 26 years. I spent several years in software development, mostly in Java, from COTS software integration over web applications to digital signing. For the past 6 years I was an agile coach and trainer. I recently joined a cloud-native development team to focus on DevOps processes and organisation.

My passion for agile and DevOps is my main driver to share my ideas about these topics.


Suggested Blogs

Kubernetes vs Docker

What is Kubernetes?Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open source cluster management system initially developed by three Google employees during the summer of 2014 and grew exponentially and became the first project to get donated to the Cloud Native Computing Foundation(CNCF).It is basically an open source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes you can manage your containerized application more efficiently.Kubernetes is a HUGE project with a lot of code and functionalities. The primary responsibility of Kubernetes is container orchestration. That means making sure that all the containers that execute various workloads are scheduled to run physical or virtual machines. The containers must be packed efficiently following the constraints of the deployment environment and the cluster configuration. In addition, Kubernetes must keep an eye on all running containers and replace dead, unresponsive, or otherwise unhealthy containers.Kubernetes uses Docker to run images and manage containers. Nevertheless, K8s can use other engines, for example, rkt from the CoreOS. The platform itself can be deployed within almost any infrastructure – in the local network, server cluster, data center, any kind of cloud – public (Google Cloud, Microsoft Azure, AWS, etc.), private, hybrid, or even over the combination of these methods. It is noteworthy that Kubernetes supports the automatic placement and replication of containers over a large number of hosts. It  brings a number of features and which can be thought of as:\ As a container platformAs a microservices platformAs a portable cloud platform and a lot more.Kubernetes considers most of the operational needs for application containers. The top 10 reasons why Kubernetes is so popular are as follow:Largest Open Source project in the worldGreat Community SupportRobust Container deploymentEffective Persistent storageMulti-Cloud Support(Hybrid Cloud)Container health monitoringCompute resource managementAuto-scaling Feature SupportReal-world Use cases AvailableHigh availability by cluster federationBelow is the list of features which Kubernetes provides - Service Discovery and load balancing: Kubernetes has a feature which assigns the containers with their own IP addresses and a unique DNS name, which can be used to balance the load on them.Planning & Placement: Placement of the containers on the node is a crucial feature on which makes the decision based on the resources it requires and other restrictions.Auto Scaling: Based on the CPU usage, the vertical scaling of applications is automatically triggered using the command line.Self Repair: This is a unique feature in the Kubernetes which will restart the container automatically when it fails. If the Node dies, then containers are replaced or re-planned on the other Nodes. 
You can stop the containers if they don't respond to the health checks.Storage Orchestration: This feature of Kubernetes enables the user to mount the network storage system as a local file system.Batch execution: Kubernetes manages both batch and CI workloads along with replacing containers that fail.Deployments and Automatic Rollbacks: During the configuration changes for the application hosted on the Kubernetes, progressively monitors the health to ensure that it does not terminate all the instances at once, it makes an automatic rollback only in the case of failure.Configuration Management and Secrets: All classifies information like keys and passwords are stored under module called Secrets in Kubernetes. These Secrets are used especially while configuring the application without having to reconstruct the image.What is Docker?Docker is a lightweight containerization technology that has gained widespread popularity in the cloud and application packaging world. It is an open source framework that automates the deployment of applications in lightweight and portable containers. It uses a number of the Linux kernel’s features such as namespaces, cgroups, AppArmor profiles, and so on, to sandbox processes into configurable virtual environments. Though the concept of container virtualization isn’t new, it has been getting attention lately with bigwigs like Red Hat, Microsoft, VMware, SaltStack, IBM, HP, etc, throwing their weight behind newcomer Docker. Start-ups are betting their fortunes on Docker as well. CoreOS, Drone.io, and Shippable are some of the start-ups that are modeled to provide services based upon Docker. Red Hat has already included it as a primary supported container format for Red Hat Enterprise Linux 7.Why is Docker popular?The major factors driving Docker’s popularity are its speed, ease of use and the fact that it is largely free. In performance, it is even said to be comparable with KVM. A container-based approach, in which applications can run in isolation and without relying on a separate operating system, can really save huge amounts of hardware resources. Industry experts have started looking at it as hardware multi-tenancy for applications. Instead of having hundreds of VMs running per server, what if it were possible to have thousands of hardware-isolated applications?Docker is used to running software packages called "containers". A container is a standardized unit of software that packages up a code and all its dependencies so the application runs quickly and reliably from one computing environment to other. Containers are the “fastest growing cloud-enabling technology”* because they speed the delivery of software and cut the cost of operating it. Writing software is faster. Deploying it is easier — in your data center or your preferred cloud. And running it requires less hardware and support.Although container technology has existed for decades, Docker makes it work for the enterprise with core features enterprises require in a container platform and best-practice services to ensure success. And containers work on both legacy applications and new development.Existing, mission-critical applications can be “containerized,” often with little or no change. The result is instant savings in infrastructure, better security, and reduced labor. And new development happens faster because engineers only target a single platform instead of a variety of servers and clouds. Less code to write. Less testing. 
Faster delivery.Introduction to Docker swarm.Docker Swarm is the native clustering and scheduling tool for Docker.  It allows IT, administrators and developers, to establish and manage a cluster of Docker nodes as a single virtual system.  It is written in Go and released for the first time in November 2015 by Docker, Inc.The cluster management and orchestration features embedded in the Docker Engine are built using swarmkit. Swarmkit is a separate project which implements Docker’s orchestration layer and is used directly within Docker. It is a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.Its main benefits are:Distributed: SwarmKit uses the Raft Consensus Algorithm in order to coordinate and does not rely on a single point of failure to perform decisions.Secure: Node communication and membership within a Swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization, and transport encryption, automating both certificate issuance and rotation.Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operateCurrent versions of Docker include swarm mode for natively managing a cluster of Docker Engines called a swarm. One can use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior. All you need is to initiate it to use the latest features which comes with the Docker Engine. Docker Swarm Mode ArchitectureEvery node in Swarm Mode has a role which can be categorized as a Manager and Worker. Manager node has a responsibility to actually orchestrate the cluster, perform the health-check, running containers serving the API and so on. The worker node just executes the tasks which are actually containers. It cannot decide to schedule the containers on the different machine. It cannot change the desired state. The workers only take work and report back the status. You can enable node promotion or demotion easily through one-liner command.Managers and Workers use two different communication models. Managers have built-in RAFT system that allows them to share information for new leader election. At one time, the only manager is actually performing the scaling and they use a leader-follower model to figure out which one is supposed to be what. No External K-V store is required as a built-in internal distributed state store is available.Workers, on the other side, uses GOSSIP network protocol which is quite fast and consistent. Whenever any new container/tasks get generated in the cluster, the gossip is going to broadcast it to all the other containers in a specific overlay network that this new container has started. Please remember that ONLY the containers which are running in the specific overlay network will be communicated and NOT globally. Gossip is optimized for heavy traffic.How Docker swarm varies with Docker?Today Docker Platform support 3 variants of Swarm:Docker Swarm ( Classic)Swarmkit(a foundation for Docker Swarm Mode)Docker Swarm ModeLet us go through each one of them one by one Docker Swarm 1.0 was introduced for the first time in Docker Engine 1.9 Release during November 2015. It was a separate GITHUB repo and software which needed to be installed for turning a pool of Docker Engines into a single, virtual Engine.. It was announced as the easiest way to run Docker applications at scale on a cluster. 
You don’t have to worry about where to put containers, or how they talk to each other – it just handles all that for you.In 2016 during Dockercon, Docker Inc. announced Docker Swarm Mode for the first time. Swarm Mode came integrated directly into Docker Engine which means you don’t need to install it separately. All you need is to initiate it using `docker swarm init` command. With an optional “Swarm Mode” feature rightly integrated into core Docker Engine, native management of a cluster of Docker Engines, orchestration, decentralized design, service and application deployment, scaling, desired state reconciliation, multi-host networking, service discovery and routing mesh implementation is just a matter of few liner commands.Said that Docker Swarm mode is fundamentally different from Classic Swarm. The basic difference are listed below:Docker Swarm ModeDocker Classic SwarmDocker Swarm Mode comes integrated into Docker EngineDocker Swarm is a GITHUB repository and comes as a separate project. It is NOT integrated into Docker Engine.Comes with inbuilt Service DiscoveryNeed external KV store based on Consul & etc.Comes with inbuilt feature like: ScalingRolling UpdatesService DiscoveryLoad-Balancing Routing MeshTopological PlacementLack of inbuilt feature like Load Balancing, Scalability, Routing Mesh etc.Secured Control & Data PlaneControl Plane and Data Plane are insecureLet’s talk about Swarmkit a bit.Swarmkit is a plumbing open source project. It is a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.Its main benefits are:Distributed: SwarmKit uses the Raft Consensus Algorithm in order to coordinate and does not rely on a single point of failure to perform decisions.Secure: Node communication and membership within a Swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization, and transport encryption, automating both certificate issuance and rotation.Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate.SwarmKit is completely built in Go and leverages a standard project structure to work well with Go tooling. If you want to learn more about Swarmkit, head over to https://github.com/docker/swarmkit/How Docker can be used with Kubernetes?From 30,000 feet, Docker and Kubernetes might appear to be similar technologies. They both are an open platform which allows you to run applications within Linux containers. But as you deep-dive little closer, you’ll find that the technologies operate at different layers of the stack, and can even be used together. Let’s talk about Docker first-Docker provides the ability to package and run an application in a loosely isolated environment called a container. At their core, containers are a way of packaging software. The unique feature about container is that when you run a container, you know exactly how it will run - it’s very predictable, repeatable and immutable. You are just left with no unexpected errors when you move it to a new machine, or between environments. All of your application’s code, libraries, and dependencies are packed together in the container as an immutable artifact. You can think of running a container like running a virtual machine, without the overhead of spinning up an entire operating system. Docker CLI provides the mechanism for managing the life cycle of the containers. 
Whereas the docker image defines the build time framework of runtime containers, CLI commands are there to start, stop, restart and perform lifecycle operations on these containers. Today, containers can be orchestrated and can be made to run on multiple hosts. The questions that need to be answered are how these containers are coordinated and scheduled? And how will the application running in these containers communicate with each other? The answer is Kubernetes.Today, Kubernetes mostly uses Docker to package, instantiate, and run containerized applications. Said that there are various another container runtime available but Docker is the most popular runtime binary used by Kubernetes. Both Kubernetes and Docker build a comprehensive standard for managing the containerized applications intelligently along with providing powerful capabilities. Docker provides a platform for building running and distributing Docker containers. Docker brings up its own clustering tool which can be used for orchestration. But Kubernetes is an orchestration platform for Docker containers which is more extensive than the Docker clustering tool and has the capacity to scale to the production level. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner.  It is a plug and plays architecture for the container orchestration which provides features like high availability among the distributed nodes.For Example ~ Today it is possible to run Kubernetes under Docker EE 2.0 platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, orchestrators used, and where it’s deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps you deliver safer applications through integrated security solutions.Difference between Kubernetes and Dockeri) Kubernetes vs DockerSet up and installationKubernetesDockerIt requires a series of manual steps to setup Kubernetes Master and worker nodes components in a cluster of nodesInstalling Docker is a matter of one-liner command on Linux Platform like Debian, Ubuntu, and CentOS.Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of bare metal servers. For setting up a single node K8s cluster, one can use Minikube.To install a single-node Docker Swarm or Kubernetes cluster, one can deploy Docker for Mac & Docker for Windows.Kubernetes support for Windows server is under beta phase.Docker has official support for Windows 10 and Windows Server 2016 and 1709.Kubernetes Client and Server packages need to be upgraded manually on all the systems.It’s so easy to upgrade Docker Engine under Docker for Mac & Windows via just 1 click.Working in two systemsKubernetesDockerKubernetes operates at the application level rather than at the hardware level. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. 
If an application can run in a container, it should run great on Kubernetes.Kubernetes can run on top of Docker but requires you to know the command line interface (CLI) specifications for both to access your data over the API.There is a kubernetes client called kubectl which talks to kube API which is running on your master node.Unlike Master components that usually run on a single node (unless High Availability Setup is explicitly stated), Node components run on every node.kubelet: agent running on the node to inspect the container health and report to the master as well as listening to new commands from the kube-apiserverkube-proxy: maintains the network rulescontainer runtime: software for running the containers (e.g. Docker, rkt, runc)Docker Platform is available in the form of two editions:Docker Community EditionDocker Enterprise EditionDocker Community comes with community-based support forums whereas Docker Enterprise Edition is offered as enterprise-class support with defined SLAs and private support channels.Docker Community and Enterprise Edition both come by default with Docker Swarm Mode. Additionally, Kubernetes is supported under Docker Enterprise Edition.For Docker Swarm Mode, one can use Docker Compose file and use Docker Stack Deploy CLI to deploy an application across the cluster nodes.The `docker stack` CLI deploys a new stack or update an existing stack. The client and daemon API must both be at least 1.25 to use this command. One can use the docker version command on the client to check your client and daemon API versionsLogging and MonitoringKubernetesDockerLogging:Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. Few of popular logging tools are listed below:Fluentd is an open source data collector for a unified logging layer. It’s written in Ruby with a plug-in oriented architecture.It helps to collect, route and store different logs from different sources. While Fluentd is optimized to be easily extended using plugin architecture, fluent-bit is designed for performance. It’s compact and written in C so it can be enabled to minimalistic IOT devices and remain fast enough to transfer a huge quantity of logs. Moreover, it has built-in Kubernetes support. It’s an especially compact tool designed to transport logs from all nodes.Other tools like Stackdriver logging provided by GCP, Logz.io and other 3rd party drivers are available too.Monitoring:There are various open source tools available for Kubernetes application monitoring like:Heapster: Installed as a pod inside of Kubernetes, it gathers data and events from the containers and pods within the cluster.Prometheus: Open source Cloud Native Computing Foundation (CNCF) project that offers powerful querying capabilities, visualization and alerting.Grafana:  Used in conjunction with Heapster for visualizing data within your Kubernetes environment.InfluxDB: A highly-available database platform that stores the data captured by all the Heapster pods.CAdvisor:  focuses on container level performance and resource usage. This comes embedded directly into kubelet and should automatically discover active containers.Logging driver plugins are available in Docker 17.05 and higher. 
Logging capabilities available in Docker are exposed in the form of drivers, which is very handy since one gets to choose how and where log messages should be shippedDocker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers.Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver.In addition to using the logging drivers included with Docker, you can also implement and use logging driver plugins.To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts.The following example explicitly sets the default logging driver to syslog:{   "log-driver": "syslog" }When you start a container, you can configure it to use a different logging driver than the Docker daemon default, using the --log-driver flag. If the logging driver has configurable options, you can set them using one or more instances of the --log-opt = flag. Even if the container uses the default logging driver, it can use different configurable options.SizeKubernetes DockerAs per official page of Kubernetes documentation K8s v1.12 support clusters with up to 5000 nodes based on the below criteria:No more than 5000 nodesNo more than 150000 total podsNo more than 300000 total containersNo more than 100 pods per node.According to the Docker’s blog post on scaling Swarm clusters published during Nov 2015, Docker Swarm has been scaled and performance tested up to 30,000 containers and 1,000.SpecsDiscovery backend: Consul1,000 nodes30 containers per nodeManager: AWS m4.xlarge (4 CPUs, 16GB RAM)Nodes: AWS t2.micro (1 CPU, 1 GB RAM)Container image: Ubuntu 14.04Results Percentile  API Response Time Scheduling Delay50th     150ms              230ms90th      200ms             250ms99th      360ms             400msii) Building and Deploying Containers with DockerDocker has a capability to builds images automatically by reading the instructions via text file called Dockerfile. It is a simple text file that follows a specific format and instructions set that contains all commands, in order, needed to build a given image. A Docker image consists of read-only layers each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer. For example, below is a simple Dockerfile which Consider this Dockerfile:FROM nginx:latest COPY wrapper.sh / COPY html /usr/share/nginx/html CMD ["./wrapper.sh"]Each instruction creates one layer:FROM creates a layer from the nginx:latest Docker image.COPY adds files from your Docker client’s current directory.CMD specifies what command to run within the container.When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.Building a Docker Image$docker build -t hellowhaleThe above shown `docker build` command builds an image from a Dockerfile and a context. The build context is the set of files at a specified location PATH or URL. The PATH is a directory on your local filesystem. 
The URL is a Git repository location.Running the Docker ContainerA running Docker Image is called Docker container and all you need is to run the below command to expose port 80 on host machine from a container and get it up and running:docker run -d -p 80:80 --name hellowhale hellowhaleTagging the Image$docker tag hellowhale userid/hellowhalePushing the Docker Image to DockerHubBefore you push Docker Image to DockerHub, you need to login to DockerHub first using the below command:$docker login $docker push userid/hellowhaleiii) Managing container with KubernetesDocker CLI for a standalone system is used to build, ship and run your Docker containers. But if you want to run multiple containers across multiple machines, you need a robust orchestration tool and Kubernetes is the most popular in the list.Kubernetes is an open source container orchestration platform, allowing large numbers of containers to work together in harmony, reducing operational burden. It helps with things like running containers across many different machines, scaling up or down by adding or removing containers when demand changes, keeping storage consistent with multiple instances of an application, distributing load between the containers and launching new containers on different machines if something fails.Below are the list of comparative CLI used by Docker Vs Kubernetes to manage containers:Docker CLIKubernetes CLIdocker runTo run an nginx container -$ docker run -d --restart=always --name nginx-app -p 80:80 nginxkubectl runTo run an nginx Deployment and expose the Deployment, see kubectl run.$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"docker psTo list what is currently running, see kubectl get.docker:$ docker ps -akubectl getTo list what is currently running under kubernetes cluster:$ kubectl get po -adocker execTo execute a command in a  Docker container:$ docker psCONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES55c103fa1296        nginx               "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp   nginx-app$ docker exec 55c103fa1296 cat /etc/hostnamekubectl:To execute a command in a container, see kubectl exec.$ kubectl get poNAME              READY     STATUS    RESTARTS   AGEnginx-app-5jyvm   1/1       Running   0          10m$ kubectl exec nginx-app-5jyvm -- cat /etc/hostnamenginx-app-5jyvmiv) Trends in Docker and KubernetesDocker, Inc has around 550+ enterprise customer who uses Docker in a production environment. Few of non-exhaustive list of companies who actively uses Docker are list below:The New York TimesPayPalBusiness InsiderCornell University (Not a company but still can be considered)SplunkThe Washington PostSwisscomAlm BrandAssa AbloyExpediaJabilMetLifeSociete GeneraleGEGrouponYandexUberEbayShopifySpotifyNew RelicYelpRecently, the Forrester New Wave™: Enterprise Container Platform Software Suites, Q4 2018 report states that Docker leading the pack with a robust container platform well-suited for the enterprise, offering a secure container supply chain from the developer's desktop to production.Lots of organizations are already using Kubernetes in production—like the ones listed on the Kubernetes case studies page, including eBay, Buffer, Pearson, Box, and Wikimedia. But that is not a complete list. Kubernetes is even more versatile than the official case studies page suggests. 
Below is a list of companies using it:List of Kubernetes Users   Microservices UsageMicroservices help developers break up monolithic applications into smaller components. They can move away from all-at-once massive package deployments and break up apps into smaller, individual units that can be deployed separately. Smaller microservices can give apps more scalability, more resiliency and - most importantly - they can be updated, changed and redeployed faster. Some of the biggest public cloud applications run as microservices already.Containers are a packaging strategy for microservices. Think of them more as process containers than virtual machines. They run as a process inside a shared operating system. A container typically only does one small job - validate a login or return a search result. Docker is a tool that describes those packages in a common format, and helps launch and run them. Linux containers have been around for a while, but their popularity in the public cloud has given rise to an exciting new ecosystem of companies building tools to make them easier to use, cluster and orchestrate them, run them in more places, and manage their life cycles.Over the last two years, many different types of software vendors - from operating system to IT infrastructure companies - have all joined the container ecosystem. There’s already an industry organization - the open container initiative - guiding the market and making sure everyone plays well together. IBM, HP, Microsoft, VMware, Google, Red Hat, CoreOS - these are just some of the major vendors racing to make containers as easy as possible for developers to use, to share, to protect, and to scale.The rising demand for multi-cloud environmentsWith an estimated 85% of today’s enterprise IT organizations employing a multi-cloud strategy, it has become more critical that customers have a ‘single pane of glass’ for managing their entire application portfolio. Most enterprise organizations have a hybrid and multi-cloud strategy. Containers have helped to make applications portable but let us accept the fact that even though containers are portable today but the management of containers is still a nightmare. The reason being –Each Cloud is managed under a separate operational model, duplicating effortsDifferent security and access policies across each platformContent is hard to distribute and trackPoor Infrastructure utilization still remainsThe emergence of Cloud-hosted K8s is exacerbating the challenges with managing containerized applications across multiple CloudsThis time Docker introduced new application management capabilities for Docker Enterprise Edition that will allow organizations to federate applications across Docker Enterprise Edition environments deployed on-premises and in the cloud as well as across cloud-hosted Kubernetes. This includes Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE). The federated application management feature will automate the management and security of container applications on premises and across Kubernetes-based cloud services. It will provide a single management platform to enterprises so that they can centrally control and secure the software supply chain for all the containerized applications.With this announcement, undoubtedly Docker Enterprise Edition is the only enterprise-ready container platform that can deliver federated application management with a secure supply chain. 
Not only does Docker give you your choice of Linux distribution or Windows Server, the choice of running in a virtual machine or on bare metal, running traditional or microservices applications with either Swarm or Kubernetes orchestration, it also gives you the flexibility to choose the right cloud for your needs.Talking about Kubernetes Platform, version 1.3 of container management platform KubernetesIntroduced cross-cluster federated services with an ability to span workloads across clusters and, by extension, across multiple clouds. This opens up the possibility for workloads that need to draw resources from multiple clouds. This would also mean that large jobs can be split among clouds. Not only this, this introduced an ability to automatically scale services to match demand. Increasing support for Docker and KubernetesKubernetes has been enjoying widespread adoption among startups, platform vendors, and enterprises. Companies like Amazon, Google, IBM, Red Hat, and Microsoft offer managed Kubernetes under the Containers as a Service (CaaS) model. The open source ecosystem has dozens of players building various tools covering logging, monitoring, automation, storage, and networking aspects of Kubernetes. System integrators have dedicated practices and offerings based on Kubernetes. Global players like Uber, Bloomberg, Blackrock, BlaBlaCar, The New York Times, Lyft, eBay, Buffer, Squarespace, Ancestry, GolfNow, Goldman Sachs and many others are using Kubernetes in production at massive scale. According to Redmonk, a developer-focused research company, 71 percent of the Fortune 100 use containers and more than 50 percent of Fortune 100 companies use Kubernetes as their container orchestration platform.Did you know there are 35 certified Kubernetes distribution, 22 certified Kubernetes hosting platform and 50 certified Kubernetes service provider available? Over the last three years, Kubernetes has been adopted by a vibrant, diverse community of providers. The Cloud Native Computing Foundation® (CNCF®), which sustains and integrates open source technologies like Kubernetes® , today announced the availability of the Certified Kubernetes Conformance Program, which ensures Certified Kubernetes™ products deliver consistency and portability, and that 35 Certified Kubernetes Distributions and Platforms are now available. A Certified Kubernetes product guarantees that the complete Kubernetes API functions as specified, so users can rely on a seamless, stable experience.In the other hand, Docker Enterprise Edition (EE) 2.0 represents a significant leap forward in container platform solutions, delivering the only solution that manages and secures applications on Kubernetes in multi-Linux, multi-OS, and multi-cloud customer environments. One of the most promising features announced with this release includes Kubernetes integration as an optional orchestration solution, running side-by-side with Docker Swarm. Not only this, this release includes Swarm Layer 7 routing improvements, Registry image mirroring, Kubernetes integration to Docker Trusted Registry & Kubernetes integration to Docker EE access controls. 
With this new release, organizations will be able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining the consistent developer-to-IT workflow.Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service(CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and Cloud providers. It is tightly integrated into the underlying infrastructure to provide a native, easy to install experience and an optimized Docker environment.V) Kubernetes vs Docker swarmInstallation & Cluster configurationGUIScalabilityAuto-ScalingLoad BalancingRolling Updates & RollbacksData VolumesLogging & MonitoringKubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It was built by Google based on their experience running containers in production using an internal cluster management system called Borg (sometimes referred to as Omega). On the other hand, a Swarm cluster consists of Docker Engine deployed on multiple nodes. Manager nodes perform orchestration and cluster management. Worker nodes receive and execute tasks Below is the major list of differences between Docker Swarm & Kubernetes:Docker SwarmKubernetesApplications are deployed in the form of services (or “microservices”) in a Swarm cluster. Docker Compose is a tool which is majorly used to deploy the app.Applications are deployed in the form of a combination of pods, deployments, and services (or “microservices”).Autoscaling feature is not available either in  Docker Swarm (Classical) or Docker SwarmaAn auto-scaling feature is available under K8s. It uses a simple number-of-pods target which is defined declaratively using deployments. CPU-utilization-per-pod target is available.  Docker Swarm supports rolling updates features. At rollout time, you can apply rolling updates to services. The Swarm manager lets you control the delay between service deployment to different sets of nodes, thereby updating only 1 task at a time.Under kubernetes, the deployment controller supports both “rolling-update” and “recreate” strategies. Rolling updates can specify a maximum number of pods unavailable or maximum number running during the process.Under Docker Swarm Mode, the node joining a Docker Swarm cluster creates an overlay network for services that span all of the hosts in the Swarm and a host-only Docker bridge network for container.By default, nodes in the Swarm cluster encrypt overlay control and management traffic between themselves. Users can choose to encrypt container data traffic when creating an overlay network by themselves.Under K8s, the networking model is a flat network, enabling all pods to communicate with one another. Network policies specify how pods communicate with each other. The flat network is typically implemented as an overlay.Docker Swarm health checks are limited to services. 
- Health checks. Docker Swarm health checks are limited to services: if a container backing a service does not come up (reach the running state), a new container is kicked off. Users can also embed health-check functionality into their Docker images using the HEALTHCHECK instruction. Under Kubernetes, health checks are of two kinds: liveness (is the app responsive?) and readiness (is the app responsive but still busy preparing, and not yet able to serve?).
- Logging and monitoring. Out of the box, Kubernetes provides a basic logging mechanism to pull aggregate logs for the set of containers that make up a pod.
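To tie the rolling-update and health-check rows together, here is a minimal sketch, again using the official kubernetes Python client, that creates a Deployment with a RollingUpdate strategy plus liveness and readiness probes. The image name, probe paths, and replica counts are illustrative assumptions.

    from kubernetes import client, config

    config.load_kube_config()  # assumes kubectl access to a cluster

    container = client.V1Container(
        name="web",
        image="example/web:1.0",  # hypothetical image exposing /healthz and /ready
        ports=[client.V1ContainerPort(container_port=8080)],
        # Liveness: restart the container if the app stops responding.
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=5, period_seconds=10),
        # Readiness: only route traffic once the app is ready to serve.
        readiness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/ready", port=8080),
            initial_delay_seconds=5, period_seconds=5),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            # Roll pods over gradually: at most 1 extra pod, at most 1 unavailable.
            strategy=client.V1DeploymentStrategy(
                type="RollingUpdate",
                rolling_update=client.V1RollingUpdateDeployment(
                    max_surge=1, max_unavailable=1)),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container])),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Pushing a new image tag and patching this deployment then triggers exactly the gradual rollout described in the comparison above, with the readiness probe gating traffic to each new pod.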
Can DevOps and Scaled Agile Models Coexist?

DevOps is a blend of software DEVelopment and IT OPerationS. The term refers to a set of practices that emphasize the collaboration and communication of software developers and IT operations people, while automating the software delivery process and infrastructure changes. The aim behind DevOps is to provide an environment that helps in coding, testing, and deploying software rapidly and more reliably.

Agile and DevOps are related concepts. Agile represents a change in how we imagine and practice organizational change, whereas DevOps places more importance on implementing those organizational changes to achieve its goals. The increasing popularity of agile software development has escalated the need for DevOps, leading to closer cooperation in software delivery as the number of releases grows. When DevOps practices adopt the agile methodology, they contribute to increased organizational profit; ultimately, it is most beneficial when Agile and DevOps go hand in hand to maximize the outcome for the organization. However, everyone should be aware that software is first created and prioritized by developers, while the success of its implementation depends entirely on how the software is used.

Let's have a look at some well-known models:

SAFe (Scaled Agile Framework). The Scaled Agile Framework is an agile software framework designed by Scaled Agile, Inc. It consists of a knowledge base of combined patterns forming enterprise-scale, lean-agile development. SAFe aims at the collaboration of software and systems development and delivery across an increasing number of agile teams, and it is mainly about helping customers solve their most challenging scaling problems. SAFe draws on three primary bodies of knowledge to maximize the advantage: agile development, lean product development, and systems thinking. You could say there are two relevant teams in SAFe: the System team and the DevOps team. The System team is more centered around development-side activities like continuous integration and test automation, while the DevOps team focuses on the features needed for deployment and the automation of that process.

DAD (Disciplined Agile Delivery). The focus of DAD is more on the processes. DAD is a process decision framework that enables decisions on incremental and iterative solution delivery. DAD has been described as the "movement beyond Scrum" in the book written by Scott Ambler and Mark Lines. The DAD framework provides a mechanism for constructing smoothly working IT processes and helps with scaling.

LeSS (Large-Scale Scrum). In LeSS, DevOps is not called out in any detailed manner, but it is covered by Technical Excellence: the practice of making changes to the product in a simple, rapid, and flexible way, also called technical agility. LeSS has a 'less' explicit focus on describing how and when to use each process, so it is up to you to assign the right work to the right person at the right time.

It is therefore highly recommended that organizations combine the explicit structure of SAFe principles with the ideas of LeSS to create their framework. This will definitely create more profit in your organization.
A Complete Guide to DevOps

1. History of DevOps

DevOps was first pitched at a conference in Toronto in 2008 by Patrick Debois and Andrew Shafer, who proposed that a better approach could be adopted to resolve the conflicts we had between development and operations teams. The idea boomed again when two Flickr employees delivered a seminar on how they were able to do more than ten deployments in a day. They presented a proven model for resolving the Dev-versus-Ops conflict: build, test, and deploy as one integrated development and operations process.

2. Definition of DevOps

DevOps is a practice culture with a union of people, processes, and tools that enables faster delivery. This culture helps automate processes and narrow the gaps between development and IT operations. DevOps is an abstract notion that focuses on key principles like speed, rapid delivery, scalability, security, collaboration, and monitoring. A definition from Gartner reads: "DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology, especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective."

3. How DevOps works with a toolchain

4. Benefits of DevOps

Some benefits we can see after adopting DevOps:

a) Horizontal and vertical growth: By "horizontal and vertical growth" I mean plotting customer satisfaction and business value against time. What drives growth on both axes is the quick turnaround time for minor and major issues: once we adopt DevOps, we scale and build in such a fashion that the growth curve shows a rapid jump in less time.

b) Scalability and quality: When a business starts reaching more users, we look at increasing infrastructure and bandwidth. That raises two questions: are we scaling our infrastructure the right way, and if many people are pushing changes (code commits and builds), do those changes still have the same or better quality than before? Both questions are answered by the DevOps team. If the business pitches that it may soon onboard 2,000+ clients generating billions of requests, DevOps takes the responsibility and says yes, we can scale the infrastructure at any point in time; and if at the same time the internal release team says it wants to deliver ten features independently in the next ten days, DevOps says quality can be maintained.

c) Agility and velocity: The key parameter in adopting DevOps is improving the velocity of product development. DevOps enables agility, and when both are in sync we can easily observe the velocity. The expectations of end users are always high while the delivery time span is short; to achieve this, we have to ensure that we can roll out new features to customers at a much higher frequency, otherwise competitors may win the market.

d) Improving the ROI of data: Having DevOps in an organization ensures that we can get a decent return on investment from data at an early stage. The software industry today runs on data, so a team should have end-to-end control over its data.
Defined well, DevOps helps a team crunch data in various ways by automating small jobs; with automation we can segregate and qualify data, and surface it either on a dashboard or offline to a customer.

5. How DevOps enables CI/CD (Continuous Integration / Continuous Delivery)

When we say "continuous," it doesn't translate to "always running," but it does mean "always ready to run."

Continuous integration is a development philosophy and set of practices that drive teams to check code into a version control system as often as possible. To keep your build clean and QA-ready, developers' changes need to be validated by running automated tests against the build (for example with JUnit or iTest). The goal of CI is to put a consistent, automated way to build and test applications in place, which results in better collaboration between teams and, eventually, a better-quality product.

Continuous delivery is an adjunct of CI which makes sure we can release new changes to customers quickly and in a sustainable way. A typical CD flow involves the following steps:

- Pull code from a version control system such as Bitbucket and execute the build.
- Execute any required infrastructure steps, via command line or script, to stand up or tear down cloud infrastructure.
- Move the build to the right compute environment.
- Handle all of the configuration generation process.
- Push application components to their appropriate services, such as web servers, API services, and database services.
- Execute any steps required to restart services or call the service endpoints needed for new code pushes.
- Execute continuous tests, and roll back environments if tests fail.
- Provide log data and alerts on the state of the delivery.

The table below summarizes the effort we need to put in and the gain from putting CI/CD in place:

Continuous integration
- Effort required: (a) your team will need to write automated tests for each new feature, improvement, or bug fix; (b) you need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed; (c) developers need to merge their changes as often as possible, at least once a day.
- Gain: (a) regressions are captured in the early stages by automated testing; (b) less context switching, as developers are alerted as soon as they break the build and can work on fixing it before moving to another task; (c) building the release is easy, as all integration issues have been solved early; (d) testing costs are reduced drastically, since a CI server can run hundreds of tests in a matter of seconds; (e) your QA team spends less time testing and can focus on significant improvements to the quality culture.

Continuous delivery
- Effort required: (a) you need a strong foundation in continuous integration, and your test suite needs to cover enough of your codebase; (b) deployments need to be automated: the trigger is still manual, but once a deployment is started there shouldn't be any need for human intervention; (c) your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production.
- Gain: (a) the complexity of deploying software is taken away, so your team doesn't have to spend days preparing for a release anymore; (b) you can release more often, thus accelerating the feedback loop with your customers; (c) there is much less pressure on decisions for small changes, which encourages iterating faster.
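To make the "stop the line" behaviour of CI concrete, here is a minimal, hedged sketch of a CI gate script in Python. The pytest test suite and the docker build step are illustrative assumptions about the project, not requirements of CI itself:

    import subprocess
    import sys

    def run(step_name, cmd):
        """Run one pipeline step; abort the whole build if it fails."""
        print(f"--- {step_name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{step_name} failed; stopping the line.")
            sys.exit(result.returncode)

    # Every commit goes through the same automated gates.
    run("unit tests", ["pytest", "-q"])
    run("image build", ["docker", "build", "-t", "example/web:ci", "."])  # hypothetical image tag
    print("Build is green; ready for the delivery stage.")

A real CI server (Jenkins, for instance, as described below) would run an equivalent script on every push and surface the non-zero exit code as a red build.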
Sample flow of a CI/CD plan (diagram).

6. DevOps tools

Build: Jenkins. Jenkins is an open source automation server used to automate software builds and to deliver or deploy them. It can be installed through native system packages or Docker, or even run standalone on any machine with a Java Runtime Environment (JRE) installed. In short, Jenkins enables the continuous integration that helps accelerate development. Ample plugins are available that provide integrations for the various DevOps stages, for example Git, Maven 2 projects, Amazon EC2, and HTML Publisher.

Alerting: Pingdom. Pingdom is a platform that monitors the availability, performance, and transactions (website hyperlinks) of your websites, servers, or web applications, and collects incidents. The beauty is that if you are using a collaboration tool like Slack or Flock, you can integrate it using a webhook (simple, no code required) and get notified at any time. Pingdom also provides an API, so you can build your own customized dashboard (recently introduced), and the documentation is detailed and self-explanatory.

Nagios. Nagios is an open source monitoring tool for computer networks. We can monitor servers, applications, incident managers, and so on, and configure email, SMS, and Slack notifications, and even phone calls. Nagios is licensed under GNU GPLv2. Among the major components that can be monitored with Nagios: once installed, it provides a dashboard to monitor network services like SMTP, HTTP, SNMP, FTP, SSH, and POP, with views of the current network status, problem history, log files, and notifications triggered by the system; it can also monitor server resources like disk drives, memory, processor, server load, and system logs.

Stackdriver. Stackdriver is a monitoring tool that gives visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver Monitoring collects events and metadata from Google Cloud Platform and Amazon Web Services (AWS), consumes the data, and generates insights via dashboards, charts, and alerts. For alerting, it integrates with collaboration tools like Slack, PagerDuty, HipChat, Campfire, and more.

Here is one sample log showing the parameters Stackdriver collects, grouped so it is easier to see what it actually logs: log information; user details and authorization info; request type and caller IP; resource and operation details; timestamp and status details.

    {
      insertId:
      logName:
      operation: {
        first:
        id:
        producer:
      }
      protoPayload: {
        @type:
        authenticationInfo: {
          principalEmail:
        }
        authorizationInfo: [
          0: {
            granted:
            permission:
          }
        ]
        methodName:
        request: {
          @type:
        }
        requestMetadata: {
          callerIp:
          callerSuppliedUserAgent:
        }
        resourceName:
        response: {
          @type:
          Id:
          insertTime:
          name:
          operationType:
          progress:
          selfLink:
          status:
          targetId:
          targetLink:
          user:
          zone:
        }
        serviceName:
      }
      receiveTimestamp:
      resource: {
        labels: {
          instance_id:
          project_id:
          zone:
        }
        type:
      }
      severity:
      timestamp:
    }
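All three alerting tools above can push into chat via webhooks. As a minimal sketch of that pattern, here is a Python snippet that posts an incident message to a Slack incoming webhook; the webhook URL, check name, and message format are placeholder assumptions:

    import requests

    # Hypothetical incoming-webhook URL; Slack issues the real one per channel.
    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

    def notify(check_name, status, details):
        """Post a one-line incident notification to the channel."""
        payload = {"text": f"ALERT: {check_name} is {status}: {details}"}
        response = requests.post(SLACK_WEBHOOK_URL, json=payload, timeout=10)
        response.raise_for_status()  # surface HTTP errors instead of failing silently

    notify("site-home", "DOWN", "HTTP 503 from 3 probe locations")

This is essentially what the built-in Pingdom-to-Slack integration does for you without any code.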
Monitoring: Grafana. Grafana is an open source visualization tool that can be used on top of different data stores such as InfluxDB, Elasticsearch, and Logz.io. We can create comprehensive charts with smart axis formats (such as lines and points) thanks to Grafana's fast, client-side rendering, even over long ranges of time, using Flot as the default renderer. There are three levels of access, Viewer, Editor, and Admin, and Google OAuth can be enabled for finer access control. A detailed information guide can be found here.

ELK.

Elasticsearch: an open source, real-time distributed, RESTful search and analytics engine. It collects unstructured data and stores it in a format optimized and available for language-based search. The beauty of Elasticsearch is its scalability, speed, document orientation, and schema-free design; it scales horizontally to handle vast numbers of events per second while automatically managing how indices and queries are distributed across the cluster for smooth operation.

Logstash: Logstash is used to gather log messages, convert them into JSON documents, and store them in an Elasticsearch cluster.

Kibana: Kibana is the visualization layer of the stack, a browser-based UI for searching, viewing, and charting the data stored in Elasticsearch.
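As a minimal sketch of the Logstash-to-Elasticsearch flow, here is how a log event can be indexed and queried directly with the official elasticsearch Python client (8.x style); the index name, document fields, and local cluster address are illustrative assumptions:

    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumes a local single-node cluster

    # Index one structured log event (Logstash would normally do this for you).
    es.index(index="app-logs", document={
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "ERROR",
        "service": "checkout",
        "message": "payment gateway timed out",
    })

    # Make the new document searchable immediately (normally happens within ~1s).
    es.indices.refresh(index="app-logs")

    # Full-text search over the message field.
    hits = es.search(index="app-logs", query={"match": {"message": "timed out"}})
    for hit in hits["hits"]["hits"]:
        print(hit["_source"]["level"], hit["_source"]["message"])

Kibana then sits on top of exactly these indices, turning the same queries into dashboards.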
reOptimize.io. Once we run a large fleet of servers, we usually end up burning a good amount of money, not intentionally, but for lack of clear visibility. reOptimize helps here by providing detailed insight into cloud expenses. The integration can be done in three or four simple steps, but before that you might want to look into the access it requests, which can be reviewed here. Just a heads-up: it only needs read access, and the permission docs can be found here.

7. DevOps vs Agile

- DevOps: a culture enabled in the software industry to deliver reliable builds. Agile: a generic culture that can be deployed in any department.
- DevOps: the key focus area is involvement across the end-to-end process. Agile: helps the management team push frequent releases.
- DevOps: enables quality builds with rapid delivery. Agile: keeps the team aware of frequent changes in releases and features.
- DevOps: has no fixed schedule matrix; it works to avoid unscheduled disruptions. Agile: sprints work within the immediate future, with a sprint lifecycle varying between 7 and 30 days.
- DevOps: works on collaboration and bridges a large set of teams. Agile: team size can be minimal, even a single person.

8. What is a DevOps engineer, and what are the roles and responsibilities?

Without being too specific: when DevOps engineers build or code, they keep the business logic in mind, and they ask not only "why this?" but also "why not that?". They add value when they communicate infrastructure choices that help the business scale. The DevOps team should own the collection of data metrics, which should be business-driven, while the automation pipeline is dev-driven. With demand for the cloud at its peak, they should also have control over scalability. DevOps engineers are expert collaborators and are often known as mentors to software developers, helping them build scalable architectures. At the same time, they join the IT and security teams to ensure that enough has been done to ship quality code and avoid data breaches.

9. DevOps engineer salary

As per the current market trend, salary doesn't seem to be a bar for the right candidate. The basic expectation from companies is that the candidate is a good learner who can apply that expertise to solve real-time problems; to command a good number, you must be able to build an automated DevOps pipeline.

Country and average salary (0-5 years of experience):
- US: $51,640 - $133,378
- India: $4,144 - $21,184
- UK: $73,367 - $96,845
- SV: $41,000 - $210,000

The table above reflects the numbers we currently hear from candidates, companies, and the analytical firms that participate in hiring. Nowadays there is demand for DevOps engineers in almost every organization. Startups often beat MNCs on salary, though they also beat them on workload, which I believe is a good sign for growth in both your skills and your numbers.

Useful DevOps links:
- What is DevOps? (Credit: Sam Guckenheimer, Azure)
- Introduction to CI/CD (Credit: Justin Ellingwood, DigitalOcean)
- How to autoscale on GCP (Credit: RK Kuppala, Searce)
- Open source tools for DevOps (Credit: Tomer Levy, Logz.io)
- DevOps success story (Credit: Ben Putano, Stackify)
- Netflix load balancing system (Credit: Netflix)
- DevOps by VMware (Credit: VMware)