DevOps Roadmap to Become a Successful DevOps Engineer


“DevOps is a combination of best practices, culture, mindset, and software tools to deliver a high-quality and reliable product faster.”

Benefits of DevOps (Dev + Ops: developers plus sysadmins and database admins)

DevOps agile thinking drives an iterative, continuous development model with higher velocity, reduced variation and better global visualization of the product flow. These three “V”s are achieved by synchronizing the teams and implementing CI/CD pipelines that automate the repetitive and complex SDLC processes: continuous integration of code, continuous testing, and continuous delivery of features to a production-like environment. The result is a high-quality product with shorter release cycles and reduced cost, which ensures customer satisfaction and credibility.

A streamlined process, built on best practices and DevOps tools, reduces overhead and downtime, leaving more room for innovation. Moreover, the DevOps practice of defining every phase (coding, testing, infrastructure provisioning, deployment, and monitoring) as code makes it easier to roll back to a known version during disaster recovery, and makes the environment easily scalable, portable and secure.

“DevOps tools help you accomplish what you can already do but do not have time to do it.”

1. What are the tasks of a DevOps Engineer?

A summary of the day-to-day tasks carried out by a DevOps engineer:

  • Design, build, test and deploy scalable, distributed systems from development through production
  • Manage the code repository (such as Git, SVN, Bitbucket, etc.), including code merging and integration, branching, maintenance and remote repository management
  • Manage, configure and maintain infrastructure systems
  • Design the database architecture and database objects, and synchronize the various environments
  • Design, implement and support DevOps Continuous Integration and Continuous Delivery pipelines
  • Research and implement new technologies and practices
  • Document processes, systems, and workflows
  • Create and enhance dynamic monitoring and alerting solutions using industry-leading services
  • Continuously analyse tasks that are performed manually and could be replaced by code
  • Create and enhance Continuous Deployment automation built on Docker and Kubernetes.

2. Who can become a DevOps Engineer?

DevOps is a vast environment that fits almost all technologies and processes into it. For instance, you could come from a coding or testing background, or be a system administrator, a database administrator, or part of an operations team; there is a role for everyone to play in a DevOps approach.

You are ready to become a DevOps Engineer if you have the knowledge and/or expertise below:

  • A Bachelor’s or Master’s degree (preferably in Computer Science, IT, Engineering, Mathematics, or similar)
  • A minimum of 2 years of IT experience as a software developer, with a good understanding of the SDLC and lean agile methodology (Scrum)
  • Strong background in Linux/Unix & Windows Administration
  • System development in an Object-oriented or functional programming language such as Python / Ruby / Java / Perl / Shell scripting / Groovy or Go
  • System-level understanding of Linux (RedHat, CentOS, Ubuntu, SUSE Linux), Unix (Solaris, Mac OS) and Windows Servers
  • Shell scripting and automation of routines, remote execution of scripts
  • Database management experience in MongoDB, Oracle or MySQL
  • Strong SQL and PL/SQL scripting
  • Experience working with source code version control management like Git, GitLab, GitHub or Subversion
  • Experience with cloud architectures, particularly Amazon Web Services (AWS), Google Cloud Platform or Microsoft Azure
  • Good understanding of containerization using Docker and/or Kubernetes
  • Experience with CI/CD pipelines using Jenkins and GitLab
  • Knowledge of data-centre management, systems management, and monitoring, networking & security
  • Experience in Automation/configuration management using Ansible, and/or Puppet and/or Chef
  • Know how to monitor your systems and code using monitoring tools such as Nagios or Prometheus
  • Background in Infrastructure and Networking
  • Extensive knowledge about RESTful APIs
  • A solid understanding of networking and core Internet protocols (e.g. TCP/IP, DNS, SMTP, HTTP, and distributed networks)
  • Excellent written and verbal English communication skills
  • Self-learner, team player, willingness to learn new technologies and the ability to resolve issues independently and deliver results.

3. Roadmap to becoming a DevOps Engineer

3.1 Learn a programming language


A programming language enables you to interact with and manage system resources such as the kernel, device drivers, memory and I/O devices, and, of course, to write software.

A well-written piece of code is more versatile, portable, error-resistant, scalable and optimized; it will enhance your DevOps cycle, letting you be more productive and deliver a high-quality product.

As a DevOps Engineer, you will use many tools and plugins in a CI/CD pipeline, and you will be at your best if you have a good grip on some of the popular programming languages:

1. Java: An object-oriented, general-purpose programming language. Its goal, “Write once, run anywhere”, echoes the Docker (or containerization) philosophy

2. C: A general-purpose, procedural programming language that supports structured programming

3. C#: A general-purpose, multi-paradigm object-oriented programming (OOP) language

4. Python: An easy-to-learn, interpreted, high-level and powerful programming language with an object-oriented approach and a very clear syntax. Ideal for infrastructure programming and web development

5. Ruby: An open-source, dynamic OOP programming language with an elegant and easy syntax that supports multiple paradigms.

As you know, DevOps places a major emphasis on automating repetitive and error-prone tasks.

You ought to know any of the popular scripting languages:

6. Perl: A highly capable scripting language, with syntax very similar to C

7. Bash shell script: A powerful set of instructions in a single shell script file to automate repetitive and complex commands

8. JavaScript: An interpreted scripting language to build websites

9. PowerShell: A cross-platform automation and configuration framework that deals with structured data, REST APIs and object models, and includes a command-line shell.

Good-to-know language:

10. Go: An open-source programming language developed by Google, used to build simple, reliable and efficient software
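
Since much day-to-day DevOps scripting boils down to automating small routine checks, here is a minimal sketch in Python of the kind of task these languages are used for; the 80% threshold and the root mount point are arbitrary choices for illustration:

    #!/usr/bin/env python3
    """Toy automation script: warn when disk usage crosses a threshold."""
    import shutil

    THRESHOLD = 0.8  # hypothetical limit: warn when a mount is over 80% full

    def check_disk(path: str = "/") -> None:
        usage = shutil.disk_usage(path)          # named tuple: total, used, free (bytes)
        used_fraction = usage.used / usage.total
        status = "WARN" if used_fraction > THRESHOLD else "OK"
        print(f"{status}: {path} is {used_fraction:.0%} full")

    if __name__ == "__main__":
        check_disk("/")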

3.2 Understand different OS concepts

As a software developer, you must be able to write code that interacts with machine resources, and you must have a sound understanding of the underlying OS you are dealing with. Knowing OS concepts will help you be more productive in your programming.

This gives you the ability to make your code faster, manage processes, interact with input/output devices, communicate with other operating systems, and optimize your program’s processing, memory and disk usage.

As a DevOps engineer in an infrastructure role, setting up and managing servers, controllers and switches becomes easier if you understand resources, processes, and virtualization concepts well.

To administer users and groups, file permissions and security, you must know the filesystem architecture.
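
For example, file permissions can be inspected and changed programmatically. A small Python sketch (the file name app.conf is hypothetical):

    import os
    import stat

    path = "app.conf"                    # hypothetical config file
    open(path, "a").close()              # make sure it exists for the demo
    print("before:", stat.filemode(os.stat(path).st_mode))   # e.g. -rw-r--r--
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)              # 0o600: owner read/write only
    print("after: ", stat.filemode(os.stat(path).st_mode))   # -rw-------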

Essential OS concepts a DevOps engineer must know include:

I. Kernel management

The kernel is the core element of any OS. It connects the system hardware with the software and is responsible for memory, storage, and process management

II. Memory Management

Memory management is the allocation/deallocation of system memory (RAM, cache, pages) to various system resources, optimizing the overall performance of the system

III. Device drivers management

A device driver is a software program that controls the hardware device of the machine

IV. Resource management

The dynamic allocation/deallocation of system resources such as kernel, CPU, memory, disk and so on

V. I/O management

Communication between the various input/output devices connected to the machine, such as keyboard, mouse, disk, USB, monitor and printers

VI. Processes and process management

Every program that executes a certain task is called a process, and each process utilizes a certain amount of computational resources. The technique of managing various processes so that they share memory, disk and CPU (processing) load, along with inter-process communication, is termed process management
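
As a small illustration of working with processes from code, the following Python snippet spawns a child process and inspects how it exited (it assumes a Unix-like system where the echo command exists):

    import subprocess

    # Spawn a child process, capture its output and check its exit status.
    result = subprocess.run(["echo", "hello from a child process"],
                            capture_output=True, text=True)
    print("exit code:", result.returncode)   # 0 means success
    print("stdout  :", result.stdout.strip())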

VII. Threads and concurrency

Many programming languages support multi-threading and concurrency, i.e., the ability to run multiple tasks simultaneously
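
A minimal Python illustration of concurrency: four I/O-like tasks shared between two worker threads (the sleep stands in for real I/O such as a network call):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def task(n: int) -> str:
        time.sleep(0.5)                  # stand-in for I/O work (network, disk)
        return f"task {n} done"

    # Four tasks run concurrently on two worker threads.
    with ThreadPoolExecutor(max_workers=2) as pool:
        for message in pool.map(task, range(4)):
            print(message)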

VIII. Virtualization and containerization

The concept of simulating a single physical machine as multiple virtual machines/environments, to optimize the use of resources and to reduce time and cost. Understand this well, as you will often need to replicate the real-time environment.

Linux containers are a great way to isolate and package an application along with its run-time environment as a single entity.

The run-time environment includes all the application’s dependencies, binaries, configuration files and libraries. Docker is a command-line tool that makes it easier to create, run and deploy applications with containers.

Using virtual machines and Docker containers together can yield even better virtualization results.
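
To get a feel for the container workflow, the snippet below drives the Docker CLI from Python; it assumes Docker is installed locally and uses the small public hello-world image:

    import subprocess

    # Run the tiny "hello-world" image and remove the container when it exits.
    subprocess.run(["docker", "run", "--rm", "hello-world"], check=True)

    # List the containers still running (ID and image name only).
    subprocess.run(["docker", "ps", "--format", "{{.ID}} {{.Image}}"], check=True)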

IX. Distributed file systems

A client machine can access data located on a server machine, as in a client/server-based application model.

X. Filesystem architecture

The architectural layout of how, and in what hierarchy, data is organized on a disk; knowing it will make your task of managing data easier.

3.3 Learn about managing servers

As cloud deployments become more widespread with the DevOps approach, there is a need to manage groups of servers (application, database, web, storage, infrastructure, networking and so on) rather than individual servers.

You should be able to scale servers up or down dynamically, without rewriting the configuration files.

Nginx: A web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. It provides robust and customizable monitoring of your cloud instances and their status, and its flexibility and configurability make it easy to configure and automate with DevOps tools like Puppet and Chef.
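
One recurring server-management chore is checking that a web server such as Nginx is up. A minimal sketch, assuming a hypothetical health endpoint on localhost:

    import urllib.request

    URL = "http://localhost:8080/health"   # hypothetical status endpoint

    try:
        with urllib.request.urlopen(URL, timeout=3) as resp:
            print("server healthy, HTTP", resp.status)
    except OSError as err:
        print("server check failed:", err)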

3.4 Networking and Security

In a highly connected network of computers, it becomes essential to understand the basic concepts of networking, how to enforce security and diagnose problems.

As a DevOps engineer, you may also be required to set up an environment to test networking functions, and to set up continuous integration, delivery and deployment pipelines for network functions.

Learn the basic networking concepts: IP addresses, DNS, routing, firewalls and ports; basic utilities like ping, ssh, netstat, nc and ip; load balancing; and TLS encryption.

Understand the basic protocols (standard rules for networking) such as TCP/IP (Transmission Control Protocol/Internet Protocol), HTTP (Hypertext Transfer Protocol), SSL (Secure Sockets Layer), SSH (Secure Shell), FTP (File Transfer Protocol) and DNS (Domain Name System).
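
Two of these building blocks, DNS resolution and TCP connectivity, can be exercised directly from Python's standard library; example.com and port 443 are arbitrary targets chosen for illustration:

    import socket

    HOST, PORT = "example.com", 443   # arbitrary target host and port

    # DNS: resolve a hostname to an IP address.
    ip = socket.gethostbyname(HOST)
    print(f"{HOST} resolves to {ip}")

    # TCP: try to open a connection to the port to confirm reachability.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"TCP port {PORT} on {HOST} is reachable")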

Automation and configuration management tools like Ansible and Jenkins can be used to configure and orchestrate network devices.

3.5 What is a CI/CD pipeline and how to set it up

In DevOps we often talk about the CI/CD pipeline; let us understand what it is.

Continuous Integration (CI) is a development practice wherein developers merge or integrate their code changes into a commonly shared repository very frequently.

From a VCS (preferably Git’s) point of view: every minor code change made on various branches (by different contributors) is pushed and integrated into the main release branch several times a day, rather than waiting for the complete feature to be developed.

Every code check-in is then verified by an automated build and automated test cases. This approach helps to detect and fix bugs early, resolve conflicts as they arise, improve software quality and shorten the validation and feedback loop, thus increasing overall product quality and speeding up releases.

Continuous Delivery (CD) is a software practice where every code check-in is automatically built, tested and made ready for release (delivery) to production. Every code check-in should be release/deployment ready.

The CD phase delivers the code to a production-like environment, such as dev, UAT or preprod, and runs automated tests.

Once continuous delivery has been implemented successfully in the production-like environment, the code is ready to be deployed to the main production server.

It is best to learn the DevOps lifecycle of continuous development, continuous build, continuous testing, continuous integration, continuous deployment and continuous monitoring throughout the complete product lifecycle.

Based on your DevOps process, use the right tools to facilitate the CI/CD pipeline.
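
To make the idea concrete, here is a toy pipeline driver in Python: three stages run in order and the pipeline stops at the first failure, just as a real CI server would. The stage commands are placeholders (they assume a src directory, pytest and Unix tar); real pipelines delegate this to tools like Jenkins or GitLab CI:

    import subprocess
    import sys

    # Hypothetical stages; each one is just a shell command here.
    STAGES = [
        ("build",   ["python", "-m", "compileall", "src"]),
        ("test",    ["python", "-m", "pytest", "-q"]),
        ("package", ["tar", "czf", "app.tar.gz", "src"]),
    ]

    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"stage '{name}' failed; stopping the pipeline")

    print("pipeline finished: artifact ready for delivery")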

3.6 Learn Infrastructure as code

Infrastructure as Code (IaC) means defining (or declaring) and managing infrastructure resources programmatically, by writing code as configuration files, instead of managing each resource individually.

These infrastructure resources (hardware and software) may be set up on a physical server, a virtual machine or the cloud.

IaC defines the desired state of the machine and generates the same environment every time it is applied.

What does IaC do?

  1. Automation: Spinning up or scaling down many resources becomes easier, as only a configuration file needs to be run. This reduces overhead and the time spent.
  2. Versioning: IaC is a text file that can be version-controlled, which means three things:

    • Infrastructure changes such as scaling up/down the resources and or changing/updating the resources (filesystem or user management) can be tracked through the versioned history
    • Configuration files are easily shareable and portable and are checked-in as source code
    • An IaC text file can easily be scheduled to be run in a CI/CD pipeline for Server management and orchestration.
  3. Manual errors eliminated, productivity increased:

    • Each environment is an exact replica of production.

How to do it?

Use tools like Puppet, Ansible, Chef and Terraform.

These tools aim at providing a stable environment for both development and operations tasks that results in smooth orchestration.

A. Puppet: Puppet is a Configuration Management Tool (CMT) to build, configure and manage infrastructure on physical or virtual machines

B. Ansible: A configuration management, deployment and orchestration tool

C. Chef: A configuration management tool written in Ruby and Erlang to deploy, manage, update and repair servers and applications in any environment

D. Terraform: An automation tool to build, change, version and improve infrastructure and servers safely and efficiently.

How will IaC be applied in DevOps?

IaC configuration files are used to build CI/CD pipelines.

IaC definitions enable DevOps teams to test applications/software in production-like stable environments quickly and effortlessly.

These IaC environments are repeatable and prevent runtime issues caused by misconfiguration or missing dependencies.
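
The declarative, idempotent flavour of IaC can be illustrated with a toy example; this is not a real IaC tool, just the core idea in a few lines of Python (the directory paths are hypothetical): describe the desired state, converge toward it, and make re-runs harmless:

    import os

    # Desired state: these directories must exist.
    desired_state = {"dirs": ["/tmp/app", "/tmp/app/logs"]}

    def apply(state: dict) -> None:
        for d in state["dirs"]:
            if not os.path.isdir(d):
                os.makedirs(d)
                print("created  ", d)
            else:
                print("unchanged", d)   # idempotent: nothing to do

    apply(desired_state)
    apply(desired_state)   # second run is a no-op, like re-applying an IaC file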

3.7 Learn some Continuous Integration and Delivery (CI/CD) tools

In order to continuously develop, integrate, build, test, apply feedback and deliver product features to the production environment, or deploy to the customer site, we have to build an automated sequence of jobs (processes) executed with the appropriate tools.

CI/CD pipeline requires custom code and working with multiple software packages simultaneously. 

As a DevOps Engineer, here are some widely used tools you must know:

a. Jenkins is an open-source automation server. Using Jenkins plugins, CI/CD pipelines are built to automatically build, test and deploy source code.

Jenkins is a self-contained Java-based program that is easy to configure, extensible and distributed

b. GitLab CI is a single tool for the complete DevOps cycle. Every code check-in triggers builds, runs tests and deploys code in a virtual machine, a Docker container or another server. It has an excellent GUI. GitLab CI also has features for monitoring and security

c. CircleCI is used to build, test, deploy and automate the development cycle. It is a secure and scalable tool with broad multi-platform support, covering iOS and macOS (via macOS virtual machines) along with Android and Linux environments

d. Microsoft VSTS (Visual Studio Team Services, now Azure DevOps) is not only a CI/CD service but also provides unlimited cloud-hosted private code repositories

e. CodeShip empowers your DevOps CI/CD pipelines with easy, secure, fast and reliable builds and native Docker support. It provides a GUI to easily configure builds

f. Bamboo by Atlassian is a continuous integration, deployment and delivery server. Bamboo has built-in Jira and Bitbucket integration, as well as built-in Git branching and workflows.

Jenkins is the most popular and widely used of these tools, with numerous flexible plugins that integrate with almost any CI/CD toolchain. Its ability to automate virtually any project really distinguishes it from the others, so as a DevOps practitioner you are highly recommended to get a good grip on it.
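
Jenkins also exposes a remote REST API, so builds can be triggered from scripts. A sketch using only the standard library; the server URL, job name, user and API token below are placeholders, and depending on Jenkins' security settings a CSRF crumb may also be required:

    import base64
    import urllib.request

    JENKINS = "http://localhost:8080"                            # placeholder server
    JOB, USER, TOKEN = "my-pipeline", "admin", "my-api-token"    # placeholders

    # POST to /job/<name>/build queues a new build of that job.
    req = urllib.request.Request(f"{JENKINS}/job/{JOB}/build", method="POST")
    auth = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")

    with urllib.request.urlopen(req) as resp:
        print("build queued, HTTP", resp.status)   # Jenkins replies 201 Created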


3.8 Know the tools to monitor software and infrastructure


Upon setting up the continuous integration and continuous delivery (CI/CD) pipeline, it is crucial to continuously monitor the software and infrastructure to understand how well your DevOps setup is performing. It is also vital to monitor system events and get alerts in real time.

A hiccup in the pipeline, such as an application dependency failure, a linking error or database downtime, must be immediately noticed and taken care of.

This is where a DevOps Engineer must be familiar with monitoring tools such as:

1. Nagios: An open-source application that monitors systems, networks and infrastructure (servers) and generates logs and alerts

2. Prometheus: An open-source, real-time, metrics-based event monitoring and alerting system.
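
Under the hood, such tools repeatedly probe targets and raise alerts on failure. A hand-rolled miniature of that loop in Python (the endpoint and interval are hypothetical; real deployments should use Nagios or Prometheus instead):

    import time
    import urllib.request

    TARGET = "http://localhost:8080/health"   # hypothetical endpoint to watch
    INTERVAL = 30                             # seconds between checks

    def probe(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    while True:
        if probe(TARGET):
            print("UP")
        else:
            print("DOWN -> here a real system would alert the on-call engineer")
        time.sleep(INTERVAL)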

3.9 Learn about Cloud Providers

As computational needs increase, so does the demand for infrastructure resources. Cloud computing is a higher level of virtualization, wherein the computing resources are outsourced to a “cloud” and made available on a pay-as-you-go basis over the internet. Leading cloud providers such as AWS, Google Cloud and Microsoft Azure, to name a few, provide varied cloud services like IaaS, PaaS and SaaS.

Being part of a DevOps practice, you will often need various cloud services: infrastructure resources, a production-like environment on demand for testing your product without having to provision it, multiple replicas of the production environment, a failover cluster, database backup and recovery over the cloud, and so on.

Some of the cloud providers and what they offer are listed below:

A. AWS (Amazon Web Services): Provides tooling and infrastructure resources readily available for DevOps programs, customized to your requirements. You can easily build and deliver products and automate your CI/CD process without having to worry about provisioning and configuring the environment

B.  Microsoft Azure: Create a reliable CI/CD pipeline, practice Infrastructure as Code and continuous monitoring through Microsoft-managed data centres

C. Google Cloud Platform: Uses Google-managed data centres to provide DevOps features like end-to-end CI/CD automation, Infrastructure as Code, configuration management, security management, and serverless computing.

AWS is the most versatile and widely recommended provider, and a good one to start learning.
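
Most providers expose their services through SDKs, which is what makes cloud resources scriptable. For instance, with AWS's Python SDK (boto3, a third-party package; this sketch assumes credentials and a default region are already configured), listing EC2 instances takes a few lines:

    import boto3   # AWS SDK for Python: pip install boto3

    # List EC2 instance IDs and their state in the default region.
    ec2 = boto3.client("ec2")
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])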

4. What next after becoming a DevOps expert?

“The sky is the only limit for a DevOps person!”

Mastering the DevOps tools and practices opens up the door to new roles and challenges for you to learn and grow.

4.1 DevOps Evangelist

A technical evangelist is a powerful and influential role that demands a strong thought process.

A DevOps evangelist is a DevOps leader who identifies and implements DevOps practices to solve a business problem or improve a process, and then shares and promotes the benefits that come from the DevOps practice.

The evangelist also identifies the key roles, trains the team accordingly, and is responsible for the success of the entire DevOps process and its people.

4.2 Code Release Manager

A Code Release Manager measures the overall progress of the project in terms of metrics and is well versed in the entire Agile methodology. A Release Manager is most involved in coordinating all the phases of the DevOps flow to support continuous delivery.

4.3 Automation Architect

The key responsibility is to plan, analyze, and design a strategy to automate all manual tasks with the right tools and implement the processes for continuous deployment.

4.4 Experience Assurance

An Experience Assurance person is responsible for the user experience and makes sure that the product being delivered meets the original business specifications.

This role is also termed Quality Assurance, but with the extended responsibility of user-experience testing; it plays a critical part in the DevOps cycle.

4.5 Software Developer/Tester

Under DevOps, the role and responsibilities of a Software Developer expand considerably: developers are no longer responsible only for writing code, but take ownership of unit testing, deployment and monitoring as well.

A Developer/Tester has to make sure that the code meets the original business requirement. Hence, as this expansion of the role continues, a Developer/Tester may also be referred to as DevTestOps.

4.6 Security Engineer

A Security Engineer focuses on the integrity of data by incorporating security into the product from the start, not at the end.

He/she supports project teams in using security tools in the CI/CD pipeline and helps resolve identified security flaws.

Conclusion

“If you define the problem correctly, you almost have the solution.”  - Steve Jobs

In a nutshell, if you aspire to become a DevOps professional, you ought to know:

  • Programming language (C, Java, Perl, Python, Ruby, Bash shell, PowerShell)
  • Operating System concepts (resource management)
  • Source Control (like Git, Bitbucket, SVN, VSTS, etc.)
  • Continuous Integration and Continuous Delivery (Jenkins, GitLab CI, CircleCI)
  • Infrastructure as Code (IaC) Automation (tools like Puppet, Chef, Ansible and/or Terraform)
  • Managing servers (application, storage, database, infrastructure, networking, web server, etc.)
  • Networking and security
  • Container Concepts (Docker)
  • Continuous monitoring (Nagios and Prometheus)
  • Cloud (like AWS, Azure, Google Cloud).

The DevOps ways (the Three Ways of DevOps) open the door to opportunities to improve and excel in the process, using the right tools and technologies.

“DevOps channels the entire process right from the idea on a whiteboard until the real product in the customer’s hands through automated pipelines(CI/CD).”

As a DevOps Engineer you must be a motivated team player with a desire to learn and grow, to optimize processes and to find better solutions.

Since DevOps covers a vast area under its umbrella, it is best to focus on your key skills and learn the technologies and tools as needed.

Understand the problem or challenge first, then build a DevOps solution around it.


Divya Bhushan

Content developer/Corporate Trainer

  • Content Developer and Corporate Trainer with a 10-year background in database administration, Linux/Unix scripting, SQL/PL-SQL coding and Git VCS. Newly acquired skills: DevOps and Docker.
  • A skilled and dedicated trainer with comprehensive abilities in the areas of assessment, requirement understanding, design, development, and deployment of courseware via blended environments for the workplace.
  • Excellent communication, demonstration, and interpersonal skills.

Website : https://www.knowledgehut.com/tutorials/git-tutorial

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

Introduction to Docker, Docker Containers & Docker Hub

Docker is a tool that makes creating, deploying, and running applications easier with the use of containers. Now, what are containers? These can be described as something that makes it possible for developers to spruce up an application with all the parts needed for it. These could include libraries, for instance, along with other dependencies. Docker assembles all these and presents them as one package. The container gives the developer the assurance that the application will run on just about any Linux machine, no matter to what extent any of its customized settings in a particular machine could be at variance from those on the machine on which the code is written and tested.Who is Docker for:Docker is aimed to benefit both developers and system administrators. This makes it a part of many DevOps (developers + operations) toolchains. The main benefit that Docker carries for developers is that they can concentrate on their core job of writing the code without having to bog themselves down with which system it will run on.How Docker is useful in the IT industry:The most vital use of the Docker Enterprise container platform is that it offers value to a business by drastically bringing down its cost on infrastructure and maintenance. It can also do the same when it comes to migrating current. Best of all, all these can be done immediately upon installation. In this way, it saves time, as well. The following infographic illustrates how Docker brings down costs and increases productivity in an enterprise:Image sourceDocker container:Next, let us understand what a container in a Docker is. We can think of it as being a standard unit of software that has the purpose of packaging the code and all its dependencies together.It comes with all that an application needs to run, namely settings, code, system tools, runtime, and system libraries.The point of making a Docker container in this fashion is to help the application run in a fast and dependable manner between one computing environment and another. A Docker container image has these characteristics:LightweightStandaloneExecutableIn this sense, the container lies at the heart of a Docker.Docker containers that run on Docker Engine:Let us get down to understanding the Docker containers that power the Docker Engine.Standardization: Docker containers were created according to the industry standard for containers. The aim of doing this is that the containers could be made portable.Lightweight: Since containers share the machine’s OS system kernel; there is no need for an OS per application. What does this do? It increases server efficiencies and brings down the costs of the server as well as those associated with licensing.Security: Security is assured for applications in containers. 
It is a fact that  Docker comes with the industry-best default isolation capabilities.Let us explain a few Docker commands from the architecture shown above:docker run – Used for running a command in a new containerdocker start – For starting one or more stopped containersdocker stop – For stopping one or more running containersdocker build – Used for building an image form in a Docker filedocker pull – For pulling an image or a repository from a registrydocker push – Given for pushing an image or a repository to a registrydocker export – For exporting a container’s filesystem as a tar archivedocker exec – To run a command in a run-time containerdocker search – For searching the Docker Hub for imagesdocker volume- To create and attach to containers to store data.docker network- allows you to attach a container to as many networks as you like. You can also attach an already running container.docker attach – To attach to a running containerdocker commit – For creating a new image from a container’s changesdocker daemon – Having listened for Docker API requests, the Docker daemon (dockerd) manages Docker objects. These include networks, volumes, containers, and images. It also communicates with other daemons when managing Docker services.docker Images – A read-only template, an image has instructions that are used to create a Docker container. Many times, images are based on other images and carry some degree of customization. An image-based on ubuntu can install the Apache web server, your application, and the configuration details that the application needs to run.Understanding Docker Hub RegistryA registry service that is cloud-based; the Docker Hub Registry allows the user to do the following:Link to code repositoriesBuild images and test themStores images that are manually pushedLinks to Docker Cloud to help deploy images to a host.In summary, we can understand the Docker Hub Registry as a tool that offers a centralized resource for discovering a container image, managing distribution and change, facilitating collaboration between the user and team, and automating workflow throughout the development pipeline.Ref URL.Create a docker hub account.Pull a docker imagedocker pull ubuntupull a docker image with old versiondocker pull ubuntu:16.04create a custom tag to docker imagedocker tag ubuntu: latest admin/ubuntu: demologin to your docker hub registry “sh docker logindocker push admin/ubuntu: demotestingRemove all images in docker serverdocker image rm -f Pull your custom image from your docker accountdocker pull admin/ubuntu:demoInstallation Docker on Amazon Web Services (AWS) cloud:Why Amazon Web Services:AWS is a highly preferred cloud service. It enjoys a position of primacy in the global cloud services market due to the following reasons:Market pioneersUnshakeable customer faithCost-effectivenessEase and affordability of building a storage system with no worry of estimating usageSuitability for small businesses, since it is ideal for building a business from bottom to top.Advantages of AWS:Easy of usabilityAgilitySecurityReliabilityServices without capacity limitsCost-effectivenessFlexibility24×7 support.Steps to Install docker on Amazon Linux:We need Amazon web services account.Create AWS account and login to console. Choose Ec2 service from console.Click on Launch instance and choose Amazon Linux Ami Ec2 server free tier Eligible.Choose free tier Eligible Ec2 t2. Micro.Here we need configure instance details like region, subnets, vpc.Add storage. 
By default it will give us 8GB, and we can modify it after launching Ec2.Create security groups and check port 22 is open to allow SSH connection and we can add incoming ports in security groups.Review details of Ec2 instance and click on Launch.Create New key pair or if we have existing key pair, we can use the same; and download and click on Launch instance.Convert Keypair from .PEM file to. PPK using puttygen.  We can Download puttygen and putty from here.Login ec2 instance using putty and Ec2 Public Ip address.Click on SSH in Right panel and click Auth and add PPK key pair for ec2 to login.When we login to ec2 with New key pair we will get security alert. Click on YES and login as “Ec2-user”. If we need to login as root “sudo su – “.Update packages for security purpose using command “sudo yum update -y”.Now we need to install docker on Amazon Linux. Use command “ sudo yum install docker -y”.To check Docker version, we can see output below:Start docker with “sudo service docker start” command.Check Docker status.Now we can download any docker images by using “docker pull command”.Check if the docker container is running with “docker ps” command.To Login into docker container use “docker exec -it –user root container id bash.Check current docker containers and stopped container with “docker ps -a” command.To check downloaded docker images with “docker images” command.Conclusion:A tool with which creating, deploying and running applications is made much easier, a Docker is a set of packages that uses containers. It is of high value to both developers and system administrators, who can look at their core work without having to worry about writing the code, which runs on any system.Docker Enterprise is of immense value to the IT industry, as it brings down the maintenance and infrastructure costs. It can be deployed immediately and can be migrated easily.
Conclusion:

Docker is a tool that makes creating, deploying, and running applications much easier by packaging them into containers. It is of high value to both developers and system administrators, who can focus on their core work without having to worry about whether their code will run on any given system. Docker Enterprise is of immense value to the IT industry, as it brings down maintenance and infrastructure costs; it can be deployed immediately and migrated easily.

Chaos Engineering

The 4th industrial revolution has swept the world. In just under a decade, our lives have become completely dependent on technology. The internet has made the world a smaller place, and day by day we see more industries moving to online platforms. But this technology is still new, and emerging and developed economies alike are still trying to perfect the infrastructure and ecosystem needed to run these businesses online. This uncertainty makes failure more prevalent. We regularly come across headlines like "Customers report difficulty in accessing bank mobile and online banking", "Bank website down, not working", or "Service unavailable", and such unpredictability occurs with regular frequency.

These outages often happen in complex, distributed systems where several things fail at the same time, compounding the problem. Finding the bugs and fixing them can take anywhere from minutes to hours depending on the system architecture, causing not only loss of revenue to the company but also loss of customer trust. A system is built to handle individual failures, but in big chaotic systems, the failure of components or processes may lead to severe outages. The term "Microservice Death Star" refers to an architecture that is poorly designed, with highly interdependent, complex systems that are slow, inflexible, and can blow up and lead to failure.

[Image source]
[Image: Structure of microservices at Amazon. Image source]

In the old world, systems were simpler due to monolithic architecture. It was easy to debug errors and consequently fix them, and code changes were shipped once a quarter or half-yearly. Today, architecture has changed a lot with the migration to the cloud, where innovation and speed of execution have become part of our systems. Systems now change not on the order of weeks and days but on the order of minutes and hours. Cloud-based and microservice architectures have provided us with a lot of advantages, but they come with complexity and chaos that can cause failure. It is an engineer's responsibility to make the system as reliable as it can be. Netflix's way of dealing with its systems has taught us a better approach and has given birth to a new discipline, "Chaos Engineering". Let's discuss it below.

Chaos Engineering and its Need:

As defined by a Netflix engineer: "Chaos engineering is the discipline of experimenting on a software system in production to build confidence in the system's capability to withstand turbulent and unexpected conditions." Reference link.

Chaos engineering is the process of stressing a software system by introducing disruptive events, such as server outages or API throttling. In this process, we introduce failure scenarios and faults to test the system's capability to survive unstable and unexpected conditions. It also helps teams simulate the real-world conditions needed to uncover hidden issues, monitoring blind spots, and performance bottlenecks that are difficult to find in distributed systems. This method is quite effective in preventing downtime or production outages before they occur.

The Need for Chaos Engineering: How does it benefit?

Implementing chaos engineering improves the resilience of a system. By designing and executing chaos engineering experiments, we learn about weaknesses in the system that could lead to outages, which in turn could lose us customers. This helps improve incident response.
It also helps us understand the risks in the system by exposing threats to it.

Principles of Chaos Engineering:

The term "Chaos Engineering" was coined by engineers at Netflix. Chaos engineering experiments are designed around the following four principles:

Define the system's normal behaviour: First, the steady state of the system is defined in terms of measurable outputs that indicate normal behaviour.

Create a hypothesis: As in any experiment, we need a hypothesis to compare against a stable control group. We hypothesize that the steady state will continue in both the control group and the experimental group, and then introduce the action we expect will disturb it.

Apply real-world events: Design and create experiments that introduce real-world events such as terminating servers, network failures, latency, dependency failure, memory malfunction, and so on.

Observe results: Compare the steady-state metrics with the system's metrics after the disturbance is introduced. For monitoring we can use CloudWatch, Kibana, Splunk, or any other tool that is already part of the system architecture. If there is a difference in results, it can be used to identify future incidents and drive improvements; if there is no difference, it builds a higher degree of trust and confidence in the application among team members.

Difference between Chaos Engineering and Testing:

When we develop an application, we pass it through various tests, including unit tests, integration tests, and system tests. With unit testing, we write a unit test case and check the expected behaviour of a component independently of all external components, whereas integration testing checks the interaction of individual and interdependent components. But even extensive testing does not give us a guaranteed error-free system, because such testing examines only predefined, single scenarios. The results don't surface new information about the application's behaviour, performance, and properties. This uncertainty increases with microservice architectures, where the system grows over time. Chaos engineering, by contrast, generates a wide range of unpredictable outcomes by experimenting on a distributed architecture, to build confidence in the system's capability to withstand turbulent conditions in production.

Chaos testing is the deliberate introduction of failure and faulty scenarios into our system to understand how the system reacts and what the side effects could be. This type of testing is an effective method to prevent or minimize outages before they impact the system and, ultimately, the business.

Chaos Engineering Examples:

There are many chaos experiments we can inject into our systems; the choice depends mainly on our goals and system architecture. Below is a list of the most common chaos tests (a minimal scripted example follows the list):

Simulating the failure of a micro-component or dependency
Simulating a high CPU load or a sudden increase in traffic
Simulating the failure of an entire Availability Zone (AZ) or region
Injecting latency and byzantine failures in services
Exhausting memory on instances (cloud services) and allowing fault injection
Causing host failure
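As an illustration of the first example above, here is a minimal sketch of a Chaos-Monkey-style experiment for a Docker-based environment. It is not one of Netflix's tools, just a toy script: it records a steady-state check, kills one randomly chosen container, and re-checks the endpoint so you can observe how the system copes. The health-check URL and the use of Docker here are assumptions for the example.

#!/usr/bin/env bash
# Toy chaos experiment: kill a random container and watch the system's response.
# Assumes a local Docker host and a health endpoint at http://localhost:8080/health.

HEALTH_URL="http://localhost:8080/health"

# 1. Define steady state: the health endpoint answers with HTTP 200.
echo "Steady state: $(curl -s -o /dev/null -w '%{http_code}' "$HEALTH_URL")"

# 2. Hypothesis: the system keeps answering 200 even if one container dies.

# 3. Apply a real-world event: terminate one randomly chosen running container.
victim=$(docker ps -q | shuf -n 1)
echo "Killing container: $victim"
docker kill "$victim"

# 4. Observe results: compare against the steady state.
sleep 10
echo "After fault:  $(curl -s -o /dev/null -w '%{http_code}' "$HEALTH_URL")"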
List of Tools Developed by Netflix:

The Netflix team has created a suite of tools that support chaos engineering principles, named the Simian Army. The tools constantly test the reliability, security, and resiliency of Netflix's Amazon Web Services infrastructure:

Chaos Monkey: A tool used to test the resilience of the system. It works by disabling one production system and testing how the remaining systems respond to the outage; it is designed to test system stability by enforcing failures and then checking the system's response. The name "Chaos Monkey" is explained in the book Chaos Monkeys by Antonio Garcia Martinez: "Imagine a monkey entering a 'data centre', these 'farms' of servers that host all the critical functions of our online activities. The monkey randomly rips cables, destroys devices, and returns everything that passes by the hand [i.e. flings excrement]. The challenge for IT managers is to design the information system they are responsible for so that it can work despite these monkeys, which no one ever knows when they arrive and what they will destroy." Reference link.

Latency Monkey: Useful for testing the fault tolerance of a service by introducing communication delays to provoke outages in the network (a hand-rolled equivalent is sketched after this list).

Doctor Monkey: Checks health status as well as other health-related signals of the system, such as CPU load, to detect unhealthy instances and eventually fix them.

Conformity Monkey: Finds instances that don't adhere to best practices, judged against a set of rules, and sends an email notification to the instance owner.

Janitor Monkey: Ensures the cloud environment is free of unused resources and clutter, and disposes of any waste.

Security Monkey: An extension of Conformity Monkey. It finds security violations or vulnerabilities, such as improperly configured AWS security groups, and terminates the offending instances.

Chaos Gorilla: Similar to Chaos Monkey, but drops a full Availability Zone during testing.
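You don't need the Simian Army to try the Latency Monkey idea on a test box. Linux traffic control (tc) can inject artificial network delay; the sketch below, which assumes a disposable test machine whose primary interface is eth0, adds delay, leaves it in place while you observe the system, and then removes it. Do not run this on a production host.

#!/usr/bin/env bash
# Latency-Monkey-style experiment using Linux traffic control (tc).
# Assumption: eth0 is the interface under test; run on a disposable machine.

# Inject 300ms of delay (with 50ms jitter) on all outgoing traffic.
sudo tc qdisc add dev eth0 root netem delay 300ms 50ms

# Observe the effect, e.g. on round-trip times to a dependency.
ping -c 5 example.com

# Remove the fault and restore normal behaviour.
sudo tc qdisc del dev eth0 root netem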
Chaos Engineering and DevOps:

When it comes to DevOps and running the SDLC, implementing chaos principles helps us understand the system's behaviour under failure, which in turn helps reduce incidents in production. In scenarios where we need to deploy software quickly, we can apply chaos engineering to distributed, continuously changing, and complex development environments to find unexpected failures.

Advantages:

Insights gained from chaos testing can reduce future production incidents.
The team can verify the system's behaviour on failure and act accordingly.
Chaos engineering tests the team's response to incidents, including whether a raised alert is routed to the correct team.
At a high level, chaos engineering improves overall system availability, and chaos experiments make the system more resilient to failures.
Production outages can cause huge losses depending on how heavily the system is used, so chaos engineering helps prevent large revenue losses.
It improves the confidence and engagement of team members in carrying out disaster recovery, and makes applications highly reliable.

Disadvantages:

Implementing Chaos Monkey for a large-scale system and experimenting on it can increase costs.
Carelessness or incorrect steps in designing and running experiments can impact the application and thereby the customer.
It doesn't provide an interface to track and monitor experiments; it runs through scripts and configuration files.
It doesn't support all kinds of deployment.

Conclusion:

In the present world of the software development lifecycle, chaos engineering has become a powerful tool that helps organizations improve not only the resiliency, flexibility, and velocity of their systems, but also their ability to operate distributed systems. Along with these benefits, it lets us remediate issues before they impact the system, so implementing chaos engineering is worthwhile and should be adopted for better outcomes. In this article, we have shared a brief overview of chaos engineering and shown how it can provide new insights into a system. We hope it has given you valuable insights; this is an extensive field and there is a lot more to learn about it.

How to Become a DevOps Engineer

Who is a DevOps engineer?

DevOps engineers are influential individuals who combine deep knowledge with years of hands-on experience across a wide variety of open-source technologies and tools. They come with core attributes that include the ability to code and script, data management skills, and a strong focus on business outcomes. They are rightly called "Special Forces" who excel at collaboration, open communication, and reaching across functional borders.

A DevOps engineer is comfortable with frequent, incremental code testing and deployment. With a strong grasp of automation tools, these individuals are expected to move the business forward faster while giving it a stronger technology advantage. In a nutshell, a DevOps engineer must have a solid interest in scripting and coding, skill in deployment automation and infrastructure automation, and the capability to handle version control systems.

Qualities of a DevOps Engineer

Collated below are the characteristics and attributes of a DevOps engineer:

Experience in a wide range of open-source tools and techniques
Broad knowledge of sysadmin and ops roles
Expertise in software coding, testing, and deployment
Experience with DevOps automation tools like Ansible, Puppet, and Chef
Experience in continuous integration, delivery, and deployment
Industry-wide experience in implementing DevOps solutions for team collaboration
Firm knowledge of various computer programming languages
Good awareness of the Agile methodology of project management
A forward thinker with the ability to connect technical and business goals

Demand for people with DevOps skills is growing rapidly because businesses get great results from DevOps. Organizations using DevOps practices are overwhelmingly high-functioning: they deploy code up to 30 times more frequently than their competitors, and 50 percent fewer of their deployments fail.

What exactly does a DevOps engineer do?

DevOps is not a way to get developers doing operational tasks so that you can get rid of the operations team, or vice versa. Rather, it is a way of working that encourages the development and operations teams to work together in a highly collaborative way towards the same goal. In a nutshell, DevOps integrates developers and the operations team to improve collaboration and productivity.

The main goal of DevOps is not only to raise the product's quality to a greater extent, but also to improve the collaboration of the Dev and Ops teams so that the workflow within the organization becomes smoother and more efficient. A DevOps engineer has end-to-end responsibility for the application (software), from gathering requirements to development, testing, infrastructure deployment, application deployment, and finally monitoring and gathering feedback from end users, and then implementing changes. These engineers spend much of their time researching new technologies that will improve efficiency and effectiveness, and they implement highly scalable applications and integrate infrastructure builds with application deployment processes.

Let us spend some time understanding the most important roles and responsibilities of DevOps engineers.

1) The first and most critical role of a DevOps engineer is to be an effective communicator, i.e. to have soft skills. A DevOps engineer is required to be a bridge between silos and to bring different teams together to work towards a common goal.
Hence, you can think of DevOps engineers as "IT project managers". They typically work on a DevOps team with other professionals in a similar role, each managing their own piece of the infrastructure puzzle.

2) The second critical role of a DevOps engineer is to be an expert collaborator. Their role requires them to build on the work of their counterparts on the development and IT teams to scale cloud programs, create workflow processes, assign tenants, and more.

3) Thirdly, they can rightly be called "mentors", as they spend much of their time mentoring and educating software developers and architecture teams within an organization on how to create software that is easily scalable. They also collaborate with IT and security teams to ensure quality releases.

4) Next, they need to be "customer-service oriented" individuals. The DevOps engineer is a customer-service-oriented team player who can come from a number of different work and educational backgrounds, but who through experience has developed the right skill set to move into DevOps. The DevOps engineer is an important IT team member because they work with internal customers: QC personnel, software and application developers, project managers, and project stakeholders, usually from within the same organization. Even though they rarely work with external customers or end users, they keep a "customer first" mindset to satisfy the needs of their internal clients.

5) Not to be missed, a DevOps engineer holds broad knowledge of and experience with infrastructure automation tools. A key element of DevOps is automation. A lot of the manual tasks performed by the more traditional system administrator and engineering roles can be automated by using scripting languages like Python, Ruby, Bash, Shell, or Node.js. This ensures consistent performance of manual tasks by removing the human component and allowing teams to spend the saved time on the broader goals of the team and company. Hence, a DevOps engineer must be able to implement automation technologies and tools at any level, from requirements to development to testing and operations.

A few other responsibilities of a DevOps engineer include:

Managing and maintaining infrastructure systems
Maintaining and developing a highly automated services landscape and open-source services
Taking ownership of integral components of the technology and making sure they grow in line with the company's success
Scaling systems and ensuring the availability of services, working with developers on the infrastructure changes required by new features and products

How to become a DevOps engineer?

DevOps is less about doing things a particular way, and more about moving the business forward and giving it a stronger technological advantage. There is no single cookbook or path to becoming a DevOps professional; it is a continuous process of learning and consulting. DevOps adoptions have typically originated from development, testing, and ops teams consulting with each other and running pilots, so it is hard to give a generic playbook for how to implement it. Everyone should start by learning the values, principles, methods, and practices of DevOps, sharing them through any channel, and then keep learning.

Here are my 10 golden tips to become a DevOps engineer:

1. Develop Your Personal Brand with Community Involvement
2. Get Familiar with IaC (Infrastructure-as-Code) - CM
3. Understand DevOps Principles & Frameworks
4. Demonstrate Curiosity & Empathy
5. Get Certified on Container Technologies - Docker | Kubernetes | Cloud
6. Get Expert in Public | Private | Hybrid Cloud Offerings
7. Become an Operations Expert before You Even THINK DevOps
8. Get Hands-on with Various Linux Distros & Tools
9. Arm Yourself with CI-CD, Automation & Monitoring Tools (GitHub, Jenkins, Puppet, Ansible, etc.)
10. Start with Process Re-Engineering and Cross-Collaboration within Your Teams

Skills that a DevOps engineer needs to have

If you're aiming to land a job as a DevOps engineer in 2018, it's not only about having one deep, specialized skill but about understanding how a variety of technologies and skills come together. One of the things that makes DevOps challenging to break into is that you need to be able to write code, and also to work across and integrate different systems and applications. Based on my experience, here is a list of the top 5 skill sets you will need to be a successful DevOps engineer:

#1 - SysAdmin with Virtualization Experience

Deployment is a major requirement in a DevOps role, and ops engineers are good at it. What is needed is knowledge of a deployment automation engine (Chef, Puppet, Ansible) and its use-case implementations. Nowadays most public clouds run multiple flavours of virtualization, so 3-5 years of virtualization experience with VMware, KVM, Xen, or Hyper-V is also required.

#2 - Solution Architect Role

Along with deployment and virtualization experience, a broad understanding and implementation of all the hardware technologies, such as storage and networking, is a must. There is very high demand for people who can design a solution that scales and performs with high availability and uptime while consuming the minimum amount of resources (maximum utilization).

#3 - A Passionate Programmer with API Expertise

Bash, PowerShell, Perl, Ruby, JavaScript, Go, and Python are a few of the popular scripting languages one needs expertise in to become an effective DevOps engineer. A DevOps engineer must be able to write code to automate repeatable processes, and needs to be familiar with RESTful APIs.

#4 - Integration Skill Set around CI-CD Tools

A DevOps engineer should be able to use all of this expertise to integrate open-source tools and techniques to create an environment that is fully automated and integrated. The goal should be zero manual intervention from source code management to deployment, i.e. continuous integration, continuous delivery, and continuous deployment (a minimal pipeline sketch follows the tools list below).

#5 - Bigger Picture & Customer Focus

While the strong focus on coding chops makes software engineering a natural path to a career in DevOps, the challenge for candidates coming from this world is that they need to be able to prove that they can look outside their immediate team and project.
DevOps engineers are responsible for facilitating collaboration and communication between the development and IT teams within an organization, so to succeed in an interview, you'll need to demonstrate your understanding of how disparate parts of the technical organization fit and work together.

In a nutshell, the tools and technologies you need are listed below:

Source control (like Git, Bitbucket, SVN, VSTS, etc.)
Continuous integration (like Jenkins, Bamboo, VSTS)
Infrastructure automation (like Puppet, Chef, Ansible)
Deployment automation and orchestration (like Jenkins, VSTS, Octopus Deploy)
Container concepts (LXD, Docker)
Orchestration (Kubernetes, Mesos, Swarm)
Cloud (like AWS, Azure, Google Cloud, OpenStack)
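To make skill #4 concrete, here is a minimal sketch of the "zero manual intervention" idea as a single shell script of the kind a CI server such as Jenkins might run on every commit. The repository URL, image name, and the run_tests.sh and deploy.sh helpers are hypothetical placeholders; a real pipeline would live in the CI tool's own configuration (e.g. a Jenkinsfile) rather than a bare script.

#!/usr/bin/env bash
set -euo pipefail  # stop the pipeline at the first failing stage

# Stage 1: Continuous integration - fetch the latest source.
git clone https://example.com/acme/webapp.git
cd webapp

# Stage 2: Run the automated test suite; a failure aborts the pipeline.
./run_tests.sh

# Stage 3: Continuous delivery - package the build as a container image.
docker build -t registry.example.com/acme/webapp:latest .
docker push registry.example.com/acme/webapp:latest

# Stage 4: Continuous deployment - roll the new image out (placeholder).
./deploy.sh registry.example.com/acme/webapp:latest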
What are the DevOps certifications available in the market? Are they really useful?

DevOps professionals are in huge demand: demand in the IT marketplace has increased exponentially over the years. A certification in DevOps is a win-win scenario, with both the individual professional and the organization as a whole standing to gain. Completing a certification will not only add value to one's profile as an IT specialist but also advance career prospects faster than would usually be possible.

DevOps-related certifications fall into three categories:

1) Foundation
2) Certified Agile Process Owner
3) Certified Agile Service Manager

The introductory DevOps certification is Foundation; certified individuals are able to execute the concepts and best practices of DevOps and enhance workflow and communication in the enterprise. These DevOps certifications hold numerous benefits, in the following ways:

1. Better Job Opportunities

DevOps is a relatively new idea in the IT domain, with more businesses looking at employing DevOps processes and practices. There is a major gap between the demand for DevOps-certified professionals and their availability. IT professionals can take advantage of this deficit by taking up a DevOps certification to validate their skill set, which ensures much better job options.

2. Improved Skills & Knowledge

The core concept of DevOps revolves around new decision-making methods and thought processes, and it comes with a host of technical and business benefits that, once learned, can be implemented in an enterprise. The fundamentals of DevOps involve professionals working in cross-functional teams made up of multi-disciplinary members, ranging from business analysts and QA professionals to operations engineers and developers.

3. Handsome Salary

The rapid penetration of DevOps best practices into organizations, and their implementation, is driving massive hikes in the pay of DevOps professionals. Industry experts the world over see this trend as consistent and sustainable, and DevOps professionals are among the highest paid in the IT industry.

4. Increased Productivity & Effectiveness

Conventional IT workplaces see employees and staff affected by downtime, which can be attributed to waiting for other employees or staff and to software-related issues. The main objective of IT professionals is to be productive for the larger part of the time they spend at the workplace. This can be achieved by minimizing the time spent waiting for other employees or software products and eliminating the unproductive and unsatisfying parts of the work process. This boosts the effectiveness of the work done and adds greatly to the value of the enterprise and the staff as well.

If you are looking for the "official" certification programs for DevOps, below are some useful links:

1) AWS Certified DevOps Engineer - Professional
2) Azure certifications | Microsoft
3) Google Cloud Certifications
4) Chef Certification
5) Red Hat Certificate of Expertise in Ansible Automation
6) Certification - SaltStack
7) Puppet certification
8) Jenkins Certification
9) NGINX University
10) Docker - Certification
11) Kubernetes Certified Administrator
12) Kubernetes Certified Application Developer
13) Splunk | Education Programs
14) Certifications | AppDynamics
15) New Relic University Certification Center
16) Elasticsearch Certification Programme
17) SAFe DevOps course

DevOps engineer exams

Below are the details of popular DevOps engineer exams and certifications:

AWS Certified DevOps Engineer
Syllabus: AWS_certified_devops_engineer_professional_blueprint.pdf
Training duration: 3 months
Minimal attempts: no minimal requirement
Exam re-take: waiting period of 14 days before candidates are eligible to retake the exam; no limit on exam attempts until the test taker has passed

RHCA certification with a DevOps concentration
Syllabus: RED HAT CERTIFIED ARCHITECT: DEVOPS
Training duration: 3 days for each training course
Component certificates: Red Hat Certificate of Expertise in Platform-as-a-Service; Red Hat Certificate of Expertise in Atomic Host Container Administration; Red Hat Certificate of Expertise in Containerized Application Development; Red Hat Certificate of Expertise in Ansible Automation; Red Hat Certificate of Expertise in Configuration Management
Exam re-take: waiting period of 1 week

Docker Certified Associate Exam
Syllabus: DCA Exam
Minimal attempts: no minimal attempts
Exam re-take: wait 14 days from the day you fail to take the exam again

Certified Kubernetes Administrator Exam
Syllabus: CKA Exam
Training duration: 4-5 weeks
Minimal attempts: no minimal attempts
Exam re-take: wait 14 days from the day you fail to take the exam again

Chef Certification Exam
Syllabus: Chef Cert Exam
Exam duration: 8 hours
Minimal attempts: Link
Exam re-take: minimal 1 week wait