
DevOps
4.7 Rating · 48 Questions · 25 mins read · 5 Readers

A DevOps engineer is an information technology professional who works with developers and the IT operations team to ensure a stable production environment, smooth code releases, application availability, and software implementation, development, and maintenance around the clock.
To succeed in the role, a DevOps engineer must have a deep understanding of both development (including the fundamentals of whatever programming language is in use) and operational processes, which cover administering the organisation's network and the servers that host the application being built. Other responsibilities include creating accounts, troubleshooting, updating permissions, and ensuring that everything is backed up regularly.
Beyond strong technical skills, a DevOps engineer should be a flexible team player, because the role often involves working irregular hours and staying on call to resolve production issues or bugs.
It is also important for a DevOps engineer to have a solid understanding of the SDLC (Software Development Lifecycle) and all the components of the delivery pipeline, so that CI/CD can be automated as much as possible.
SSH stands for Secure Shell, an administrative protocol that provides an encrypted connection between two hosts and lets users control remote servers or systems over the Internet from the command line.
SSH runs over TCP (port 22 by default) and provides mechanisms for authenticating the remote user, passing input from the client to the host, and sending the output back to the client, all in encrypted form.
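As a sketch, typical SSH usage from the command line looks like the following; the host and user names are hypothetical placeholders, so the remote invocations are shown as comments and only the offline command is run:

```shell
# Typical usage against a hypothetical remote host:
#   ssh -p 22 user@remote-host 'uptime'     # run one command over the encrypted channel
#   scp report.txt user@remote-host:/tmp/   # copy a file over the same protocol
# Print the locally installed SSH client version (needs no server):
ssh -V
```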
Below are the steps a developer can follow to push a file from their local system to a GitHub repository using the Git CLI:
a. Initialize Git in the project folder. After navigating to the folder we want to push to GitHub, run:
git init
This creates a hidden .git directory in the folder, which Git uses to store the metadata and version history for the project.
b. Stage the files. This command tells Git which files to include in the commit:
git add -A
The -A (or --all) option includes all files.
c. Commit the staged files:
git commit -m 'Add Project'
d. Add a new remote origin. Here "remote" refers to the remote version of the working repository, and "origin" is the default name Git gives to the remote server:
git remote add origin [copied web address]
e. Push to GitHub. This pushes the commit to the remote repository:
git push origin master
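The steps above can be run end to end as a script. In this sketch a local bare repository stands in for the GitHub remote so it works offline; with a real repository, the URL copied from GitHub would be used in step d instead:

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"        # stand-in for the GitHub repo

mkdir "$work/project" && cd "$work/project"
git init -q                                  # a. initialize the repository
git symbolic-ref HEAD refs/heads/master      # pin the branch name for the demo
git config user.email "dev@example.com"      # identity required to commit
git config user.name  "Dev"
echo "hello" > file.txt
git add -A                                   # b. stage all files
git commit -qm 'Add Project'                 # c. commit the staged files
git remote add origin "$work/origin.git"     # d. add the remote
git push -q origin master                    # e. push to the remote
git ls-remote --heads origin                 # shows the pushed master branch
```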
KPI stands for Key Performance Indicator. Many DevOps KPIs matter across the lifecycle; a few of them are given below:
Time to Detection: measures how long it takes to detect failures or issues. The faster issues and bugs are detected, the easier it is to maintain security and keep downtime and user impact to a minimum.
Deployment Frequency: increasing the frequency of deployments leads to agility and faster compliance with the changing needs of users.
Failed Deployment Rate: the proportion of deployments that result in outages or other issues; reducing it is the goal.
Mean Time to Recovery (MTTR): measures the time between a service going down and it being up and running again.
Application Performance: important for catching performance problems before end users experience them and report bugs.
Service Level Agreement (SLA) Compliance: services should have high availability, with uptime as high as 99.999%, since this is one of the most crucial parameters for any organisation.
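As a toy illustration of the MTTR arithmetic: MTTR is total downtime divided by the number of incidents. The outage durations below are made-up figures:

```shell
# Three hypothetical outages lasting 30, 10 and 20 minutes:
total=$((30 + 10 + 20))
incidents=3
mttr=$((total / incidents))
echo "MTTR: $mttr minutes"   # MTTR: 20 minutes
```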
The Forking Workflow differs from the standard Git workflow in that the latter uses a single server-side repository that acts as the 'central' codebase, whereas the Forking Workflow gives every developer their own server-side repository. The Forking Workflow is common in public open-source projects, where it lets anyone contribute without everyone pushing code to a single central repository; contributions are integrated later, and only the project maintainer can push to the official repository.
To handle machines that require different user accounts to log in, we can set inventory variables in the Ansible inventory file, such as ansible_user and ansible_port per host.
For hosts reachable only through a gateway, the proxy can be configured via SSH arguments in the inventory:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q testuser@gateway.example.com"'
If we want to automate password input in a playbook, we can create a password file that stores the password for the encrypted (vault) file, and Ansible will read it when required.
Alternatively, the vault password can come from a separate script; when invoked, the script must print the password to stdout for this to work seamlessly:
ansible-playbook launch.yml --vault-password-file ~/.vault_pass.py
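For example, an inventory where hosts need different usernames and ports might look like this (the hostnames, users, and ports are hypothetical):

```ini
[webservers]
web1.example.com ansible_user=alice   ansible_port=2222
web2.example.com ansible_user=bob     ansible_port=22

[dbservers]
db1.example.com  ansible_user=dbadmin ansible_port=2200
```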
We can copy Jenkins jobs from one server to another by copying the corresponding job directory under $JENKINS_HOME/jobs from the old server to the new one; a job can likewise be renamed by renaming its directory.
These operations can be done even while Jenkins is running.
For changes like these to take effect, click "Reload Configuration from Disk" to force Jenkins to reload the configuration from disk.
Reference: https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins
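The copy is a plain directory copy. The sketch below simulates two JENKINS_HOME directories locally; between real servers this would typically be an rsync or scp of the same path:

```shell
set -e
old=$(mktemp -d); new=$(mktemp -d)        # stand-ins for the two JENKINS_HOMEs
mkdir -p "$old/jobs/my-job"
echo "<project/>" > "$old/jobs/my-job/config.xml"   # minimal job definition

# Copy the whole job directory into the other server's JENKINS_HOME:
mkdir -p "$new/jobs"
cp -r "$old/jobs/my-job" "$new/jobs/"
ls "$new/jobs/my-job"                      # config.xml
```

After the copy, "Reload Configuration from Disk" makes Jenkins pick the job up.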
EBS and EFS are both faster than Amazon S3, due to higher IOPS and lower latency.
EBS can be scaled up or down with a single API call. Since EBS is cheaper than EFS, it is a good fit for database backups and for low-latency interactive applications that require consistent, predictable performance.
We can recover a branch whose changes were already pushed to the central repository but which was accidentally deleted by finding the latest commit of that branch in the reflog and then checking it out as a new branch.
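A minimal local sketch of the recovery, in a scratch repository:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the branch name for the demo
git config user.email "dev@example.com"; git config user.name "Dev"
echo base > f; git add f; git commit -qm "base"

git checkout -qb feature
echo work >> f; git commit -qam "work on feature"

git checkout -q master
git branch -D feature                        # the accidental deletion

# Find the deleted branch's tip in the reflog, then resurrect it as a branch:
sha=$(git reflog | grep "work on feature" | head -1 | cut -d' ' -f1)
git checkout -qb feature-recovered "$sha"
git log -1 --format=%s                       # work on feature
```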
We can use the commands below to check which branches are merged into master:
git branch --merged lists the branches that have been merged into HEAD (i.e., the current branch).
git branch --no-merged lists the branches that have not been merged into the current branch.
git branch --merged master lists the branches merged into master.
Note: By default, this applies only to local branches. The -a flag shows both local and remote branches, and the -r flag shows only remote branches.
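A quick demonstration in a scratch repository:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the branch name for the demo
git config user.email "dev@example.com"; git config user.name "Dev"
echo base > f; git add f; git commit -qm "base"

git branch old-feature                   # same commit as master: already merged
git checkout -qb new-feature
echo extra >> f; git commit -qam "extra" # ahead of master: not merged
git checkout -q master

git branch --merged master               # lists master and old-feature
git branch --no-merged master            # lists new-feature
```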
Sudo stands for "superuser do", where the superuser is the root user of Linux. Prefixing a command with sudo elevates privileges, allowing a user to execute the command as another user, typically root.
To use the sudo command, the user needs to be added to the sudoers file located at /etc/sudoers (edited safely with visudo).
Git reflogs record when the tips of branches and other references were updated in the local repository, and maintain a log history for branches and tags that were either created locally or checked out. Reflogs are useful in various Git commands for specifying the old value of a reference, and they can be used for recovery purposes.
For recovery, the reference logs must exist locally: the branch needs to have been created locally or checked out from the remote repository.
The reflog shows a snapshot of when a branch was created or renamed and the commit details Git has maintained. For example, HEAD@{5} refers to "where HEAD used to be five moves ago", and master@{two.weeks.ago} refers to "where master used to point two weeks ago in this local repository".
git log shows the current HEAD and the ancestry of its parents: it prints the commit HEAD points to, then its parent, then its parent's parent, and so on.
git reflog, on the other hand, doesn't show HEAD's ancestry. It is an ordered list of the commits that HEAD has pointed to: the undo history of our repository.
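The difference is easy to see in a scratch repository: after moving HEAD to an earlier commit, git log shows only that commit's ancestry, while git reflog still records everywhere HEAD has been:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the branch name for the demo
git config user.email "dev@example.com"; git config user.name "Dev"
echo a > f; git add f; git commit -qm "first"
echo b >> f; git commit -qam "second"
git checkout -q HEAD~1        # move HEAD back one commit

git log --oneline             # ancestry of the current HEAD: only "first"
git reflog                    # every place HEAD has pointed: checkout, second, first
```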
Blue-Green Deployment is a type of continuous deployment that uses two identical environments, Blue and Green, both running production versions but configured so that one is live and the other is idle. It works by redirecting traffic between the two environments, which run different versions of the application.
This deployment pattern reduces downtime and the risk that a deployment can introduce. If any error occurs in the new version, we can immediately roll back to the stable version by swapping the environments.
Implementing Blue-Green deployment requires two identical environments, plus a router or load balancer so that traffic can be routed to the desired environment.
One of the two environments (Blue or Green) holds the old version of the application, while the other holds the new version.
Production traffic is moved gradually from the old-version environment to the new-version environment, and once it is fully transferred, the old environment is kept on standby in case a rollback becomes necessary.
We can implement Blue-Green deployment in AWS using the Elastic Beanstalk service and its environment-swap feature, which automates much of the deployment process. Once we upload the application code with a version label to Elastic Beanstalk and provide information about the application, it deploys the application in the Blue environment and gives us its URL. That environment's configuration is then copied and used to launch the new version of the application, i.e. the Green environment, with its own separate URL.
At this point the application is up with two environments, but traffic is routed to only one of them, the Blue environment.
To switch to Green and redirect traffic to it, choose the other environment in the Elastic Beanstalk console and swap it from the Actions menu. Elastic Beanstalk then performs a DNS switch, and once the DNS changes are done, the Blue environment can be terminated. In this way, traffic is redirected to the Green environment.
If a rollback is required, perform the environment swap again.
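The swap itself is just an atomic exchange of which environment the router points at. Here is a toy, non-AWS-specific sketch of that idea (on Elastic Beanstalk, the console swap or the swap-environment-cnames CLI action performs the equivalent DNS swap):

```shell
live="blue"; idle="green"    # blue serves traffic, green holds the new version
echo "traffic -> $live"

# Swap: the router/DNS now points at the other environment.
tmp=$live; live=$idle; idle=$tmp
echo "traffic -> $live"      # traffic -> green

# Rollback is the same swap again:
tmp=$live; live=$idle; idle=$tmp
echo "traffic -> $live"      # traffic -> blue
```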
Beyond Elastic Beanstalk, AWS provides a number of other building blocks that can be used to implement Blue-Green deployment, such as Route 53 weighted DNS records, Elastic Load Balancing, Auto Scaling groups, and AWS CodeDeploy.
Blue-Green deployment provides many benefits to a DevOps team and has proven useful for deploying new application features and shipping bug fixes, but it comes with a trade-off: the project has to bear the cost of running and maintaining two production environments. This cost can be controlled and managed reasonably well if the deployment is planned properly.
Reference: https://www.knowledgehut.com/blog/devops/blue-green-deployment