

Pagination can be implemented in MongoDB with the skip() and limit() cursor methods:

db.<collection>.find().skip(n).limit(n)

Note: n is the page size. limit(n) restricts the number of documents returned from the cursor to n, and skip(n) skips the first n documents from the cursor; for the first page, skip(n) is not needed.
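As a sketch, fetching an arbitrary page p (0-based) of size n then becomes skip(p * n) followed by limit(n). Assuming a hypothetical products collection with a page size of 10:

// Page numbering starts at 0, so page 2 is the third page (illustrative values)
var pageSize = 10;
var page = 2;
db.products.find().sort({ _id: 1 }).skip(page * pageSize).limit(pageSize)

Sorting on a stable key such as _id keeps page boundaries consistent between requests.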
This can be achieved in MongoDB using the $type operator. A null value, i.e., BSON type null, has the type number 10. Using this type number, we can retrieve only those documents whose value is null.

Take the example of the following two documents in a startup collection:

{ _id: 1, name: "XYZ Tech", website: null }
{ _id: 2, name: "ABC Pvt Ltd" }

The query { website: { $type: 10 } } will retrieve only those documents where website is null; in the above case, that is the startup "XYZ Tech".

Note: The query { website: null }, on the other hand, matches documents where website is null as well as documents where the website field does not exist. For the above collection data, this query will return both startups.
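A minimal shell sketch contrasting the two queries against the startup collection above:

db.startup.find( { website: { $type: 10 } } )   // matches only XYZ Tech
db.startup.find( { website: null } )            // matches XYZ Tech and ABC Pvt Ltd

From MongoDB 3.2 the type alias "null" can be used in place of the number 10, i.e. { website: { $type: "null" } }.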
The $exists operator matches only those documents that contain the field specified in the query.

For the following documents in an employee collection:

{ _id: 1, name: "Jonas", linkedInProfile: null }
{ _id: 2, name: "Williams" }

The query { linkedInProfile: { $exists: true } } will return only the employee "Jonas". Note that it matches even though the field's value is null, because the field itself is present in the document.
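To exclude null values as well, $exists can be combined with $ne; a sketch:

db.employee.find( { linkedInProfile: { $exists: true, $ne: null } } )   // matches neither document above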
In MongoDB we have built-in roles as well as custom roles. Built-in roles have pre-defined access associated with them, and we can assign them directly to users or groups. To run mongostat, a user needs the privilege to run the serverStatus command on the server; the built-in role clusterMonitor includes the required access.
Custom roles, or user-defined roles, are the ones where we have to manually define access actions on a particular resource. MongoDB provides the db.createRole() method for creating user-defined roles. These roles are created in a specific database, as MongoDB uses the combination of database and role name to uniquely identify a role.
We will create a custom role mongostatRole that provides only the privileges to run mongostat.
First, we connect to mongod or mongos and authenticate against the admin database as a user that has privileges to create roles in the admin as well as other databases.
mongo --port 27017 -u admin -p 'abc***' --authenticationDatabase 'admin'
Now we will create the desired custom role in the admin database.
use admin
db.createRole(
  {
    role: "mongostatRole",
    privileges: [
      { resource: { cluster: true }, actions: [ "serverStatus" ] }
    ],
    roles: []
  }
)

This role can now be assigned to members of the monitoring team.
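As a sketch of how the role would be used (user name and password are illustrative):

use admin
db.createUser(
  {
    user: "monitorUser",
    pwd: "changeMe",
    roles: [ { role: "mongostatRole", db: "admin" } ]
  }
)

mongostat can then be run by this user:

mongostat -u monitorUser -p 'changeMe' --authenticationDatabase admin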
In MongoDB, data is stored as JSON documents. These documents can have different sets of fields, with a different data type for each field. For example, a single collection can hold documents containing a number, a string, and an array:

{ "a" : 143 }
{ "name" : "john" }
{ "x" : [1,2,3] }

It is not correct to say MongoDB is schemaless; in fact, schema plays an important role in the design of MongoDB applications. MongoDB has a dynamic schema, with a database structure of collections and indexes. Collections can be created either implicitly or explicitly.

Due to the dynamic behaviour of the schema, MongoDB has several advantages over RDBMS systems.

Schema migrations become very easy. In a traditional system, adding a column requires an ALTER TABLE command, possibly followed by a reorg, which can result in downtime. In MongoDB, such adjustments are transparent and automatic: if we want to add a CITY field to a people collection, we simply add the attribute and resave the document.
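A sketch of such a migration-free change, assuming a hypothetical people collection:

// Existing document has no city field
db.people.insertOne({ name: "Ann" })
// Adding the field requires no DDL; the document is simply updated in place
db.people.updateOne({ name: "Ann" }, { $set: { city: "Seattle" } })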
The first part of the query matches all documents where y >= 10, which gives us 2 documents:

d> { "_id" : 4, "x" : 4, "y" : 10 }
e> { "_id" : 5, "x" : 5, "y" : 75 }

The second part of the query would set y to 75 for both of these documents, but document e already has y: 75, so it is not modified.
Finally, only one document is actually updated by the provided query:

d> { "_id" : 4, "x" : 4, "y" : 10 }

Every operation on the primary is logged in the operation log, known as the oplog. These oplogs are replicated to the secondary members. For a healthy replica set, it is recommended that all members stay in sync with no replication lag. Data is first written on the primary by the application and then replicated to the secondaries. This synchronization is important to maintain up-to-date copies of the data on all members. Synchronization happens in two ways: initial sync and continuous replication.
The oplog is an operation log that records all operations that modify the data stored in the databases. We can set the oplog size when starting MongoDB with the --oplogSize option. If we do not specify this option, MongoDB takes the default, which for the WiredTiger storage engine is 5% of free disk space. While the default value is sufficient for most workloads, in some cases we may need to change the oplog size of the replica set.

The oplog size is changed in a rolling manner: first on all secondaries, and then on the primary member of the replica set. To check the current size and change it:

use local
db.oplog.rs.stats().maxSize
db.adminCommand( { replSetResizeOplog: 1, size: <size-in-MB> } )

MongoDB applies database operations on the primary and then records them in the primary's oplog. The secondary members then copy and apply these operations in an asynchronous process. There is a separate oplog entry for each operation.
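A concrete sketch, assuming a target oplog size of 16000 MB (run on each secondary first, then the primary):

use local
db.oplog.rs.stats().maxSize                                // current maximum size, in bytes
db.adminCommand( { replSetResizeOplog: 1, size: 16000 } )  // new size, in megabytes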
First, let's check how many documents the query would match by changing the delete to a find operation:

db.sample.find( { state : "WA" } )

This returns all the documents where state is "WA":

{"firstName" : "Arthur", "lastName" : "Aaronson", "state" : "WA", "city" : "Seattle", "likes" : [ "dogs", "cats" ] }
{"firstName" : "Beth", "lastName" : "Barnes", "state" : "WA", "city" : "Richland", "likes" : [ "forest", "cats" ] }
{"firstName" : "Dawn", "lastName" : "Davis", "state" : "WA", "city" : "Seattle", "likes" : [ "forest", "mountains" ] }

Ideally a delete would remove all matching documents, but the query says deleteOne. If it had said deleteMany, all three matching documents would have been deleted, producing 3 oplog entries. deleteOne, however, removes only the first matching document, so the provided query generates exactly 1 oplog entry.
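Side by side, as a sketch:

db.sample.deleteOne( { state : "WA" } )    // removes the first match only → 1 oplog entry
db.sample.deleteMany( { state : "WA" } )   // removes all 3 matches → 3 oplog entries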
Consider the following compound index:

{ "accountHolder": 1, "accountNumber": 1, "currency": 1 }

The index prefixes are:

{ accountHolder: 1 }
{ accountHolder: 1, accountNumber: 1 }

The query planner can use this index if the query filters on the following fields:

accountHolder
accountHolder and accountNumber
accountHolder, accountNumber, and currency
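For illustration, assuming an accounts collection with this index:

db.accounts.find( { accountHolder: "Ann" } )                         // can use the index
db.accounts.find( { accountHolder: "Ann", accountNumber: 12345 } )   // can use the index
db.accounts.find( { accountNumber: 12345 } )                         // cannot; accountNumber alone is not an index prefix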
The $addToSet operator should be used with the $each modifier for this. The $each modifier allows $addToSet to add multiple values to an array field.

For example, suppose startups are tagged with the technology skills they excel in:

{ _id: 5, name: "XYZ Technology", skills: [ "Big Data", "AI", "Cloud" ] }

Now the startup needs to be updated with additional skills:

db.startups.update( { _id: 5 },
  { $addToSet: { skills: { $each: [ "Machine Learning", "RPA" ] } } }
)

The resultant document after update():

{ _id: 5, name: "XYZ Technology", skills: [ "Big Data", "AI", "Cloud", "Machine Learning", "RPA" ] }

Note: $addToSet does not guarantee any particular ordering of elements in the modified set, and duplicate items will not be added.
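The duplicate-skipping behaviour can be seen with a value that already exists; a sketch:

db.startups.update( { _id: 5 },
  { $addToSet: { skills: { $each: [ "AI", "IoT" ] } } }
)
// "AI" is already present and is not added again; only "IoT" is appended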
When "fast reads" are the single most important criteria, Embedded documents can be the best way to model one-to-one and one-to-many relationships.
Consider the example of certifications awarded to an employee, in the below example the certification data is embedded in the employee document which is a denormalized way of storing data
{
  _id: "10",
  name: "Sarah Jones",
  certifications: [
    { certification: "Certified Project Management Professional",
      certifying_auth: "PMI",
      date: "06/06/2015"
    },
    { certification: "Oracle Certified Professional",
      certifying_auth: "Oracle Corporation",
      date: "10/10/2017"
    }
  ]
}

In a normalized form, there would instead be a reference to the employee document from each certification document, for example:

{ employee_id: "10",
  certification: "Certified Project Management Professional",
  certifying_auth: "PMI",
  date: "06/06/2015"
}

Embedded documents are best used when the entire relationship data needs to be retrieved together frequently. The data can then be fetched with a single query, which is much faster.

Note: Embedded documents should not grow unbounded, otherwise they can slow down both read and write operations. Other factors, like consistency and how frequently the data changes, should be considered before making the final design decision for the application.
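As a sketch of the read-path difference (collection names assumed):

// Embedded design: a single round trip returns the employee with all certifications
db.employees.findOne( { _id: "10" } )

// Normalized design: a second query (or a $lookup stage) is required
db.certifications.find( { employee_id: "10" } )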
MongoDB has the db.collection.explain() and cursor.explain() methods, as well as the explain command, to provide information on the query plan. The results of explain contain a lot of information, the key parts being queryPlanner (the winning plan and any rejected plans) and, in the "executionStats" verbosity modes, executionStats (documents returned, keys and documents examined, and execution time).
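A usage sketch against a hypothetical accounts collection:

db.accounts.find( { accountHolder: "Ann" } ).explain("executionStats")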
Recursive queries can be performed within a collection using $graphLookup, which is an aggregation pipeline stage.

If a collection has a self-referencing field, like the classic example of a manager field on an employee, then a query to get the entire reporting structure under the manager "David" would look like this:
db.employees.aggregate( [
  {
    $graphLookup: {
      from: "employees",
      startWith: "David",
      connectFromField: "name",
      connectToField: "manager",
      as: "Reporting Structure"
    }
  }
] )

(Note the direction of the traversal: startWith is matched against connectToField, so to walk down the hierarchy we connect from name to manager.)

For the following documents in the employee collection:

{ "_id" : 4, "name" : "David", "manager" : "Sarah" }
{ "_id" : 5, "name" : "John", "manager" : "David" }
{ "_id" : 6, "name" : "Richard", "manager" : "John" }
{ "_id" : 7, "name" : "Stacy", "manager" : "Richard" }

The above $graphLookup operation would populate the "Reporting Structure" array with the following 3 documents:

{ "_id" : 5, "name" : "John", "manager" : "David", … }
{ "_id" : 6, "name" : "Richard", "manager" : "John", … }
{ "_id" : 7, "name" : "Stacy", "manager" : "Richard", … }

The hierarchy starts with "David", which is specified in startWith, and from there the data for each member in that reporting hierarchy is fetched recursively.