
How to Drop a MongoDB Database?

MongoDB is one of the most popular NoSQL databases among both enterprises and startups, and its scalability makes it well suited for modern web apps that need to grow as their user base increases. MongoDB differs from traditional relational databases because it stores data as JSON-like documents instead of tables.

In this post, we will learn how to drop a MongoDB database using the MongoDB drop database commands. Dropping a database removes all the collections in it (the equivalent of tables in a relational database) along with their indexes. If you don't want to drop the entire database, you can drop individual collections instead.

Prerequisites

We are using Windows 10 in this tutorial. Make sure you have downloaded and installed the MongoDB Community Server; the setup is straightforward and there are plenty of good guides on the internet to walk you through it. Also make sure you have added the MongoDB bin folder to the environment variables on your PC. We have also created an example database, which contains a collection named products.

Overview

Dropping a database is quite simple. It can be done in three ways from the command line, or with any GUI (Graphical User Interface) tool that connects to the running MongoDB instance. To delete a MongoDB database from the command line, we mainly use db.dropDatabase(), so we will look at that first.

Drop database definition

As mentioned earlier, we drop (delete) a database in MongoDB with the db.dropDatabase() method:

db.dropDatabase(<writeConcern>)

This removes the current database, deleting all collections inside it along with their indexes.

writeConcern is an optional parameter: a document expressing the write concern to use when it is stronger than "majority". A write concern document can include the following fields:

{ w: <value>, j: <boolean>, wtimeout: <number> }

The w option requests acknowledgement that the write operation has propagated to a specified number of mongod instances.
The j option requests acknowledgement that the write operation has been written to the on-disk journal.
The wtimeout option specifies a time limit, to prevent write operations from blocking indefinitely.

With the w option, the following values are available:

w: 1 – Requests acknowledgement that the write operation has propagated to a single mongod instance. This is the default.
w: 0 – Requests no acknowledgement of the write operation.
w: "majority" – Requests acknowledgement that the write operation has propagated to a majority of the primary and secondary servers.

For the j option:

j: true – Requests acknowledgement that the mongod instances specified in w: <value> have written the operation to the on-disk journal.

The wtimeout option specifies the time limit in milliseconds for the write concern. It causes write operations to return with an error after the specified limit.

The dropDatabase command

We can also use the dropDatabase command directly to delete the current database. The command has the following form:

{ dropDatabase: 1, writeConcern: <document>, comment: <any> }

Its optional fields are:

writeConcern – a document expressing the write concern to use when it is stronger than "majority", as described above.
comment – a user-provided comment attached to the command. Once set, it appears in the logs.
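As a rough sketch of how these optional fields could be combined, following the signatures given above (the values are only illustrative, and the comment field is available only on newer MongoDB versions), the two forms would look something like this in the mongo shell:

> db.dropDatabase({ w: "majority", wtimeout: 5000 })

or, using the command form with an attached comment:

> db.runCommand({ dropDatabase: 1, writeConcern: { w: "majority", wtimeout: 5000 }, comment: "dropping the example database" })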
Dropping with the dropDatabase command

We can drop a database with the dropDatabase command. I am connected to my local MongoDB instance from the Windows terminal by running the mongo command.

> mongo

Next, to list all the databases on this MongoDB server, we use the show dbs command. Of these, only the example database was created by me.

> show dbs
admin    0.000GB
config   0.000GB
example  0.000GB
local    0.000GB

Now, to drop the example database, we first have to switch to it with the use example command.

> use example
switched to db example

Then we run the dropDatabase command from within db.runCommand(). runCommand runs the command in the context of the current database, which means the dropDatabase command will run against the current database, as seen in this example:

> db.runCommand( { dropDatabase: 1 } )
{ "dropped" : "example", "ok" : 1 }

As discussed earlier, the dropDatabase command accepts an optional writeConcern argument and a comment field. The full syntax is:

{ dropDatabase: 1, writeConcern: <document>, comment: <any> }

Dropping with the dropDatabase() method

We can also drop a database with the db.dropDatabase() method. It too removes the current database, deleting all associated files.

Once again, connect to MongoDB with the mongo command and list the databases with show dbs. For the db.dropDatabase() method we also need to switch to the database we are dropping, using the use <dbName> command. Then we call db.dropDatabase() to drop the database, which is example in our case.

> db.dropDatabase()
{ "dropped" : "example", "ok" : 1 }

The db.dropDatabase() method also accepts an optional writeConcern argument, which takes the form:

{ w: <value>, j: <boolean>, wtimeout: <number> }

Dropping with the Unix shell and the --eval flag

This method only works in a Unix-like environment such as Ubuntu, macOS or WSL2 on Windows. Here, the mongo command followed by a database name connects directly to that database. For example, to connect to the example database we can use the command below.

$ mongo example

Now we can drop the example database with a one-line command, using the --eval flag followed by the JavaScript code we want MongoDB to execute. As seen earlier, we drop the database with db.dropDatabase(); we wrap the call in the printjson function so that the output of the command is printed.

$ mongo example --eval "printjson(db.dropDatabase())"
MongoDB shell version: 3.0.9
connecting to: example
{ "dropped" : "example", "ok" : 1 }
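If you need to drop a database from application code rather than the shell, the official MongoDB Node.js driver exposes the same operation. Below is a minimal sketch, assuming the driver is installed (npm install mongodb), the server is running locally on the default port, and the database to drop is the example database used in this tutorial:

// drop-example.js - minimal sketch using the MongoDB Node.js driver
const { MongoClient } = require('mongodb');

async function main() {
  // Assumes a local MongoDB instance on the default port 27017
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    // dropDatabase() removes the named database and all of its collections
    const result = await client.db('example').dropDatabase();
    console.log('Dropped:', result); // typically `true` on success with recent driver versions
  } finally {
    await client.close();
  }
}

main().catch(console.error);

You could run this with node drop-example.js; the drop applies to whichever database name is passed to client.db().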
Dropping with MongoDB Compass

We can also delete a MongoDB database very easily using a graphical tool like MongoDB Compass. In fact, this tool was downloaded automatically when I installed MongoDB on my Windows machine.

First, open MongoDB Compass and you will see the option to connect to a database. We want to connect to our local instance, where the hostname is localhost and the port is 27017. These are the default values for every MongoDB connection unless you changed them during installation. Leave everything unchanged and click the Connect button.

We are now connected to the MongoDB instance running on localhost and can see all of the databases, including the example database. Next, hover over the example database, which reveals a delete button, and click it. A pop-up appears in which we have to type the name of the database, which is example in our case. Then click the DROP DATABASE button to remove the database. Compass then takes us back to the earlier screen, where we can see that the example database has been dropped.

Conclusion

In this article, we have learned about the different ways to delete a MongoDB database. We have also learned about the optional write concern parameter, which can be passed to both the db.dropDatabase() method and the dropDatabase command. This parameter is mainly useful in large deployments that span several servers. We hope you found this article useful! Keep learning.

Author: Nabendu Biswas

Nabendu Biswas is a Full Stack JavaScript developer who has been working in the IT industry for the past 16 years, for some of the world's top development firms and investment banks. He is a passionate tech blogger and YouTuber, and loves to teach people JavaScript. He is also an Apress author, with three Gatsby books published.

Posts by Nabendu Biswas


How to Become a Successful Full Stack Web Developer?

Full stack developer roles are among the hottest careers in the tech space now. These talented folks can develop a whole product from scratch. A full stack developer is a combination of Front-end developer and Backend developer. These two in themselves are full time jobs and most people make careers out of one of them. So, we will start with Front-end roadmap and then go to Back-end roadmap. A person interested in becoming a Full-stack developer needs to have proficiency in both the front end and back-end tools, just like I started as a Front-end developer and later on become a Full stack developer by mastering JavaScript backend technologies and databases.The demand for Full Stack Web DeveloperThe demand for Full stack developers is the highest in early-stage startups, where they want to create a Minimum Viable Product at the earliest to showcase to the investors. It is also a nice skill to have in addition to frontend technologies or backend technologies alone, since an employer prefers people with both skills.There are a lot of technologies to learn to be a Full-Stack developer. We will discuss about them in the coming sections.   List of technologies to master to become a Full-Stack developer A full-stack developer is actually a combination of Frontend developer and Backend developer. We need to master both, and both have different Roadmaps. Let’s start with the basics. The frontend is the web-site which we see and it is primarily made with HTML and CSS.  JavaScript was also used earlier but nowadays, it is created with JavaScript frameworks like ReactJS, Angular or Vue. All these frameworks require one to learn the basics of HTML, CSS, & JavaScript. So, we need to learn the basics followed by at least one framework.In the backend we have a lot of technologies and databases also. So, we need to choose one backend framework from Java (Spring Framework), JavaScript (NodeJS) etc and then also learn databases. Databases are divided into two categories, which is NoSQL(MongoDB) and SQL(PostgreSQL, MySQL, Oracle) databases. So, you need to choose one of the databases.We are also required to know about DevOps, which is a practice of harmonizing development and operations whereby the entire pipeline from development, testing, deployment, continuous integration and feedback is automated. The knowledge of either AWS or Azure based cloud ecosystem is required, and also CI/CD like Jenkins and containerizing & orchestrating applications using Docker and Kubernetes.1. Frontend RoadmapLearn the BasicsPlease refer to the attached figure for Front-end roadmap, as we will be referring to this throughout this article. We have to start our journey by learning HTML, CSS and JavaScript which is the base for a web-app or website. HTML has changed a bit over the years, with the introduction of HTML 5 and semantics tags, so make sure to update yourself. JavaScript which was released in 1995, didn’t change much during the next 20 years. But once more and more developers started using it, the ECMA committee decided to add some very nice features and enhance the language, and renamed it ES6 in 2015. After that they regularly added new features to the language and have just released ES2020 in June 2020, which has many additional features. So, learn the basic JavaScript first and then upgrade to ES6 and newer versions. CSS is what makes a website or web-app beautiful, and is often considered the hardest part by a developer. 
Earlier, CSS was very confusing and had a steep learning curve, because of the use of floats to create a layout. Developers usually used to work with CSS frameworks like bootstrap to design a site. But things have changed a lot with the invention of CSS Grid and Flexbox. Some of the best resources to learn the basics are - html.specdeveloper.mozilla.HTMLStyle CSSdeveloper.mozilla.CSSdeveloper.mozilla.JavaScriptGetting Deeper Now, just learning JavaScript and some basic CSS will not make you a good Front-end developer as you have to take a deep dive into JavaScript. We will discuss CSS later, after learning the essentials of JavaScript.JavaScript EssentialsThere are many things associated with JavaScript which we need to learn before moving forward.The Terminal The first thing to learn is to work in a terminal, and master some of the basic commands. If you are on a Mac, it’s already based on Linux and runs most Linux commands. If you are working on Windows then you must install git bash, which will give you a Linux environment to work with. In JavaScript frameworks, we need to run a lot of commands from the terminal, like if we want to install a third-party dependency by npm.  The basics of Linux can be learnt from their official site.1. Linux FoundationVersion ControlNext, learning version control is very important because we should always keep our code in some remote repository like Github. The industry works on Git, which is version control software. It is completely command-based and is used heavily everywhere. Learn the basic commands which will be useful even for an individual developer. Later on, when working with teams, more advanced knowledge of the git command is required.Through the git commands, we store our code in repositories. The most popular ones are Github and Bit Bucket, so we need to learn how to store and link them.The basics of git can be learnt from this awesome tutorial.1. Git TutorialTask Runners   Task runners are applications which are used to automate tasks required in projects. These tasks include minification of JavaScript and CSS files, CSS preprocessing like from SASS to CSS, image optimization and Unit testing. The three popular task runners are npm scripts, gulp and grunt. The npm script is nothing but the package.json file which comes with React projects or is created in a Node.js project using npm init. Gulp and Grunt are much bigger applications and also have a plugin ecosystem that is suited for large JavaScript projects. The basics for these two technologies can be learnt from here. 1. Gulp2. GruntModule Loader and Bundler  Both module loaders and bundlers are required for large JavaScript applications. Knowledge of both is required, if the project you are working is a big Vanilla JavaScript project. When a large JavaScript application consists of hundreds of files, the module loader takes care of the dependency and makes sure all the modules are loaded when the application is executed. Examples are RequireJS and SystemJS.Module bundlers also do the same thing, building it at the time of application build rather than at the runtime. Popular examples are Webpack and Rollup. 1. RequireJS2. Github3. Webpack4. RollupJSTesting  Testing nowadays is very important in any type of project. There are two types of testing; one is known as Unit testing and other as end-to-end testing. For unit testing we write test cases and the most popular tool nowadays is Jest. End-to-end testing is automated testing, which emulates the whole app. 
Suppose, an app has a login screen and then it shows posts. The testing tool will run the web-app to check whether all the functionalities are done correctly. The two most popular options today are Puppeteer and Cypress. The tutorials to refer for these topics are - 1. Jest2. Puppeteer3. CypressLibraries and FrameworkThey are the most important part of the JavaScript ecosystem nowadays. It all started with the release of AngularJS in 2010. Before that period most enterprise apps were made in Java and were desktop apps. But AngularJS changed everything, because it made it easy to manage big projects with JavaScript and helped to create complex web-apps.1. React   It is the most popular JavaScript library today and is used by both enterprises and startups that have a huge ecosystem. It is not a complete framework like Angular and we have to install third party dependencies for most things. But if you want to learn a framework that will get you a job, then that framework would be ReactJS, and its demand is not going away for the next 5 years. The component approach and its easy learning curve have made React more popular than other frameworks. A good starting tutorial for React is1. ReactJSState Management   In React state management can sometimes become complex, when we need to share data between components. We generally take help of external packages in it with the most popular being Redux. But we also have other state management libraries like XState and Recoil. Server-side rendering   With performance becoming important nowadays, Server-Side Rendering speeds up the React projects even faster. In SSR projects, the React code is rendered on the server and the client browser directly receives the HTML, CSS, JS bundle. The only framework to do it is NextJS. Static Site Generators   Lot of sites don’t need to be updated frequently and it is the place where the only Static Site Generator for ReactJS, which is GatsbyJS shines. With the help of GatsbyJS we can create extremely fast static sites and it gets into Wordpress domain a lot with it. GatsbyJS also has a huge ecosystem of plugins, which enhances its functionalities. React Testing   Unit testing is a very important part of ReactJS projects, especially the ones which are very large. Unit testing ensures that we have lower bugs in Production build. The two popular libraries are – Enzyme and Jest. 2. Angular    It is a complete framework and unlike React requires very few external dependencies. Everything is built within Angular and we don’t have to go outside for more features. Since it was among the earliest frameworks, older projects are in Angular and it is still widely used in enterprises. A good tutorial to learn Angular is below. Angular3. Vue    Vue is another very popular JavaScript library, which has the best features of both ReactJS and Angular and has become very popular in recent years. It is widely used in both enterprise and startups. A good tutorial to start with Vue is below. Vue4. NuxtJS   It is used for Server-Side Rendering in Vue projects and is similar to the NextJS framework used in ReactJS for SSR.  5. Svelte    It is the newest of all frameworks/libraries and has become quite popular, but still not used much in enterprises and startups. It is different from React, Vue and Angular and converts the app at build time rather than at run time as in the other three. Good tutorials to start with Svelte are below. 
SvelteSvelte handbookCSS Deep DiveA lot has changed in CSS after it included CSS Grid and Flexbox; it has become much easier for developers to work with. CSS Essentials   It is now mandatory for frontend developers to learn CSS Grid and Flexbox, because through it we can develop beautiful layouts with ease. More companies are moving away from CSS Frameworks and have started working with CSS Grid and Flexbox, which are now supported by all browsers. Good tutorials to learn Flexbox and CSS Grid are below. CSS FlexboxCSS GridCSSPreprocessors  CSS preprocessors are used to add special functionalities in CSS, which it lacks. An example is Sass, which adds special features like variables and nested rules in CSS and is widely used in the industry for larger projects. The other popular one is PostCSS, in which we can use custom plugin and tools in CSS. CSS Frameworks  Frameworks were very popular from the early days of CSS, when it was very complicated because of floats. Bootstrap  This is the most popular and oldest CSS framework; easy to learn and also has a wide variety of elements, templates and interfaces. Bulma   It is another CSS framework, which is very popular and much easier to use than bootstrap. Tailwind CSS   This is a fairly new CSS framework and is quite popular nowadays. It follows a different approach than the other frameworks and contains easier classes. Styled Components (React)   This is a CSS in JS library and is for React only. It is used to create components out of every style and is very popular in the React world.  CI/CDThe Continuous Integration/ Continuous deployment is mainly used by DevOps. But a frontend engineer should know its basics. It is used to build, test and deploy applications automatically.Github Actions    It is a freely available CI/CD pipeline, which directly integrates to your github based project and can be used in a variety of languages. Deployment It is again a task which mainly falls into the domain of Backend engineers and DevOps, but a frontend engineer should know some basic and simple tools. Static Deployment   These products are mainly used to deploy static sites, which consists of HTML, CSS and JavaScript only. Two very popular services are Amazon S3 and Surge.sh Node Application Deployment   The projects containing node code cannot be deployed using static deployment. Even if the project is a simple ReactJS project, it also uses node for processing. These applications require services which run the Node code and deploy it. The three most popular services are Vercel, Firebase and Netlify. 2. Backend Roadmap (Including Storage, Services & Deployment)Understanding the BackendBackend is the part of the website that provides the functionality, allowing people to browse their favorite site, purchase a product and log into their account, for instance. All data related to a user or a product or anything else are generally stored in databases or CMS (Content Management System) and when a user visits any website, they are retrieved from there and shown. One of the responsibilities of a backend engineer involves writing APIs, which actually interact with the database and get the data. They are also involved in writing schemas of database and creating the structure of databases. Backend EssentialsFor a backend engineer, working in a Linux environment is an essential skill. A lot of the configurations are done on the terminal. 
So, he or she should be very good with Linux commands.Also, they should know both commands and the use of any git powered platforms like Github or bitbucket.Languages and FrameworksAll of the popular languages have some framework, which has been used for backend development. These frameworks are generally used to create API endpoints, which are used to fetch or store data in the database. For example, when we scroll articles on Facebook, these articles are fetched from a database and we use the GET method to fetch them. Similarly, when we write an article and hit submit, it uses POST method.Now, different frameworks implement this GET, POST and other APIs also referred to as RESTful APIs in their own way.Java   Java is by far the oldest and the most used language for backend development. It is also used for a variety of other tasks like Android development, but it shines in the backend because of its multithreading abilities. So, enterprise grade web-apps and web-apps with a lot of traffic prefer Java, because it handles loads better. The most popular frameworks for backend development in Java are Spring Framework and Hibernate. Some good beginner's tutorials are - 1. Spring framework2. Hibernate3. JavatpointJavaScript   It is a very popular choice for backend development, because on the frontend side JavaScript is the only choice. So, a lot of frontend engineers can take this choice to become Full-stack developers. Node.js   It allows developers to use JavaScript to write server-side code, through which they can write APIs. Actually, the API part can be done by numerous frameworks of Node.js out of which Express is widely used. The other popular framework is Fastify. Some good beginner's tutorials are - 1. Nodejs2. ExpressJs3. fastifyPython   Python is one of the most popular languages among developers and has been used in a variety of fields. The two most popular frameworks for Python are Flask and Django. Some good beginner tutorials are - 1. Flask2. DjangoC#   It is a very popular programming language which was developed by Microsoft and it has the power of C++. Its popularity increased once the .NET framework was released for backend development. As Microsoft is very popular in enterprises, the .NET framework is generally preferred in enterprises. A good tutorial to learn .NET is - 1. Dotnet2. Dotnet FrameworkGo  Go language which is also referred to as Golang, has gained popularity in recent years. It is used a lot in Backend programming and the two popular frameworks are Gin and Beego. DatabaseFor a Backend engineer, after making APIs with framework based on language, it's time to learn about Databases. Databases are used to store most of the things which we see in a web-app, from user login credentials to user posts and everything else. In the earlier days we only used to have one type of Database and that was Relational databases, which use tables to store data. Now we have two other categories also, one being NoSQL databases and the other In-memory databases. 1. Relational databases   Relational databases allow you to create, update and delete data stored in a table format. This type of database mostly uses SQL language to access the data, hence is also known as an SQL database. MySQL  It is one of the oldest databases and was released in 1995. It is an open-source database and was very popular in the 2000s with the rise of LAMP (Linux, Apache, MySQL, PHP) stack. It is still widely in use, but there are other popular Relational databases. A good tutorial to learn MySQL is - 1. 
MySQLPostgreSQL  PostgreSQL, which is also known as Postgres is also an old open-source Relational database, which was released in 1996. But it gained popularity recently, as it goes very well with modern stacks containing NodeJS and other backend technologies. A good tutorial to learn PostgreSQL is - 1. PostgreSQLOracle is the most popular and oldest relational database. It was released in 1979 and still remains the number one preference for enterprise customers. All the big banks and other organizations, run on Oracle databases. So, the knowledge of Oracle is a must in many companies for an Engineer. A good tutorial to learn Oracle is - 1. OracleMS-SQL  MS-SQL is also known as Microsoft SQL and is yet another commercial Relational database. It has got different editions, used by different audiences. It is also heavily used by enterprise users and powers a whole lot of big systems around the world. A good tutorial to learn MS-SQL is - 1. SQLServer2. NoSQL databases  NoSQL databases are also called non-SQL databases. The NoSQL databases mainly store data as key-value pairs, but some of them also use a SQL-like structure. These databases have become hugely popular in the 21st century, with the rise of large web-apps which have a lot of concurrent users. These databases can take huge loads, even millions of data connections, required by web-apps like Facebook, Amazon and others. Beside this, it is very easy to horizontally scale  a NoSQL database by adding more clusters, which is a problem in Relational Databases. MongoDB  It is the most popular NoSQL database, used by almost every modern app. It is a free to use database, but the hosting is charged if we host on popular cloud services like MongoDB atlas. Its knowledge is a must for backend engineers, who work on the modern stack. MongoDB uses json like documents to store data. A good tutorial to learn MongoDB is - 1. MongodbIt is a proprietary database service provided by Amazon. It is quite similar to MongoDB and uses key-value pairs to store data. It is also a part of the popular AWS services. A good tutorial to learn DynamoDB is-DynamoDBCassandra is an open-source and free to use NoSQL database . It takes a different approach when compared to other NoSQL databases, because we use commands like SQL, which are known as CQL (Cassandra Query Language). A good tutorial to learn Cassandra is - Cassandra3. In-memory databases   The in-memory database is a database, which keeps all of the data in the RAM. This means it is the fastest among all databases.  The most popular and widely used in-memory database is Redis. Redis  Redis (Remote Dictionary Server) is an in-memory database, which stores data in RAM in a json like key-value format. It keeps the data persistent by updating everything in the transaction log, because when systems are shut down their RAM is wiped clean. A good tutorial to learn Redis - RedisStorageStoring the data is an important part of any application. Although this is mainly DevOps territory, every backend developer should know the basics for the same. We need to store the database data and also the backend code. Beside this the frontend code must also be stored somewhere. Nowadays everything is stored in the cloud, which is preferred by individuals, startups and enterprises. 
The two most popular cloud-based storages are – Amazon S3 Azure Blob Storage Good beginner's tutorials for both areServices and APIsThese are theoretical concepts and are implemented by various services, but a backend engineer should know them and how to use them. Restful APIs  This is by far the most popular way to get data from a database. It was made more popular, with the rise of web-apps. We do GET, PUT, POST and DELETE operations to read, update, create or delete data from databases. We have earlier discussed different languages and frameworks, which have their own implementations for these operations. Microservices Architecture  In microservice architecture, we divide a large and complex project into small, independent services. Each of these is responsible for a specific task and communicates with other services through simple APIs. Each service is built by a small team from the beginning, and separated by boundaries which make it easier to scale up the development effort if needed. GraphQL  It is the hottest new kid in the block, which is an alternative to the Restful APIs. The problem with Restful APIs is that if you want some data stored in database, you need to get the whole data sent by the endpoint. On the other hand, with GraphQL, you get a query type language which can return only the part of the data which you require.  DevOps & DeploymentA backend engineer requires a fair bit of DevOps knowledge. So, we will next deep dive into the methodologies in DevOps. 1. Containerization & Orchestration   Containers are a method of building, packaging and deploying software. They are similar to but not the same thing as virtual machines (VMs). One of the primary differences is that containers are isolated or abstracted away from the underlying operating system and infrastructure that they run on. In the simplest terms, a container includes both an application’s code and everything that code needs to run properly. Container orchestration is the automatic process of managing the work of individual containers for applications based on microservice architecture. The popular Containerization and Orchestration tools are – Kubernetes Docker Good beginner's tutorials for both are -Kubernetes2. DevOps   DevOps is a set of practices that combine software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. The two most popular DevOps services are AWS and Azure. Both of them are cloud based and are market leaders. Both of these platforms contain a wide variety of similar services. AWS  It consists of over 200 products and services for storage, database, analytics, deployment, serverless function and many more. AWS is the market leader as of now with 33% of market share. The AWS certifications are also one of the most in-demand certifications and a must for frontend engineers as well as Backend engineers. Azure  Microsoft Azure is second in terms of market share of cloud-based platforms, with 18% of the market. It also consists of SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) like AWS. 3. PaaS (Platform as a Service)   There are several smaller players, which provide Platform as a Service and are much easier to use than services like AWS and Azure. With these services you can directly deploy your React or other web-apps, by just hosting them on GitHub and pushing the code. 
These services are preferred a lot by freelancers, hobbyists and small companies as they don’t require investment in learning complicated services like AWS and Azure. The three most popular PaaS services are Digital Ocean Heroku Netlify 4. Serverless  Serverless computing is an execution model where the cloud provider (AWS, Azure, or Google Cloud) is responsible for executing a piece of code by dynamically allocating resources and only charging for the number of resources used to run the code. The code is typically run inside stateless containers that can be triggered by a variety of events including http requests, database events, queuing services, monitoring alerts, file uploads, scheduled events (cron jobs), etc. The code that is sent to the cloud provider for execution is usually in the form of a function. AWS Lambda  It is an event-driven, serverless platform which is part of AWS. The various languages supported by AWS Lambda are Node.js, Python, Java, Go, Ruby and .NET. AWS Lambda was designed for use cases such as updates to DynamoDB tables, responding to a website click etc. After that it will “spin down” the database service, to save resources. Azure Functions  They are quite similar to AWS Lambda, but are for Microsoft Azure. Azure functions have a browser-based interface to write code to respond to events generated by http requests etc. The service accepts programming languages like C#, F#, Node.js, Python, PHP and Java. Serverless Framework  It is an open-source web-framework written using Node.js. The popular services like AWS Lambda, Azure functions and Google cloud functions are based on it. CI/CD A backend developer should know the popular CI/CD (Continuous Integration/Continuous deployment) tools. These tools help to automate the whole process of building, testing and deployment of applications. Github Actions   It is a freely available CI/CD pipeline, which directly integrates to your GitHub based project and can be used in variety of languages. Jenkins  Jenkins is the most popular CI/CD automation tool, which helps in building, testing and deployment of applications. Jenkins was written in Java and over the years has been built to support over 1400 plugins, which extend its functionalities. Circle CI  Circle CI is also a CI/CD automation tool, which is cloud based and so it is different from Jenkins. It is much easier to use than Jenkins, but has a smaller community and lower user base. SecuritySecurity is an important aspect of any application. Most applications containing user personal data, like email etc, are often targeted by hackers. OWASP   The Open Web Application Security Project (or OWASP), is a non-profit organization dedicated to web application security. They have free material available on their website, making it possible for anyone to improve their web application security. Protecting Services & databases against threats   Hackers target databases of popular web-apps on a regular basis to get sensitive information about their customers. This data is then sold to the highest bidder on the dark-net. When such public breaches are reported, then it's a reputation loss for the enterprise also. So, a lot of emphasis should be given to Authentication, Access, Backups, and Encryption while setting up a database. The databases should also be monitored for any suspicious activities. Besides this the API routes also need to be protected, so that the hacker cannot manipulate them. 
Career roles Most of the companies hire Frontend developers, Backend developers and DevOps engineers separately. This is because most of the enterprise projects are huge, in which roles and responsibilities are distributed. But there is a huge demand for Full Stack developers in the startup sector in US and India. These companies need specialists who can get the product out as soon as possible with agile and small teams. Top companies hiringAlmost every company on the planet is hiring web-developers or outsourcing the development work. Since the past decade, the demand for developers has risen exponentially. The top technology companies which hire full stack developers are Facebook, Amazon, Apple, Netflix, Google, Uber, Flipkart, Microsoft and more.  The sites of each of these companies are web-apps (excluding Apple and Microsoft), with complex frontend and backend systems. The frontend generally consists of React or Angular and the backend is a combination of various technologies. The DevOps part is also quite important in these web-apps as they handle millions of concurrent connections at once.Salaries  The salary of a beginner Frontend developer in India starts from Rs. 300,000($ 3980) per year in service-based companies to Rs. 12,00,000($ 15,971) per year in the top tech companies mentioned above. The salary of a Beginner Full-Stack developer in India starts at Rs. 4,50,000 ($ 5989) per year in service companies to Rs. 12,00,000($ 15,971) per year in top tech companies. The salary for an entry level Frontend developer in USA is $ 59,213 per year and for an entry level Full stack developer is $ 61,042 per year.Below are some sources for salaries. web-developerfull-stack-developerfront-end-vs-back-endTop regions where there is demand There are plenty of remote and freelancing opportunities in web-development across the world. The two countries with most developers and top tech companies are USA and India. Silicon Valley, which is the San Francisco Bay Area, in Northern California, USA is the hub of technology companies.  The top city in India to start a developer job is the Silicon Valley of India – Bengaluru. The number of jobs is more than all the other cities combined and it also has a very good startup ecosystem. Almost all the big technology companies mentioned earlier and top Indian service companies are located in the city. After Bengaluru, the city where the greatest number of technology jobs are based is Hyderabad, followed by Chennai and then Pune. Entry PointsThe demand for web-developers is high and anyone with a passion for creating apps can become a web-developer. An Engineering degree is not mandatory to land a job as a web developer.  The most in-demand skill today and for the next 5 years is React and its ecosystem. So, if you know HTML, CSS, JavaScript and React, it is impossible to not get a job. Career Pathway  Most people start as an intern Front-end developer or Intern Full-Stack developer and in many cases Intern Backend developer. Many companies directly hire junior Frontend/Backend/Full-stack developers.  After that, the next step is the role of Senior Frontend/Backend/Full-stack developers. Many Frontend and Backend developers become full stack developers at this level, by learning additional technologies. Senior resources in Frontend/Backend/Full-stack can then go on to assume Team Lead roles. These people manage small teams in addition to being individual contributors.  
After this a professional can become a Project manager, whose main responsibility is managing the team. Another role is that of Technical Project Manager, who manages the team and also has hands-on knowledge in Technology. The last role at this level is that of a Software Architect, who handles and designs big projects and has to look at every aspect of the technology to create the enterprise app. Generally Full-stack developers are preferred in this role, as they need to know all technologies. The highest career milestone is CTO or Chief Technology Officer, who handles all the technology teams and makes all technology decisions in a Technology company. Job SpecializationThere are some Full stack development specializations which I see nowadays in the industry. Full stack developers who work with React in the Frontend and Java in the Backend are in great demand. Similarly, developers who work with Angular in the Frontend and .NET in the backend are in great demand.How KnowledgeHut can helpAll these free resources are a great place to start your Frontend or Full-Stack journey. Beside these there are many other free resources on the internet, but they may not be organized and may not have a structured approach.  This is where KnowledgeHut can make a difference and serve as a one stop shop alternative with its comprehensive Instructor-led live classes. The courses are taught by Industry experts and are perfect for aspirants who wish to become Frontend or FullStack developers.Links for some of the popular courses & Bootcamps by KnowledgeHut are appended below-CSS3JavaScriptReactJSNodeJSDevopsFull-stack developer BootcampFront-end developer Bootcampback-end developer BootcampConclusion This completes our article on the Full stack developer journey by combining both the Frontend and backend roadmap. There are many people who become backend developers first by working on languages like Java and then go on to learn React to become full stack developers.  Again, many developers learn front-end development first with frameworks like React, and then become full stack developers by learning Node.JS. This path is easier for developers because both React and Node.JS use the same language which is JavaScript.We hope you have found this blog useful, and can now take the right path to become a full stack developer. Good luck on your learning journey!

How to do MongoDB Back Up, Restoration & Migration

Popular among both enterprises and startups, MongoDB is a database that is perfectly suited for web apps that need to scale up as the user base increases. MongoDB is different from traditional relational databases because it stores data as JSON-like documents instead of tables.

In this post, we will learn how to back up and restore a MongoDB database. Most software products have import and export features, which in database terms deal with human-readable formats. The backup and restore operations, on the other hand, use MongoDB-specific data that preserves all MongoDB attributes. So, when migrating a database, we should prefer backup and restore over import and export. Keep in mind, though, that the source and target systems need to be compatible: both should be Windows, or both should be Unix-like systems such as Ubuntu or macOS.

Prerequisites

We are using Windows 10 in this tutorial. Please make sure you have downloaded the MongoDB Community Server and installed it. It is a very easy setup and you will find a lot of good articles on the internet detailing it. Please ensure that you have added it to the environment variables on your PC.

Backup considerations

In a production environment, a backup acts as a snapshot of the database at a certain point in time. Large and complex databases do fail or can be hacked. If that happens, we can use the last backup file to restore the database to the point before it failed. These are some of the factors that should be taken into consideration when planning backups and recovery.

1. Recovery Point Objective: This defines how much data we are willing to lose between backups. Continuous backups are preferred for critical data like banking information, and backups should be taken several times a day. On the other hand, if the data doesn't change frequently, a backup every six months may be enough.

2. Recovery Time Objective: This tells how quickly the restoration can be done. During restoration the application will be down for some time; this downtime should be minimized, or else customers will be inconvenienced, which may result in loss of business or customer trust.

3. Database and Snapshot Isolation: This refers to the distance between the primary database server and the backup server. If they are close, for example in the same building, the recovery time is reduced. However, in the event of a physical disaster such as a fire, the backup is likely to be destroyed along with the primary database.

4. Restoration Process: We should always test our backups on test servers, to confirm that they will work if a restoration is ever required.

5. Available Storage: Database backups generally take a lot of space and, in most cases, will never be needed. So, we should try to minimize the space taken on disk, for example by archiving the backup into a compressed file.

6. Complexity of Deployment: The backup strategy should be easy to set up and should be automated, so that we don't have to remember to take backups at regular intervals.

Understanding the basics

The first thing to know is that MongoDB uses the JSON and BSON (binary JSON) formats for storing data. People coming from a JavaScript background can relate JSON documents to objects with key-value pairs. JSON is also the preferred format for sending data to or receiving data from an API endpoint. You can inspect the JSON data of a MongoDB database in any tool or online editor; even the popular Windows application Notepad++ has a JSON viewer.
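As an illustration, a document in a hypothetical products collection (the field names and values here are only examples) might look like this:

{
  "_id": { "$oid": "612f1f77bcf86cd799439011" },
  "name": "Wireless Mouse",
  "price": 25.99,
  "inStock": true,
  "tags": ["electronics", "accessories"]
}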
As we can see from the above example, JSON is very convenient to work with, especially for developers. But it doesn't support all the data types available in BSON, so for backup and restore we should use the binary BSON format.

The second thing to keep in mind is that MongoDB automatically creates databases and collections if they don't exist during restore operations.

Third, since MongoDB is a document-based database, in many use cases we store large amounts of data in one collection, such as the whole text of an article. MongoDB is also used extensively for large databases and big data, so reading and inserting the data can consume a lot of CPU, memory and disk space. We should therefore run backups during non-peak hours, such as at night.

As already mentioned, we could use the import and export functions for backup and restoration of MongoDB databases, but we should instead use the mongodump and mongorestore commands to back up and restore, respectively.

MongoDB backup

We will first cover backing up a MongoDB database, for which we use the mongodump command.

First, open the Windows command prompt and go to the location where MongoDB is installed. If you chose the default settings while installing MongoDB, it will be in a location like C:\Program Files\MongoDB\Server\4.4\bin (the version number may differ by the time you read this). Also note that it is better to run the command prompt in Admin mode. Once the command prompt is open, change the directory to the MongoDB bin folder with the command below.

cd C:\Program Files\MongoDB\Server\4.4\bin

Now enter mongod and press Enter; it will print some JSON-formatted log text as the server starts.

We can back up to any location. For this post I am backing up to a Backup folder on my Desktop, which I created through the command line.

Next, we have to run the mongodump command, which should also be present in the MongoDB bin folder. If it is not there, we need to download the MongoDB Database Tools separately, install them, and copy the tool executables into the MongoDB bin folder.

MongoDB backup with no options

Run the mongodump command from the bin directory. Since we are not giving any arguments, a backup of all databases will be created in the same bin directory.

MongoDB backup to an output directory

Now, run the mongodump command from the bin directory with the --out argument, which specifies the directory in which the backup will be written. In our case we are giving the Backup folder that we created earlier on the Desktop.

mongodump --out C:\Users\pc\Desktop\Backup

Go to the Desktop and you will find the backup that has been created in our Backup folder.

MongoDB backup of a specific database

MongoDB also allows us to back up a specific database with the --db option of mongodump. I have an example database, so to back up only that, I will use the command below.

mongodump --db example --out C:\Users\pc\Desktop\Backup

This time only the example database is backed up.
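Since backups can take a lot of disk space (see the storage consideration above), it may also be worth compressing them. As a hedged example, recent versions of mongodump can write a single compressed archive instead of a directory tree; the file path below is only an illustration:

mongodump --db example --gzip --archive=C:\Users\pc\Desktop\Backup\example.gz

A compressed archive like this can later be restored by passing the same --gzip and --archive options to mongorestore.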
MongoDB backup of a specific collection

If we want to back up only a specific collection, we need to use the --collection option and give the collection name. Note that the database name is mandatory in this case, as MongoDB needs to know which database to search for the collection in. I have a products collection within the example database, so to back up only that, I will use the command below.

mongodump --db example --out C:\Users\pc\Desktop\Backup --collection products

This time only the products collection from the example database is backed up.

MongoDB backup from remote MongoDB instances

We can take backups from remote MongoDB instances as well. I have a lot of MongoDB databases for my personal projects on MongoDB Atlas, the free-to-use cloud database service for MongoDB. To back up remote databases, we have to pass the connection string with the --uri parameter. I used the command below.

mongodump --uri "mongodb+srv://xxxx:xxxxxxxxxxx@cluster0.suvl2.mongodb.net/xxxxxDB?retryWrites=true&w=majority" --out C:\Users\pc\Desktop\Backup

This creates a backup of the remote instance in the Backup folder.

MongoDB backup procedures

We should try to make the backup procedure as automated as possible. One of the best ways is to use a cron job, so that it runs every day. As discussed earlier, it is best to run the backup at night, when the database has the least load. Setting up a cron job is easier on Linux or a Mac, because the Windows equivalent is not as convenient. Alternatively, you can install MongoDB in WSL2 on Windows, which supports Ubuntu.

Suppose that, on a Linux host running a MongoDB instance, you want to run the backup at 04:04 am daily. Open the cron editor by running the command below in the terminal.

sudo crontab -e

Then, in the cron editor, add a line like the following for our case.

4 4 * * * mongodump --out /var/backups/mongobackups/`date +"%m-%d-%y"`

Restoring and migrating a MongoDB database

When we restore a MongoDB database from a backup, we get an exact copy of the MongoDB data, including the indexes. We restore MongoDB using the mongorestore command, which works only with the binary backup produced by mongodump. We have already taken a backup of the example database, and it is in our Backup folder. We will use the command below to restore it. In the arguments, we first specify the name of the database with the --db option; with --drop we make sure that the existing example database is dropped first; and in the final argument we specify the path of our backup.

mongorestore --db example --drop C:\Users\pc\Desktop\Backup\example

If we now check in the terminal, our example database has been restored properly.

Conclusion

In this article, we have learned about MongoDB backup and restore. We have covered the different options for creating backups, and why and when backups are required. Keep learning!

What Is the Use of DNS Module in Node.Js?

Node.js gives us the provision of using different modules. In this article, we will look into the use of the DNS module in Node.

What is DNS and its importance?
DNS stands for Domain Name System. The main function of DNS is to translate human-readable domain names into IP addresses, the numerical labels assigned to computers. IP addresses can be thought of as the names of computers on a network and are used to distinguish different devices and their locations. For example, 8.8.8.8 is the well-known address of Google's public DNS service. So, DNS can be considered the phonebook of the Internet. When we type an address like www.example.com in our browser, that request is sent to a name server, which converts it to an IP address (like 12.34.56.78). The request is then sent to the respective server for further processing.

Syntax
The syntax for including the DNS module in our Node application is:

const dns = require('dns')

DNS methods and their descriptions
We will look into a real example and some important DNS methods. Let us set up a basic Node application by giving the command npm init -y in a terminal, inside a folder. I created an empty NodeJS folder for the same.

$ npm init -y
Wrote to D:\NodeJS\package.json:
{
  "name": "NodeJS",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

The above command creates a basic package.json file, which is the basis of any Node.js project. We are using the -y option so that we don't have to enter the details manually. Next, open the folder in a code editor, which is VSCode in my case. Here, I have created a file dns.js, and the first line contains the code for importing the dns module.

1. lookup()
Next, we will call the dns lookup function, which takes two arguments. The first is the domain we want to look up, which can be anything and is knowledgehut.com in our case. The second is the callback, or the function that we want to run once the lookup is complete. The function that runs on completion takes two arguments: the first contains an error, if one occurs, and the second is the value, i.e. the IP address of the domain. So, inside our function, if we have an error we print it to the console and return, which means no further code will run. If we don't have an error, we print the value. Add the below code in a dns.js file.

const dns = require('dns');
dns.lookup('knowledgehut.com', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

To run this, I open the Integrated Terminal, which comes with VSCode, by pressing Ctrl+J on Windows or Cmd+J on Mac. Here, give the command node dns to run our file dns.js. The output is below.

54.147.15.161

When we run this program, we do not get any error and we get the IP address of the domain name.

2. resolve()
The function resolve() is pretty much identical to the lookup() function. Our code remains the same and we have only changed lookup to resolve.
Add the below code in a dns.js file.

const dns = require('dns');
dns.resolve('knowledgehut.com', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

We can get the output by running the node dns command from the terminal.

[ '34.236.195.104',
  '50.16.1.247',
  '54.147.15.161',
  '3.223.64.88' ]

As we can see from the output, we got all the IP addresses associated with this domain. The resolve function actually makes a network request to the DNS system, to see how many IP addresses are registered with that domain name. The lookup function just uses the computer's internal mechanism first, to see if there is an IP address it can return without having to make a network request. So, the resolve function is more accurate and should be used in production, as it gives all the IP addresses associated with the domain.

You can also provide another argument to the resolve function to specify what type of record you want to look up. For example, with the DNS system you can find the Mail exchange record of the domain, which specifies which server should handle mail sent to that domain. So, in our code we will add MX as the second argument. Add the below code in a dns.js file.

const dns = require('dns');
dns.resolve('knowledgehut.com', 'MX', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

On running the node dns command from the Integrated Terminal again, we get the Mail exchange information of that domain in an array.

[ { exchange: 'mail.knowledgehut.com', priority: 0 } ]

3. reverse()
Now, we will look into the reverse function. It works in the same way as lookup() and resolve(), but instead of supplying a domain name, we supply an IP address. This function goes to the DNS system to find out if there are any reverse records associated with this IP address. We are using 8.8.8.8, the publicly available address of Google's DNS service. Add the below code in a dns.js file.

const dns = require('dns');
dns.reverse('8.8.8.8', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

On running node dns again, we get the reverse record within an array.

[ 'dns.google' ]

4. lookupService()
This can be used to get the information of a host, which includes the hostname and the service. We need to provide a valid IP address and a valid port as arguments. It uses the operating system's getnameinfo to get this data. If the IP address or the port is not valid, a TypeError will be thrown. In our example, we are providing a known IP address along with the port 587, which is commonly used for SMTP mail submission. Then we console log the host and service. Add the below code in a dns.js file.

const dns = require('dns');
dns.lookupService('34.236.195.104', 587, (err, host, service) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(host,'\n', service);
})

It is shown in the console on running node dns in the Integrated Terminal.

ec2-34-236-195-104.compute-1.amazonaws.com
 587

5. resolve4()
The resolve4() method is very similar to the resolve() method. It also returns an array of IP addresses, but only the IPv4 addresses and not the newer IPv6 addresses. Most websites still have IPv4 addresses, so this function will give a valid output.
Add the below code in a dns.js file.

const dns = require('dns');
dns.resolve4('knowledgehut.com', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

It is shown in the console on running node dns in the Integrated Terminal.

[ '50.16.1.247',
  '54.147.15.161',
  '34.236.195.104',
  '3.223.64.88' ]

6. resolve6()
An IPv4 address is 32 bits long, which allows only around 4 billion addresses, and that pool has essentially been used up. So IPv6, developed in the 1990s, was introduced, and since then many websites also have a newer IPv6 address. The resolve6() method works internally like the resolve() method, but it only returns an array of IPv6 addresses. Add the below code in a dns.js file.

const dns = require('dns');
dns.resolve6('nodejs.org', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

It is shown in the console on running node dns in the Integrated Terminal.

[ '2606:4700:8d75:a0e6:9d7:10c:f52a:f808' ]

7. resolveMx()
The resolveMx() method is used to get the Mail exchange records for a hostname. The Mail exchange records are also known as MX records. We need to pass the hostname as the argument, and we receive the details in an array if the request was successful. Add the below code in a dns.js file.

const dns = require('dns');
dns.resolveMx('nodejs.org', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

It is shown in the console on running node dns in the Integrated Terminal.

[ { exchange: 'aspmx.l.google.com', priority: 10 },
  { exchange: 'alt1.aspmx.l.google.com', priority: 20 },
  { exchange: 'alt2.aspmx.l.google.com', priority: 20 },
  { exchange: 'aspmx2.googlemail.com', priority: 30 },
  { exchange: 'aspmx3.googlemail.com', priority: 30 } ]

8. resolveNs()
The resolveNs() method is used to get the Name Server (NS record) information of a hostname. The hostname is passed as the argument and we receive the information back in an array. Add the below code in a dns.js file.

const dns = require('dns');
dns.resolveNs('nodejs.org', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

It is shown in the console on running node dns in the Integrated Terminal.

[ 'pablo.ns.cloudflare.com', 'meera.ns.cloudflare.com' ]

9. resolveSoa()
The resolveSoa() method is used to get the Start of Authority record (SOA record) for a given hostname. The SOA record contains a lot of important information about the hostname, like the name server, host master and expiry time. The hostname is passed as the argument, and we receive all the information in an object. Add the below code in a dns.js file.

const dns = require('dns');
dns.resolveSoa('nodejs.org', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

It is shown in the console on running node dns in the Integrated Terminal.

{ nsname: 'meera.ns.cloudflare.com',
  hostmaster: 'dns.cloudflare.com',
  serial: 2035938779,
  refresh: 10000,
  retry: 2400,
  expire: 604800,
  minttl: 3600 }

10. resolveTxt()
The resolveTxt() method is used to get the text records (TXT records) for a given hostname. TXT records were originally intended for domain administrators to put human-readable notes in DNS, but nowadays they are also used for purposes like preventing email spam.
In the resolveTxt() method the hostname is passed as the argument, and we receive the output as a two-dimensional array of the text records available for that hostname. Add the below code in a dns.js file.

const dns = require('dns');
dns.resolveTxt('nodejs.org', (err, value) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(value);
})

The output is shown in the console on running node dns in the Integrated Terminal.

[ [ 'v=spf1 include:aspmx.googlemail.com -all' ] ]

11. resolvePtr()
The resolvePtr() method is used to get the pointer records (PTR records) for a given hostname. A PTR record maps an IP address to a domain and is also called a "reverse DNS entry". It is used to convert an IP address to a domain name, mainly as a security and anti-spam measure.

12. resolveNaptr()
The resolveNaptr() method is used to get the Naming Authority Pointer records (NAPTR records) for a given hostname. NAPTR records are a newer type of DNS record that can contain regular expressions. They are mostly used in applications that support Internet telephony. The resolveNaptr() method is useful for finding out whether a domain supports SIP or some other VoIP (Voice over IP) service.

13. resolveSrv()
The resolveSrv() method is used to get the service records (SRV records) for a given hostname. Service records specify the host and port for specific services on a server, like voice over IP (VoIP), instant messaging and other services.

14. setServers()
The setServers() method is a very important method, which is used to set the IP address and port of the servers to be used when performing DNS resolution. The argument to this method is an array of formatted addresses. An example is below.

dns.setServers([
   '4.4.4.4',
   '[2001:4860:4860::8888]',
   '4.4.4.4:1053',
   '[2001:4860:4860::8888]:1053'
]);

15. getServers()
The getServers() method is used to get the IP addresses of the servers currently being used for DNS resolution. It returns them as an array of strings. (A short usage sketch for resolveSrv(), resolvePtr() and getServers() is included after the Summary at the end of this article.)

DNS promises API
The dns.promises API returns Promise objects instead of the callbacks we have seen earlier. So it is more modern, as most of the JavaScript community is moving towards promises instead of callbacks. We access the promises API by using require('dns').promises. Almost all the methods that are in dns are also available in the DNS promises API. The complete list is below.

resolver.getServers()
resolver.resolve()
resolver.resolve4()
resolver.resolve6()
resolver.resolveAny()
resolver.resolveCaa()
resolver.resolveCname()
resolver.resolveMx()
resolver.resolveNaptr()
resolver.resolveNs()
resolver.resolvePtr()
resolver.resolveSoa()
resolver.resolveSrv()
resolver.resolveTxt()
resolver.reverse()
resolver.setServers()

We will look into some of these, along with examples and syntax.

16. resolver.resolve4()
This method takes the hostname as the argument. On success the Promise is resolved with an array of IPv4 addresses. In the below example, we are using a different import than in our previous section. Since resolver.resolve4() returns a promise, we can use the modern 'then and catch' syntax: the .then block is executed if the Promise resolves successfully and the .catch block is executed if the Promise fails. Add the below code in a dns.js file.

const { Resolver } = require('dns').promises;
const resolver = new Resolver();
resolver.resolve4('geeksforgeeks.org')
    .then(addresses => console.log(addresses))
    .catch(err => console.log(err))

The output is shown in the console on running node dns in the Integrated Terminal.
In the case of success, we get an array with IPv4 addresses, as in our case.

[ '34.218.62.116' ]

17. resolver.resolveMx()
This method takes the hostname as the argument. On success the Promise is resolved with an array of Mail exchange (MX) records. In the below example, we are using the newer async-await format for the Promise. Add the below code in a dns.js file.

const { Resolver } = require('dns').promises;
const resolver = new Resolver();
(async function() {
    const addresses = await resolver.resolveMx('nodejs.org');
    console.log(addresses)
})()

The output is shown in the console on running node dns in the Integrated Terminal.

[ { exchange: 'alt1.aspmx.l.google.com', priority: 20 },
  { exchange: 'alt2.aspmx.l.google.com', priority: 20 },
  { exchange: 'aspmx2.googlemail.com', priority: 30 },
  { exchange: 'aspmx3.googlemail.com', priority: 30 },
  { exchange: 'aspmx.l.google.com', priority: 10 } ]

18. resolver.getServers()
The resolver.getServers() method returns an array of IP addresses of the DNS servers the resolver is currently using. We can use it as below, where we first get the IPv6 addresses of nodejs.org with the resolve6() method, and then, inside the .then block, call getServers() to print the resolver's server addresses. Add the below code in a dns.js file.

const { Resolver } = require('dns').promises;
const resolver = new Resolver();
resolver.resolve6('nodejs.org')
    .then(addresses => {
        console.log('IPv6 is ', addresses);
        console.log('Server address is ', resolver.getServers());
    })
    .catch(err => console.log(err))

The output is shown in the console on running node dns in the Integrated Terminal.

IPv6 is  [ '2606:4700:8d75:a0e6:9ca:10c:f52a:f808' ]
Server address is  [ '2405:201:3001:a3a::c0a8:1d01', '192.168.29.1' ]

Error Codes
A lot of errors can be thrown when we use any of the dns or dns promises methods. The errors we can get are listed below.

dns.NODATA: DNS server returned an answer with no data.
dns.FORMERR: DNS server claims the query was misformatted.
dns.SERVFAIL: DNS server returned a general failure.
dns.NOTFOUND: Domain name was not found.
dns.NOTIMP: DNS server does not implement the requested operation.
dns.REFUSED: DNS server refused the query.
dns.BADQUERY: Misformatted DNS query.
dns.BADNAME: Misformatted host name.
dns.BADFAMILY: Unsupported address family.
dns.BADRESP: Misformatted DNS reply.
dns.CONNREFUSED: Could not contact DNS servers.
dns.TIMEOUT: Timeout while contacting DNS servers.
dns.EOF: End of file.
dns.FILE: Error reading file.
dns.NOMEM: Out of memory.
dns.DESTRUCTION: Channel is being destroyed.
dns.BADSTR: Misformatted string.
dns.BADFLAGS: Illegal flags specified.
dns.NONAME: Given host name is not numeric.
dns.BADHINTS: Illegal hints flags specified.
dns.NOTINITIALIZED: c-ares library initialization not yet performed.
dns.LOADIPHLPAPI: Error loading iphlpapi.dll.
dns.ADDRGETNETWORKPARAMS: Could not find the GetNetworkParams function.
dns.CANCELLED: DNS query cancelled.

We will now see an example of a DNS error. In the below example of the resolver.resolve6() method, we have given a domain name which doesn't exist. Add the below code in a dns.js file.

const { Resolver } = require('dns').promises;
const resolver = new Resolver();
resolver.resolve6('abc.tech')
    .then(addresses => console.log(addresses))
    .catch(err => console.log(err))

So, we get the NOTFOUND error when we run node dns from the terminal.
{ Error: queryAaaa ENOTFOUND abc.tech
     at QueryReqWrap.onresolve [as oncomplete] (internal/dns/promises.js:163:17)
   errno: 'ENOTFOUND',
   code: 'ENOTFOUND',
   syscall: 'queryAaaa',
   hostname: 'abc.tech' }

Implementation considerations
There is a difference in the way dns.lookup() runs compared to the other dns methods like dns.resolve() and dns.reverse(). dns.lookup() resolves a given name the same way the ping command does: it is implemented as a call to the operating system's getaddrinfo() function, so it can often answer without making a DNS query on the network. The functions dns.resolve() and dns.reverse() are implemented quite differently; they don't use the getaddrinfo() function and will always perform a DNS query on the network, so the result is more accurate and up to date. These differences can have significant consequences for a Node.js program and should be considered.

Summary
In this post, we have learnt about the various DNS methods available in Node.js. We can use these methods to get a lot of information about any host. Many of these methods require network access to the relevant DNS servers, but they can also be used from internal Node.js code. Knowledge of these methods, along with networking concepts, is important for Node.js application development.
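The methods in points 11 to 15 above were described without code, so here is a minimal sketch of how resolveSrv(), resolvePtr() and getServers() can be called. This is an illustration rather than part of the original walkthrough: the SRV name used is a commonly cited example, and the records behind it (or the PTR data for 8.8.8.8) may change over time.

// dns-extra.js - usage sketch for the methods described above without examples
const dns = require('dns');

// getServers() is synchronous and simply reports the resolver addresses
// currently configured for this process.
console.log('Configured DNS servers:', dns.getServers());

// resolveSrv() expects a service-style name; _xmpp-server._tcp.gmail.com is an
// often-quoted example of a name that publishes SRV records.
dns.resolveSrv('_xmpp-server._tcp.gmail.com', (err, records) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(records);   // array of { name, port, priority, weight } objects
});

// resolvePtr() queries PTR records directly; reverse-lookup names follow the
// in-addr.arpa convention, so 8.8.8.8 becomes 8.8.8.8.in-addr.arpa.
dns.resolvePtr('8.8.8.8.in-addr.arpa', (err, hostnames) => {
    if(err) {
        console.log(err);
        return;
    }
    console.log(hostnames);   // e.g. [ 'dns.google' ]
});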

What Is the Relationship Between Node.Js and V8?

In this article, we will look into Node.js and V8. Node.js is a very important part of the JavaScript ecosystem, as it is used in the backend to produce a complete application. It is often considered a part of the popular MERN (MongoDB, ExpressJS, ReactJS and Node.js) and MEAN (MongoDB, ExpressJS, Angular and Node.js) stacks. The V8 engine is what powers Node.js; it is an open-source engine on which Chrome also runs, and it parses and runs your JavaScript inside a Node environment.

Overview of Node.js
Node.js was created by Ryan Dahl in 2009 and has since become a very popular backend technology. Until then the backend was dominated by languages like PHP, ASP.NET and Java. Node.js became popular because it enables a frontend developer with JavaScript skills to easily create full stack apps. The formal definition on the official Node.js website describes Node.js as "a JavaScript runtime built on Chrome's V8 JavaScript engine." Node.js came into existence when its creator Ryan Dahl, understanding the power of the V8 engine that powered the Chrome browser, extended it so that it could run on your machine as a standalone application.

Another part of the definition on the official Node.js website says: Node.js uses an event driven, non-blocking I/O model that makes it lightweight and efficient. I/O refers to input/output, and this is where the additional functionality of Node.js comes into play. We can read and edit local files in Node.js and also make HTTP requests to an API. Earlier backend systems like PHP and ASP used to block the program until a network request was complete. Node.js changed this completely: it sends the request and then moves on to the next line of code. So it is non-blocking and faster than the earlier backend technologies. But it is single-threaded, and that is where it has some limitations, whereas Java shines because it is multi-threaded.

Yet another part of the official definition on the Node.js website says: Node.js' package ecosystem, npm, is the largest ecosystem of open-source libraries in the world. Over the past decade, an amazing community of open-source enthusiasts has created more than 1 million npm packages, which enhance the capabilities of Node.js. Node.js is completely open source and anyone can use it, as it has an MIT licence, for developing server-side and networking applications. It can run on all three major operating systems, i.e. macOS, Windows, and Linux.

Overview of the V8 JavaScript engine
V8 is Google's open-source JavaScript engine, which is written in C++. It was developed in 2008 for Google Chrome and Chromium-based browsers (like Brave), but was later used to build Node.js for server-side coding. In fact, the V8 engine is also used by JSON-based NoSQL databases like Couchbase and the popular MongoDB. Besides this, V8 powers the popular desktop application framework Electron and the newer server-side runtime Deno. V8 is called a JavaScript engine because it takes our JavaScript and executes it while browsing in Chrome; it provides the runtime environment in which the JavaScript executes. The great thing is that the JavaScript engine is independent of the browser in which it runs. This is the feature that prompted the creator of Node.js to choose V8 to power Node.js, and the rest is history. The popularity of Node.js exploded, and the V8 engine was also used to create desktop frameworks and databases. There are other JavaScript engines, like SpiderMonkey used by Firefox and JavaScriptCore used by Safari.
Microsoft's Edge was originally based on the Chakra JavaScript engine, but it has recently been rebuilt with Chromium and the V8 engine.

How the V8 Engine works
A JavaScript engine is an interpreter which executes JavaScript code. A JavaScript engine can be built in two ways: the first way is to implement it as a standard interpreter, which is what SpiderMonkey from Mozilla does. The other way is Just-in-time (JIT) compilation, which converts JavaScript code to native machine code, and that is the approach V8 takes. So, the difference between V8 and the others is that V8 compiles the code down to machine code rather than only interpreting it. When a developer or program runs JavaScript on V8 (i.e. in a browser or Node environment), the Ignition interpreter compiles the JavaScript code and generates non-optimized code. At runtime, this code is analyzed and re-compiled for the best performance by the Turbofan and Crankshaft components of V8.

The V8 engine also uses some other components along with the ones we have seen above: Liftoff and Orinoco. Liftoff is responsible for fast machine code generation for WebAssembly; it generates code for each opcode and compiles much faster than Turbofan. Orinoco is responsible for garbage collection. It looks for disconnected memory allocations and performs operations to free up more space; it also updates pointers to the new memory locations.

V8 also uses a number of different threads: the primary thread fetches and compiles the JavaScript code; another thread is used to optimize the running code while the primary thread continues its execution; yet another thread is used for profiling, which tells the runtime which methods need to be optimized; and some threads also do garbage collection.

The Just-in-Time Paradigm
We will learn a bit more about Just-in-Time (JIT) compilation in V8. For code to execute in any programming language, it must be converted into machine code, which the computer understands, and there are different paradigms for this transformation. Most of the traditional languages created before JavaScript, like C++ and Java, perform something called Ahead-of-Time compilation: the code is transformed into machine code before the execution of our program, at compile time. Anyone who has worked with Java or C++ knows that we run commands like the below to compile a Java or C++ program.

javac MyJavaProgram.java
g++ -o mycppprogram mycppprogram.cpp

This converts the code into machine code, after which we can run our program with commands like the below.

java MyJavaProgram
./mycppprogram

On the other hand, in languages like JavaScript and Python, each line of code is executed at runtime. This is done because it is impossible to know the exact code before execution. In a browser, you never compile the code first and then run it; it is done automatically behind the scenes. Ahead-of-Time compilation produces more optimized and faster code because the compilation is done beforehand, which is why interpreted languages like JavaScript are slower. To overcome this problem in dynamic languages, the approach of Just-in-Time (JIT) compilation was created, which combines the best of both interpretation and compilation.
So, an interpretation step runs before the compilation step, in which the V8 engine detects the more frequently used functions and code and compiles them using information from previous executions. During compile time, this code is re-compiled for optimal performance.

What is the relationship between Node and V8?
Node.js is referred to as a runtime environment, which includes everything you need to run a program written in JavaScript. The core powering Node.js is the V8 engine, comparable to the way the Java Virtual Machine (JVM) powers the Java Runtime Environment. Besides the V8 engine, the Node.js runtime environment adds many Node APIs to power the Node.js environment, and we can extend the functionality of our Node code further by installing additional npm packages. (A quick way to see which V8 version your Node.js installation ships with is shown after the Summary below.)

One thing to understand is that V8 is essentially an independent C++ library that is used by Node or Chromium to run JavaScript code. V8 exposes an API that other code can use, so if you have your own C++ program, you can embed V8 in it and run a JavaScript program. That is how it is done by Node and Chrome. Suppose we want our JavaScript code to support statements like print('hello world') in addition to console.log('Hello World'): we can add our own implementation of the print function in C++ inside V8, which is open source anyway.

Can Node.js work without V8?
The current Node.js engine cannot work without V8. It would have no JavaScript engine and hence no ability to run any JavaScript code. The fact is that the native code bindings which come with Node.js, like the fs module and the Net module, rely on the V8 interface between C++ and JavaScript. Although in the tech world everything is possible, and in July 2016 Microsoft made an effort to use the Chakra JavaScript engine (which was used in the Edge browser at that time) in Node.js and replace the V8 engine, that project never took off, and Microsoft Edge itself recently moved to Chromium, which uses the V8 JavaScript engine. The new kid on the block for server-side programming is Deno. Many consider that it could be a replacement for Node.js in the next 2-3 years, and it also uses the V8 JavaScript engine under its hood.

Summary
We got an overview of the Node.js runtime environment and the V8 JavaScript engine in this post. Then we went through the working of the V8 engine and investigated the details of the Just-in-Time compilation used by the V8 JavaScript engine. We also looked at the relationship between Node.js and the V8 engine and how the V8 engine is independent of Node.js. Lastly, we learnt that it is not possible for Node.js to run without a JavaScript engine like V8. That engine could, however, be replaced by another JavaScript engine like Chakra from Microsoft; even though this is highly improbable, it is still possible.
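As promised above, here is a tiny sketch that shows the Node.js and V8 pairing on your own machine. process.versions is a standard Node.js API; the exact version strings printed will of course differ from installation to installation.

// versions.js - print the Node.js version and the V8 version bundled with it
console.log('Node.js version:', process.versions.node);
console.log('V8 version:', process.versions.v8);

Running node versions.js from a terminal prints the two versions side by side, which makes it clear that every Node.js release ships with a specific V8 build.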

What is query string in Node.js?

In this article, we will look into the query string module in Node.js and understand a few methods to deal with query strings. The available methods can be used to convert a query string into an object, and an object into a query string.

What is a Query String
A query string, according to Wikipedia, is a part of the uniform resource locator (URL) that assigns values to specified parameters. In plain English, it is the string after the ? in a URL. Some URL examples are shown below.

https://example.com/over/there?name=ferret
https://example.com/path/to/page?name=ferret&color=purple

The query string in the first case is name=ferret and in the second case is name=ferret&color=purple.

Node.js Query string module
The Node.js query string module provides methods for parsing and formatting URL query strings. The query string module can be accessed as below.

const querystring = require('querystring')

We will look into the below six methods in the next section.

querystring.decode()
querystring.encode()
querystring.escape(str)
querystring.parse(str[, sep[, eq[, options]]])
querystring.stringify(obj[, sep[, eq[, options]]])
querystring.unescape(str)

Query String methods with descriptions
Let us look into a real example to understand the important query string methods. Let us set up a basic Node application by giving the command npm init -y in a terminal, inside a folder. I created an empty NodeJS folder for the same.

$ npm init -y
Wrote to D:\NodeJS\package.json:
{
  "name": "NodeJS",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

The above command creates a basic package.json file, which is the basis of any Node.js project. We are using the -y option so that we don't have to enter the details manually. Next, open the folder in a code editor, which is VSCode in my case. Here, I have created a file querystring.js, and the first line contains the import of the querystring module.

querystring.parse() Method
The querystring.parse() method is used to parse a URL query string into an object that contains the key-value pairs. The object we get does not inherit from the JavaScript Object prototype, so we cannot use Object methods like obj.toString() or obj.hasOwnProperty(). UTF-8 encoding is assumed unless we specify a different encoding format, and we should stick to UTF-8 as it is the standard and contains all international characters, like the Chinese and Hindi characters. If we still need an alternative character encoding, the decodeURIComponent option should be used.

The syntax for the method is below.

querystring.parse(str[, sep[, eq[, options]]])

As seen from the above syntax, the method accepts four parameters, which are described below.

str: This is the only required field; it is a string that specifies the query string to be parsed.
sep: An optional string field which, if given, specifies the substring used to delimit the key and value pairs in the query string. The default value is "&".
eq: An optional string field that specifies the substring used to delimit keys and values in the query string. The default value is "=".
options: An optional object field which is used to modify the behaviour of the method. It can have the following parameters:
decodeURIComponent: The function used to decode percent-encoded characters in the query string.
The default value is querystring.unescape(), about which we will learn later.
maxKeys: A number which specifies the maximum number of keys that should be parsed. A value of 0 removes the counting limit, so any number of keys can be parsed. The default value is 1000.

The below example shows the various options used in the querystring.parse() method. Add the below code in the querystring.js file, which we created earlier.

// Import the querystring module
const querystring = require("querystring");

// Specify the URL query string to be parsed
let urlQueryString = "name=nabendu&units=kgs&units=pounds&login=false";

// Use the parse() method on the string
let parsedObj = querystring.parse(urlQueryString);
console.log("Parsed Query 1:", parsedObj);

// Use the parse() method on the string with sep as `&&` and eq as `-`
urlQueryString = "name-nabendu&&units-kgs&&units-pounds&&login-true";
parsedObj = querystring.parse(urlQueryString, "&&", "-");
console.log("\nParsed Query 2:", parsedObj);

// Specify a new URL query string to be parsed
urlQueryString = "type=admin&articles=java&articles=javascript&articles=kotlin&access=true";

// Use the parse() method on the string with maxKeys set to 1
parsedObj = querystring.parse(urlQueryString, "&", "=", { maxKeys: 1 });
console.log("\nParsed Query 3:", parsedObj);

// Use the parse() method on the string with maxKeys set to 2
parsedObj = querystring.parse(urlQueryString, "&", "=", { maxKeys: 2 });
console.log("\nParsed Query 4:", parsedObj);

// Use the parse() method on the string with maxKeys set to 0 (no limits)
parsedObj = querystring.parse(urlQueryString, "&", "=", { maxKeys: 0 });
console.log("\nParsed Query 5:", parsedObj);

Now, run the command node querystring.js from the Integrated Terminal in VSCode or any terminal. Note that you need to be inside the NodeJS folder, which we created earlier. The output will be as below.

Parsed Query 1: [Object: null prototype] { name: 'nabendu', units: [ 'kgs', 'pounds' ], login: 'false' }

Parsed Query 2: [Object: null prototype] { name: 'nabendu', units: [ 'kgs', 'pounds' ], login: 'true' }

Parsed Query 3: [Object: null prototype] { type: 'admin' }

Parsed Query 4: [Object: null prototype] { type: 'admin', articles: 'java' }

Parsed Query 5: [Object: null prototype] {
  type: 'admin',
  articles: [ 'java', 'javascript', 'kotlin' ],
  access: 'true' }

querystring.stringify() Method
The querystring.stringify() method is used to produce a query string from a given object that contains key-value pairs. It is exactly the opposite of the querystring.parse() method. It can serialize string, number and Boolean values, and you can also use arrays of strings, numbers or Booleans as values. This process of changing an object into a query string is called serialization. UTF-8 encoding is assumed unless we specify a different encoding format, and we should stick to UTF-8 as it is the standard and contains all international characters, like the Chinese and Hindi characters. If we still need an alternative character encoding, the encodeURIComponent option should be used.

The syntax for the method is below.
querystring.stringify(obj[, sep[, eq[, options]]])

As seen from the above syntax, the method accepts four parameters, which are described below.

obj: This is the only required field; it specifies the object that has to be serialized.
sep: An optional string field which, if given, specifies the substring used to delimit the key and value pairs in the query string. The default value is "&".
eq: An optional string field that specifies the substring used to delimit keys and values in the query string. The default value is "=".
options: An optional object field which is used to modify the behaviour of the method. It can have the following parameters:
encodeURIComponent: The function used to convert URL-unsafe characters to percent-encoding in the query string. The default value is querystring.escape(), about which we will learn later.

The below example shows the various options used in the querystring.stringify() method. Add the below code in the querystring.js file, which we created earlier.

// Import the querystring module
const querystring = require("querystring");

// Specify the object that needs to be serialized
let obj = {
  name: "nabendu",
  access: true,
  role: ["developer", "architect", "manager"],
};

// Use the stringify() method on the object
let queryString = querystring.stringify(obj);
console.log("Query String 1:", queryString);

obj = {
    name: "Parag",
    access: false,
    role: ["editor", "HR"],
};

// Use the stringify() method on the object with sep as `, ` and eq as `:`
queryString = querystring.stringify(obj, ", ", ":");
console.log("Query String 2:", queryString);

// Use the stringify() method on the object with sep as `&&&` and eq as `==`
queryString = querystring.stringify(obj, "&&&", "==");
console.log("\nQuery String 3:", queryString);

Now, run the command node querystring.js from the Integrated Terminal in VSCode or any terminal. Note that you need to be inside the NodeJS folder, which we created earlier. The output will be as below.

Query String 1: name=nabendu&access=true&role=developer&role=architect&role=manager
Query String 2: name:Parag, access:false, role:editor, role:HR
Query String 3: name==Parag&&&access==false&&&role==editor&&&role==HR

querystring.decode() Method
The querystring.decode() method is nothing but an alias for the querystring.parse() method. We can use it in our parse example. So, add the below code in the querystring.js file, which we created earlier.

// Import the querystring module
const querystring = require("querystring");

// Specify the URL query string to be parsed
let urlQueryString = "name=nabendu&units=kgs&units=pounds&login=false";

// Use the decode() method on the string
let parsedObj = querystring.decode(urlQueryString);
console.log("Parsed Query 1:", parsedObj);

As earlier, run the command node querystring.js from a terminal. The output will be the same as with the querystring.parse() method.

Parsed Query 1: [Object: null prototype] { name: 'nabendu', units: [ 'kgs', 'pounds' ], login: 'false' }

querystring.encode() Method
The querystring.encode() method is nothing but an alias for the querystring.stringify() method. We can use it in our stringify example.
So, add the below code in the querystring.js file, which we created earlier.

// Import the querystring module
const querystring = require("querystring");

// Specify the object that needs to be serialized
let obj = {
  name: "nabendu",
  access: true,
  role: ["developer", "architect", "manager"],
};

// Use the encode() method on the object
let queryString = querystring.encode(obj);
console.log("Query String 1:", queryString);

As earlier, run the command node querystring.js from a terminal. The output will be the same as with the querystring.stringify() method.

Query String 1: name=nabendu&access=true&role=developer&role=architect&role=manager

querystring.escape(str) Method
The querystring.escape() method is used by the querystring.stringify() method and is generally not used directly.

querystring.unescape(str) Method
The querystring.unescape() method is used by the querystring.parse() method and is generally not used directly.

Summary
In this article we learnt about the useful query string module in Node.js, which is mainly used to parse URL query strings into object form and to change an object into a URL query string.
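To tie the parse and stringify methods together, here is a small round-trip sketch. It is an illustration only: the URL and the extra page parameter are made up, not taken from the examples above.

// roundtrip.js - parse the query string out of a URL, tweak it, serialize it back
const querystring = require('querystring');

const url = 'https://example.com/path/to/page?name=ferret&color=purple';

// Take the part after the '?' and parse it into an object of key-value pairs.
const parsed = querystring.parse(url.split('?')[1]);
console.log(parsed);   // [Object: null prototype] { name: 'ferret', color: 'purple' }

// Copy the parsed pairs, change one value, add a new key, and serialize back.
const updated = querystring.stringify({ ...parsed, color: 'black', page: 2 });
console.log(updated);  // name=ferret&color=black&page=2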