Computer Science Interview Questions and Answers 2024

Here is a collection of the best Computer Science interview questions and answers, curated by experts in the field, to help you succeed in computer science job interviews. With these drafted answers, you can confidently face questions for positions such as Software Developer L2, L3, and L4, Full Stack Developer, Application Engineer, Support Engineer, and more, and make a good impression at your next interview with companies like Cognizant, Accenture, Wipro, HCL, Infosys, JP Morgan Chase, and others. Nearly all service-based and product-based companies ask such questions during interviews for roles with packages of up to 45-60 lakh. So here are the top 60 Computer Science Interview Questions and Answers, 2024 edition. In this article, you will learn about Computer Science engineering interview questions for freshers, intermediate professionals, and experienced professionals.



An operating system is a piece of software that controls all the hardware and software resources. It acts as an interface between software and hardware. It serves as a resource manager and hides the hardware's underlying complexity to give users a convenient and effective environment in which they can run their applications smoothly.  

Let’s see the various types of operating systems:

Types of operating systems:

  • Batch Operating System: These systems do not interact with the user directly. An operator bundles together similar jobs that have the same needs into batches.  
  • Time-sharing operating System: This kind of operating system enables several users to share computer resources. (Maximum use of the available resources.)  
  • Distributed operating systems: These operating systems control several distinct computers while making them appear to be just one.
  • Network operating system: An operating system that manages data, users, groups, security, applications, and other networking operations is known as a network operating system. It runs on a server.  
  • Real-time operating system: This kind of operating system works with real-time systems, and it has a very short processing and response time window.  
  • Multiprocessing operating system: It uses several CPUs inside a single computer system to increase performance. A job can be split across the connected CPUs and carried out more swiftly.

According to the dictionary, an object is a thing, i.e., something that is real; oriented means directed toward or concerned with a particular kind of thing. Object-oriented programming is a programming paradigm.

Object-oriented programming is essentially a philosophy or approach for computer programming that organizes software architecture around data or objects rather than functions and logic.

A data field is referred to as an object if it has distinct characteristics and behaviour. In OOP, everything is modelled as self-sustaining objects.

Among developers, it is the most often used programming paradigm. Large, complicated, and actively updated or maintained programs work well with it.

A vehicle is an example of OOP in the real world. It fully exemplifies the effectiveness of object-oriented design.

The foundation of OOP consists of four pillars: abstraction, encapsulation, inheritance, and polymorphism.

Benefits of OOPS:

  • Reuse of code is possible.
  • Code may be readily added to or modified without impacting other code blocks.
  • Offers security through data concealing and encapsulation techniques.
  • Helpful for grouping huge projects into collaborative development projects.
  • Debugging is simple. 

Difference between procedural language and object-oriented language: 

  • Procedural language: A procedural language, commonly referred to as an imperative language, solves problems from the top down. It highlights the step-by-step approach to problem solving and deconstructs the answer into functions, procedures, or subroutines. 
  • Object-oriented language: An object-oriented language, on the other hand, is built on the idea of objects. An object is an instance of a class, which encapsulates data and behaviour. It makes program creation and maintenance easier because code can be added without affecting other code blocks.

This is a frequently asked question in Computer Science interviews. Classes and objects are the building blocks of OOP. 

  •  Class

A class is an object's blueprint or template. Member variables, constants, functions, and other members are defined in the class, which binds data and functions together as a single unit. A class itself does not occupy any memory at runtime; it is a type definition rather than a data structure.

  •  Objects

An object is a physical entity with characteristics, actions, and qualities. It is also called a class instance. It includes the variables and member functions that we have declared in the class, and it occupies space in memory. 

Keep in mind that while a class can exist without any objects, an object cannot exist without a class. 
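As a minimal sketch of the class/object distinction in Java (the `Car` class and its fields are illustrative, not from the article):

```java
// Blueprint: defines state and behaviour; occupies no memory until instantiated.
class Car {
    String model;   // member variable (state)
    int speed;

    Car(String model, int speed) {
        this.model = model;
        this.speed = speed;
    }

    String describe() {   // member function (behaviour)
        return model + " at " + speed + " km/h";
    }
}

public class ClassVsObjectDemo {
    public static void main(String[] args) {
        // Objects: instances of the class, each occupying its own memory.
        Car c1 = new Car("Sedan", 80);
        Car c2 = new Car("SUV", 60);
        System.out.println(c1.describe());
        System.out.println(c2.describe());
    }
}
```

Each `new Car(...)` call creates a distinct object from the same blueprint, which is why an object cannot exist without its class.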

  •  Computer Network

A computer network is a collection of linked computing systems that may communicate and share resources. These networked devices communicate data through wireless or physical technologies using a set of guidelines known as communications protocols.

  •  Internet

The internet is a massive, electronically connected network of computers and other devices.

  •  WWW

The World Wide Web is a collection of all the web pages and documents that can be accessed on the Internet through their URLs (Uniform Resource Locators).

  •  WWW vs Internet

The following are the differences between the internet and the world wide web: the world wide web is where all web pages and documents are kept, and each website needs a unique URL in order to be accessed. In contrast, the internet is the vast computer network over which the world wide web is accessed.

| World Wide Web | Internet |
| --- | --- |
| Software-oriented | Hardware-oriented |
| Uses HTTP | Uses IP addresses |

This is also one of the important general Computer Science networking questions asked in interviews. 

Let’s see what is data and information before actually jumping to their difference. 


A collection of unprocessed, unstructured facts and information is known as data. Examples include text, observations, figures, symbols, and object descriptions. 

In other words, data is meaningless on its own and does not serve any purpose. 

Additionally, data is expressed in terms of bits and bytes, which are the fundamental units of information used in computer processing and storage. Although data can be captured, it cannot be meaningful without processing it. 

There are two types of data: 

  • Quantitative [Numerical form, E.g., height, weight, length] 
  • Qualitative [Descriptive form, E.g., eye colour of a human] 


Information is structured and organised data. It gives data context, makes it meaningful to us, and enables decision-making. Information is obtained by analysing and interpreting data.

Now, let’s see what is the difference between data and information.

Data vs Information


| Data | Information |
| --- | --- |
| Data is a collection of facts | Information places those facts into perspective |
| Data is raw and disorganised | Information is structured |
| Each data point is individual and often unrelated | Information connects that data to give a broad overview of how everything fits together |
| Data by itself has no value | Meaningful information is created once data has been studied and understood |
| Data does not depend on information | Information depends on data |
| Data is most commonly represented as graphs, numbers, figures, or statistics | Information is normally communicated via words, language, thoughts, and concepts |

Attributes: An attribute in a database management system (DBMS) is a piece of information that describes an entity. 

Let's see various types of attributes: -
Types of Attributes:

  • Simple Attribute: An attribute that cannot be further subdivided is a simple attribute.

Example: In an employee database, ‘gender’ can be considered a simple attribute because it cannot be subdivided any further. 

  • Composite Attribute: An attribute that can be broken into smaller components is a composite attribute.

Example: In an employee database, ‘address’ can be considered a composite attribute because it can be subdivided into two or more sub-attributes, i.e., ‘house number’, ‘street name’, ‘city’, ‘state’, and ‘PIN code’. Another example is ‘name’, which can be subdivided into ‘First Name’, ‘Middle Name’, and ‘Last Name’. 

  • Single-valued Attribute: An attribute that has only a single value for a given entity.

Example: In an employee database, ‘Employee ID’ can be considered a single-valued attribute because every employee has exactly one unique ID. 

  • Multi-valued Attribute: An attribute that can have more than one value for a given entity.

Example: In an employee database, ‘phone number’ can be considered a multi-valued attribute because an employee can have more than one phone number. An upper or lower limit may be placed on the number of values. 

  • Derived Attribute: An attribute that is not stored in the database directly but is derived from other attributes through some calculation.

Example: In an employee database, ‘age’ can be considered a derived attribute because it can be determined from the employee's birthdate and the current date.


  • In the actual world, an entity is any "thing" or "object" that can be distinguished from other things.
  • In Computer Science, an entity is an object that consists of an identity, which does not depend on the changes of its attributes. 

Let’s see various types of entity: - 

Types of Entity

  • Strong Entity: It does not depend on any other entity and has a primary key. 

Example: In a university database, “Student” can be considered a strong entity because each student has a unique ID, so the entity can exist without depending on any other entity.

  • Weak Entity: It depends on another strong entity and cannot be individually recognised.

Example: In a university database, “Course Enrolment” can be considered a weak entity because it has no unique identifier without the corresponding “Student” and “Course” entities. 

In C++, a structure works almost exactly like a class, but there are some differences between them:

| Structure | Class |
| --- | --- |
| Members are public by default | Members are private by default |
| Declared using the keyword struct | Declared using the keyword class |
| Mainly used for data grouping | Mainly used for data abstraction and inheritance |

  1. Network Protocols: A set of guidelines that specify how information is transferred between devices linked to the same network is known as a network protocol. Essentially, it enables linked devices to interact despite differences in internal operations, structure, or design. Network protocols let you communicate with people all over the world and hence play a significant role in modern digital communication. 

Based on their scale, computer networks are classified into three kinds:

  • Local Area Network (LAN)
  • Metropolitan Area Network (MAN)
  • Wide area network (WAN)
  1. LAN: A local area network or LAN is a collection of computers that are linked together in a small area such as a school, hospital, or residence. Since there is no outside connection to the local area network, the data that is transferred is safe on the local area network and cannot be viewed from outside. Because of their compact size, LANs are quicker, with rates ranging from 10Mbps to 100Mbps.
  2. MAN: A metropolitan area network covers a larger area by connecting multiple LANs into a bigger network. In a MAN, various local area networks are connected through telephone lines. A MAN is larger than a LAN and smaller than a WAN (wide area network); it covers the area of a city or town. 
  3. WAN: Data may be sent across large distances via a wide area network. The WAN is larger than the LAN and MAN. A WAN can encompass an entire nation, continent, or even the entire planet. Mobile broadband connections such as 3G, 4G, 5G and so on are also examples of WAN. 

Let’s see first, what is a topology: - 

  1. Topology: In networking, topology is the arrangement of the nodes and links of a network, i.e., the layout in which the devices of the network are connected to each other.
  2. Bus topology: Every node, or piece of equipment on the network, is connected to a single main cable line in a bus topology. Data is sent from one location to another along a single path. Data cannot be sent in both directions. Linear Bus Topology is the name given to this topology when it has exactly two terminals. Small networks are where it is most frequently employed. 

The advantages of bus topology are as mentioned below:

  • It is affordable.
  • Compared to other topologies, the cable length needed is the shortest.
  • The network can be expanded quickly by joining additional nodes to the cable.

The following are the drawbacks of bus topology:

  • The entire network would be destroyed if the primary wire snapped.
  • When there are many nodes and a lot of network traffic, the network performance is compromised and suffers.
  • The cable can only be a certain length.

Ring topology: In a ring topology, every computer is connected to two other computers, one on each side. A ring-shaped network is created when the last computer is connected to the first, so each computer has exactly two neighbouring computers. Tokens are used to facilitate the exchange of data between devices: a computer station must possess the token in order to send data, and the token is released only once the transmission is finished, at which point other computer systems can use it to transfer data. 

Ring Topology is faster than Bus Topology. 

  • Swapping

Swapping is a method that temporarily moves a process out of main memory to secondary storage (disk), freeing up that memory for use by other processes. The system switches the process back from secondary storage to main memory at a later time. Removing memory-resident processes in this way reduces the degree of multiprogramming; the removed processes can later be brought back into memory and resume execution from where they left off. This mechanism is known as swapping. 

Swapping is required to improve the process mix. It is handled by the medium-term scheduler, which reduces the degree of multiprogramming. 

  • Context-Switching

It is a process of Switching from one process to another in the CPU.

  1. A state save of the active process and a state restoration of a separate process are both required in order to switch the CPU to another process.
  2. When a switch happens, the kernel saves the context of the old process in its Process Control Block (PCB) and loads the saved context of the newly scheduled process.
  3. It is pure overhead since the system is switching without performing any beneficial work.
  4. Depending on the memory speed, the quantity of registers that must be copied, and other factors, speed varies from machine to machine.


A database is a location where data is kept in a form that makes it simple to access, manage, and modify. 


A group of connected data and a set of software tools for accessing that data make up a database-management system (DBMS). The database, a collection of data, often contains information important to an organisation. A DBMS's main objective is to offer a simple and effective method for storing and retrieving database data. 

The database itself, along with all the software and functions, is a DBMS. It is used to carry out many actions, including data insertion, access, updating, and deletion. 

Now, let’s see the disadvantages of file systems that led to the creation of DBMSs. 

Disadvantages of File Systems

  • Data Redundancy and inconsistency 
  • Difficult to access data 
  • Data isolation 
  • Atomicity problems 
  • Integrity problems 
  • Security problems 

These are a few of the reasons why we should use a DBMS rather than a file system for storing and managing data. 

Let’s see first what is a process: - 

Process: A process is a computer program that is currently in execution.

  • Orphan Process:  

It is a still-running process whose parent process has terminated. The init process, i.e., the first process of the OS, adopts orphan processes.  

  • Zombie Process:  

A process that has finished its execution but still has an entry in the process table is considered a zombie process. Zombie processes occur for child processes whose exit status has not yet been read by the parent process. When the parent reads it using the wait() system call, the zombie is removed from the process table; this is referred to as reaping the zombie. The child remains a zombie from the moment it terminates until the parent calls wait(), which may be long after the child has exited.

Expect to come across this popular question in Computer Science job interviews. SQL (Structured Query Language) is a standard language used to manage relational databases. 

  • Keys: A field or group of columns in a database table that is used to specifically identify each entry in the table is known as a key in SQL. In SQL, there are various distinct keys: - 
    • Primary Key: It is unique and NOT NULL. A primary key is a unique identifier for each row in a table. A table can have only one primary key. 
    • Foreign Key: A field in a table that relates to the primary key of another table is known as a foreign key. The relationship between two tables is created using a foreign key. 
    • Candidate Key: A group of one or more fields or columns known as a candidate key can be used to uniquely identify a record in a table. A primary key can also be a candidate key. 
    • Super Key: A super key is a combination of one or more keys that may be used to uniquely identify a record in a database. 

The rest of the keys are alternate keys, composite keys, unique key etc. 

  • Joins: A join procedure in SQL is utilized to merge rows from two or more tables based on a shared column. In SQL, there are several types of joins: -
    • Left Join (or Left Outer Join): Returns every record from the left table and the records that are matched from the right table. 
    • Right Join (or Right Outer Join): Returns every record from the right table and the records that are matched from the left table. 
    • Inner Join: Returns entries with values that are the same across both tables.
    • Full Outer Join: Returns every record when there is a match in either the left or the right table.
  • Commands: The SQL commands are mainly categorized into five categories:
    • DDL: Data Definition Language 
    • DQL: Data Query Language 
    • DML: Data Manipulation Language 
    • DCL: Data Control Language 
    •  TCL: Transaction Control Language 
  • View: In SQL, it is a virtual table that is based on the result of a SELECT statement.

There are several types of views in SQL: 

  • Simple View 
  • Materialized View 
  • Indexed View 
  • Partitioned View 
  • Inline View 

A common question in Computer Science interview questions and answers for freshers, don't miss this one. The static keyword is mostly utilized for memory management. The static keyword can be used with variables, methods, blocks, and nested classes. A static member belongs to the class itself rather than to an instance of the class.

The static can be:

  1. Class Variable
  2. Class Method
  3. Block
  4. Nested class

Static Variable:
A variable declared with the static keyword is a static variable.
A static variable can be used for an attribute that is shared by all objects rather than specific to each object, such as the company name for employees or the university name for students. The static variable receives only one memory allocation, in the class area, when the class is loaded.

  • It increases the memory efficiency of your software.

Uses of Static Variable

Maintain count: Static variables are commonly used to count how many times a function has been called. For example, if you want to know how many times a function is called, you can declare a static variable in that function and increment its value on each call. 

Data sharing between function calls: Data may be shared across function calls by using static variables. The value of a static variable set in one function call can also be utilised in subsequent function calls since static variables retain their values. 

Global variables: Static variables defined outside of a function have file scope, which implies that any function in the file may access them. This may be used to implement global variables with file-specific restrictions. 
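The "maintain count" use above can be sketched in Java (class and method names are illustrative):

```java
// Sketch of the "maintain count" use of a static variable: callCount belongs
// to the class, not to any object, so every invocation updates the same copy.
public class CounterDemo {
    static int callCount = 0;   // one memory allocation, shared by all callers

    static void service() {
        callCount++;            // retains its value across calls
    }

    public static void main(String[] args) {
        service();
        service();
        service();
        System.out.println("service() was called " + callCount + " times");
    }
}
```

Because the counter survives between calls, it also demonstrates data sharing between function calls, the second use listed above.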

Static method:
Any method that uses the static keyword is referred to as a static method.

  • A static method is an attribute of the class, not its object.
  • Invoking a static method is possible without first establishing a class instance.
  • A static method has access to and control over static data members' values.


  • Accessibility: Static methods are available to the class itself as well as to other classes since they may be invoked without generating an instance of the class. 
  • Memory Management: Static methods do not use memory for storing instance variables since they are not associated to a particular instance of the class. 
  • Encapsulation: Rather than having to be stored in each instance, static methods may be used to offer a degree of encapsulation for data that is shared by all instances of the class. 

Uses of Static Method

  • Helper method: Static methods can be used as helper methods that assist other methods in a class. For example, a static method can be used to validate input parameters or perform type conversions, i.e., convert a value from one type to another, such as an int to a String. 
  • Global Method: Static methods can be used as global methods that can be accessed everywhere in the program. This is useful for implementing common functions that are used across multiple classes or modules. 
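A minimal sketch of the helper-method use (the method name and fallback behaviour are illustrative assumptions):

```java
public class StaticHelperDemo {
    // Static helper: validates its input and performs a type conversion
    // (String to int) without needing any instance of the class.
    static int parseOrDefault(String s, int fallback) {
        if (s == null || s.isEmpty()) {
            return fallback;               // input validation
        }
        try {
            return Integer.parseInt(s);    // type conversion
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        // Invoked on the class itself, with no instance created first.
        System.out.println(StaticHelperDemo.parseOrDefault("42", 0));
        System.out.println(StaticHelperDemo.parseOrDefault("oops", -1));
    }
}
```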


Encapsulation is the technique of combining code and data into a single entity, such as a capsule containing a variety of medications. By making every data member of the class private, we may construct a class that is completely enclosed. We can now set and retrieve data from it using getter and setter functions. 

Data encapsulation differs from data hiding in that it concentrates on wrapping (or encapsulating) the complex data to give the user a more user-friendly view. Data hiding concentrates on restricting data use in a programme to ensure data security.

Benefits of encapsulation include:

  • It gives you control over the information. 
  • It is simple to test. 
  • It gives concealing of data. 
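These benefits can be sketched with a fully encapsulated Java class (the `Account` class and its validation rule are illustrative assumptions):

```java
// Fully encapsulated class: the data member is private, so the only way in
// or out is through the getter and setter, which can control the data.
class Account {
    private double balance;   // data hiding: inaccessible from outside

    public double getBalance() {
        return balance;
    }

    public void setBalance(double balance) {
        if (balance >= 0) {          // control over the information
            this.balance = balance;
        }
    }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Account a = new Account();
        a.setBalance(100.0);
        a.setBalance(-50.0);         // rejected by the setter
        System.out.println(a.getBalance());
    }
}
```

Because all access flows through two small methods, the class is also simple to test, the second benefit listed above.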

The capability of a class to acquire characteristics and properties from another class is referred to as inheritance. It is an important component of OOPs (Object Oriented programming system). 

Let’s see various types of inheritance: - 

Types of Inheritance:

  • Single Inheritance: A single inheritance occurs when one class inherits another class. In the example presented below, there is just one inheritance because the Cat class derives from the Animal class.
  • Multilevel Inheritance: Multilevel inheritance refers to a series of inheritances.
  • Example: The BabyCat class inherits from the Cat class, which in turn inherits from the Animal class.
  • Hierarchical Inheritance: It is the process through which two or more classes inherit properties from a single class.
  • Example: Dog and Cat classes inherit from the Animal class.
  • Multiple Inheritance: Multiple inheritance is the process through which one class inherits members of several other classes.
  • Hybrid Inheritance: It is an inheritance which consists of two or more different types of inheritance.

Note: Java does not support multiple or hybrid inheritance through classes; it supports them only through interfaces.
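The Animal/Cat/Dog/BabyCat examples above can be sketched in Java (the method names are illustrative):

```java
// Hierarchy from the examples above: Cat and Dog both inherit from Animal
// (hierarchical), and BabyCat inherits from Cat (multilevel).
class Animal {
    String eat() { return "eating"; }
}

class Cat extends Animal {          // single inheritance: Cat derives from Animal
    String meow() { return "meow"; }
}

class Dog extends Animal { }        // hierarchical: a second child of Animal

class BabyCat extends Cat { }       // multilevel: Animal -> Cat -> BabyCat

public class InheritanceDemo {
    public static void main(String[] args) {
        BabyCat kitten = new BabyCat();
        // Reusability: BabyCat defines nothing of its own, yet it has
        // inherited methods from both Cat and Animal.
        System.out.println(kitten.eat() + " / " + kitten.meow());
    }
}
```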

Use of inheritance:

  • Reusability: By deriving new classes from existing ones, inheritance enables you to reuse existing code. The amount of code that has to be developed and maintained could be lessened as a result. 
  • Code Organization: Code may be organised via inheritance into logical hierarchies, which makes it simpler to grasp and maintain. This can aid in bringing down the code's complexity and improving its long-term maintainability. 

Let’s see various terms that are primarily used in inheritance. 

Terms used in Inheritance: 

  • Class: A class is a collection of items with similar characteristics. It is a template or blueprint from which objects are created.
  • Sub Class/Child Class: A class that inherits from another class is referred to as a subclass. A derived class, extended class, or child class are other names for it.
  • Super Class/Parent Class: The class from which a subclass gets its characteristics is referred to as the superclass or parent class. It is sometimes referred to as a parent class or base class. 


The idea of polymorphism allows us to carry out a single operation in several ways.

Greek terms poly and morphs are the roots of the word polymorphism. Poly means numerous, and morphs implies forms.

Polymorphism entails diversity of forms.


A person can play many roles and carry out multiple duties at once like he/she can be a writer and a software developer at the same time. So, the same person behaves differently depending on the situation. This is polymorphism.


Compile-time polymorphism and runtime polymorphism are the two forms of polymorphism.

By using method overloading (Compile-time polymorphism) and method overriding (runtime polymorphism), we may implement polymorphism.

Compile-time polymorphism is also known as static polymorphism and runtime polymorphism is also known as dynamic polymorphism.

  • Compile-time Polymorphism:

Static polymorphism is another name for it. In order to create this form of polymorphism, either operators or functions must be overloaded. It is also known as early binding.

Method Overloading: When several functions share the same name but have different parameters, they are said to be overloaded. A function may be overloaded by changing the number of arguments and/or the type of the arguments.

Note: Operator Overloading is not supported by Java but supported by C++.

  • Runtime Polymorphism:

It is also known as late binding or dynamic polymorphism, and it is achieved by method overriding. It is also called dynamic method dispatch.

Dynamic Method Dispatch: The process by which a call to an overridden method is resolved at run time is known as dynamic method dispatch. 

Method Overriding: Method overriding happens when a derived class provides its own definition of one of the base class's member functions; the base function is said to be overridden. A call to the overridden method is resolved at runtime, which is how this kind of polymorphism is accomplished.
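Dynamic method dispatch can be sketched in Java (the `Shape`/`Circle` classes are illustrative, not from the article):

```java
class Shape {
    String draw() { return "drawing a shape"; }
}

class Circle extends Shape {
    @Override
    String draw() { return "drawing a circle"; }   // overrides the base method
}

public class OverrideDemo {
    public static void main(String[] args) {
        Shape s = new Circle();   // reference type Shape, object type Circle
        // Dynamic method dispatch: the call is resolved at run time against
        // the actual object, so Circle's version of draw() executes.
        System.out.println(s.draw());
    }
}
```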

Modifiers can be either access modifiers or non-access modifiers.

By using the access modifier, we may modify the access level of fields, constructors, methods, and classes.

Java access modifiers come in four different varieties:

  • Private: A private modifier has class-specific access levels. It is inaccessible to those outside the class.
  • Default: A default modifier's access level is limited to the package. It is inaccessible from the outside of the package. If you do not choose an access level, the default will be used.
  • Protected: A protected modifier is accessible within the package, and also outside the package through a child class. If no child class is created, it cannot be accessed from outside the package.
  • Public: A public modifier's access level can be found anywhere. It is accessible from both within and outside of the class as well as from both inside and outside of the package.

C++ does not support a package-level default modifier; it has only public, private, and protected. 

Non-access modifiers come in a variety of forms, including static, abstract, synchronised, native, volatile, transient, etc.

Getter and Setter methods are used to secure your code and safeguard your data.  

  • A getter returns a field's value, while a setter sets a field's value. 
  • Getter names begin with the word ‘get’ followed by the variable name, and setter names begin with the word ‘set’ followed by the variable name. 
  • In both getter and setter names, the first letter of the variable name should be capitalised. 
  • Getters and setters give programmers a convenient way to get and set values of any data type, such as String, int, float, or double. 

Horizontal partitioning, or "sharding," is a sort of database partitioning.

Sharding is a database design technique that breaks a big database into smaller, more manageable pieces called shards, in order to scale the database horizontally. 

Instead of having all the data kept on a single, centralised server, the goal behind sharding is to distribute the workload of a database among several servers. 

When to Shard a table? 

Your database's performance can be greatly enhanced by slicing up your data. Here are some examples of how sharding might enhance performance: 

  • Reduced index size - Because the tables are split up and scattered over several servers, each shard has fewer records in each table. This reduces the size of the index, which generally improves search performance. 
  • Spread a database over several computers - A database shard can be installed on a different piece of hardware, and numerous shards can be set up on different computers. This makes it possible to spread the database across a lot of computers, considerably enhancing speed. 
  • Segment data by region - In addition, if the database shard is based on some real-world segmentation of the data (for example, European customers vs. American customers), it could be able to deduce the right shard membership quickly and automatically and only query the relevant shard. 
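One common way to decide which shard holds a record is hash-based routing; a minimal sketch (the method name and shard count are hypothetical):

```java
public class ShardRouter {
    // Hash-based routing: the shard for a key is derived from its hash,
    // so every server independently agrees on where a record lives.
    static int shardFor(String key, int shardCount) {
        // floorMod keeps the result in 0..shardCount-1 even for negative hashes
        return Math.floorMod(key.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        int shards = 4;
        for (String user : new String[] {"alice", "bob", "carol"}) {
            System.out.println(user + " -> shard " + shardFor(user, shards));
        }
    }
}
```

Note that with this simple modulo scheme, changing the shard count remaps almost every key; real systems often use consistent hashing or range-based sharding to avoid that.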

When not to shard a table? 

Sharding should only be utilised when all other optimization methods are insufficient. The following possible issues are brought on by the complexity that database sharding introduces: 

  • Single point of failure: If one shard fails due to network, hardware, or system issues, the data on that shard becomes unavailable and the table as a whole is effectively broken. 
  • Backup Complexity: Each shard's database backups must be synchronised with those of the other shards. 
  • Operational Complexity: It is harder to add or remove indexes, columns, or change the schema. 

Indexing helps to improve database speed by reducing the number of disk accesses required during query processing. An index is a kind of data structure employed to rapidly find and retrieve the data in a database table. An index has two columns. The first, the search key, contains a copy of the table's primary key or candidate key; these values are kept in sorted order so that the corresponding data is simple to obtain. The second column contains data references: a group of pointers that store the address of the disk block where the value of the corresponding key can be found. 

Various types of Indexing Methods are:

  • Primary Index  
  • Secondary Index 
  • Clustering Index  

Primary Index:

Primary indexing is the process of building an index based on the table's primary key. These primary keys have a 1:1 relationship between the records and are particular to each record. The performance of the searching operation is highly effective since primary keys are kept in sorted order.

There are two categories for the primary index: Dense and Sparse index. 

 Dense Index

  • To make searches faster, every search key value in the data file has an index record in the dense index.
  • The number of records in the index table and the main table are equal in this case. 

Sparse Index

  • Index records are created for only some of the search key values in the data file. 
  • Here, the index points into the main table at intervals (gaps) rather than pointing to every individual record. 
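The dense/sparse distinction can be sketched in a few lines of Python (the block size and records below are illustrative). A dense index holds one entry per key; a sparse index holds one entry per block and scans within the block:

```python
import bisect

# Data file: records sorted by search key, grouped into blocks (illustrative).
blocks = [[(1, "a"), (2, "b"), (3, "c")],
          [(4, "d"), (5, "e"), (6, "f")],
          [(7, "g"), (8, "h"), (9, "i")]]

# Dense index: one index record per search key value.
dense = {key: (b, i)
         for b, block in enumerate(blocks)
         for i, (key, _) in enumerate(block)}

# Sparse index: one index record per block (the first key of each block).
sparse_keys = [block[0][0] for block in blocks]   # [1, 4, 7]

def sparse_lookup(key):
    # Find the last block whose first key is <= key, then scan that block.
    b = bisect.bisect_right(sparse_keys, key) - 1
    for k, value in blocks[b]:
        if k == key:
            return value
    return None   # key not present
```

Note the trade-off: the dense index has nine entries (one per record), while the sparse index has only three (one per block) at the cost of a short scan inside the block.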

Secondary Index:

When using sparse indexing, the size of the mapping also increases as the size of the table does. These mappings are often retained in the main memory to speed up address fetches. Using the address obtained via mapping, the secondary memory then searches the real data. The process of retrieving the address itself becomes slower as the mapping size increases. The sparse index will not function effectively in this situation. Secondary indexing is used to address this issue. 

Clustering Index:

An ordered data file can be used to construct a clustered index. On non-primary key columns, which might not be distinct for each record, the index is occasionally built. In this instance, we will aggregate two or more columns to obtain the unique value and make an index out of them in order to identify the record more quickly. This technique is known as a clustering index. 

When a large problem is broken down into multiple smaller sub-problems, it can be addressed more quickly; partitioning applies the same idea to databases. It splits a large database, with its data, metrics, and indexes, into smaller, more manageable slices known as partitions. Once the database has been partitioned, the data definition language can handle the smaller slices individually rather than the entire enormous database. Partitioning thus makes huge database tables comparatively easier to manage.

When to partition a table is shown by the following examples:

  • Tables larger than 2 GB should always be taken into consideration as partitioning candidates. 
  • Historical data tables, where fresh data is appended to the most recent partition. A common illustration is a historical table where only the data for the current month is editable and the prior 11 months are read-only. 

Let’s see various types of partition techniques that are primarily used: - 

Partitioning Techniques:

  • Single-level Partitioning
    • Hash Partitioning
    • Range Partitioning
    • List Partitioning
  • Composite Partitioning
    • Composite Range–Range Partitioning
    • Composite Range–Hash Partitioning
    • Composite Range–List Partitioning
    • Composite List–Range Partitioning
    • Composite List–Hash Partitioning
    • Composite List–List Partitioning
  • Hash Partitioning:

Oracle applies a hash function to the partitioning key to decide which partition each row belongs to. This method splits the rows evenly into several partitions, ensuring that each partition is roughly the same size. Breaking up database tables into smaller chunks with this hash method is called hash partitioning, and it is the ideal technique for distributing data uniformly across several devices. 

  • Range Partitioning:

Range partitioning separates the data into a number of partitions based on ranges of values of the partitioning key. It is a popular strategy that frequently involves dates: for instance, a partition named "JULY" would hold the rows whose dates range from July 1 to July 31. Each partition's VALUES LESS THAN clause sets its upper bound, so all values below that bound fall into this partition or an earlier one, and all values above it fall into a later partition. The highest range partition is defined using the MAXVALUE clause. 

  • List Partitioning:

A form of database partitioning called list partitioning divides data based on a list of values. List partitioning involves splitting a table into several partitions, each of which holds data depending on a particular set of values. 
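The three single-level techniques can be sketched side by side in Python. The partition count, range boundaries, and value lists below are illustrative only:

```python
import bisect

def hash_partition(key: int, n: int = 4) -> int:
    """Hash partitioning: spread rows evenly across n partitions."""
    return key % n   # a simple, stable hash for integer keys

# Range partitioning: upper bounds in the spirit of VALUES LESS THAN.
# Partition 0: day < 11, partition 1: day < 21, partition 2: day < 32.
bounds = [11, 21, 32]

def range_partition(day: int) -> int:
    """Pick the first partition whose upper bound exceeds the value."""
    return bisect.bisect_right(bounds, day)

# List partitioning: route rows by a discrete list of values per partition.
value_lists = {"EU": ["DE", "FR"], "NA": ["US", "CA"]}

def list_partition(country: str) -> str:
    for name, values in value_lists.items():
        if country in values:
            return name
    raise KeyError(country)
```

The composite techniques described next simply apply one of these routing functions first and then a second one within each resulting partition.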

  1. Composite Range-Range Partitioning: In this composite partitioning, the same range partitioning mechanism is used to create both the partition and the sub partition. 
  2. Composite Range–Hash Partitioning: This combines the range and hash partitioning techniques. Range partitioning is initially used to split the data table, and the resulting subdivisions are then further separated using a hash partitioning methodology. It combines the advantages of the two techniques, namely the range method's capacity for control and the hash method's capacity for information placement and striping.
  3. Composite Range–List Partitioning: Composite range-list partitioning divides information using the range approach, and then further divides each division using the list method.
  4. Composite List–Range Partitioning: The data is initially divided using the list partitioning strategy in this composite division. The range partition mode is used to split each of the specified partitions once the data has been organised into the different partitions in the list.
  5. Composite List–Hash Partitioning: This enables data that has previously been list-partitioned on a list to be hash sub-partitioned. Here, the hash partition procedure comes after the list partition.
  6. Composite List–List Partitioning: Partitioning and sub-partitioning using the List partitioning method are both included in this kind of composite partitioning system. The initial, enormous table is partitioned using the list approach, and the results obtained are again cut into smaller slices of data using the same list method.

Normalization is a technique for structuring the data in a database that enables you to prevent anomalous insertion, update, and deletion of data as well as data redundancy. It is a procedure that involves examining relational schemas in light of their functional dependencies and primary keys. 

Normalisation has been part of relational database theory from the outset. An un-normalised design can leave the database with duplicate copies of the same data; normalisation removes this redundancy, typically by decomposing a table into new, smaller tables. 

Three forms of data modification abnormalities can be distinguished: 

  1. Insertion Anomaly: When a new tuple cannot be inserted into a relationship because there is insufficient data, this is referred to as an insertion anomaly. 
  2. Deletion Anomaly: The term "deletion anomaly" describes a circumstance in which some vital data is unintentionally lost when some other data is deleted. 
  3. Modification/updation Anomaly: The update anomaly occurs when changing a single data value necessitates changing numerous rows of data. 

Types of Normal Forms

  • 1NF: A relation is in 1NF if every attribute holds only atomic values. 
  • 2NF: A relation is in 2NF if it is in 1NF and every non-key attribute is fully dependent on the primary key. 
  • 3NF: A relation is in 3NF if it is in 2NF and has no transitive dependency. 
  • BCNF: Boyce-Codd Normal Form (BCNF) is a stricter version of 3NF in which, for every dependency A -> B, A is a super key. 

Advantages of Normalization

  • Data redundancy is reduced with the use of normalisation. 
  • It improves the database structure. 
  • It is a database design that is far more versatile. 
  • It retains relational integrity as a paradigm. 
  • It maintains data consistency within the database. 

Disadvantages of Normalization

  • You must first understand the user's needs before you can begin designing the database. 
  • Performance suffers when relations are normalised to higher normal forms, such as 4NF and 5NF. 
  • Normalising relations to the higher normal forms is a time-consuming and challenging process. 
  • Careless decomposition might result in a poor database design, which would cause major issues. 

Transaction: The smallest processing unit that cannot be further split is a single task. A set of tasks might be referred to as a transaction. 

Let’s see various types of transaction states: -

Transaction States:  

  • Active: The transaction is being carried out in this state. Every transaction starts out in this state. 
  • Partially Committed: A transaction is said to be in a partly committed state when it completes its final activity. 
  • Failed: If any of the tests performed by the database recovery system fail, the transaction is said to be in a failed state. Failure to complete a transaction prevents further action. 
  • Aborted: If any of the checks fails and the transaction reaches a failed state, the recovery manager rolls back all of the database's write operations, returning the database to the condition it was in before the transaction was executed. Transactions in this condition are said to be aborted. Following a transaction abort, the database recovery module can choose between two actions: restart the transaction, or kill the transaction. 
  • Committed: A transaction is deemed committed if all of its actions are completed successfully. The database system has now permanently recorded all of its effects. 

Process States: The different stages that a process may be in while being run on a computer system are referred to as "process states". Each of the following states is possible for a process.

  • New: In this process state, the OS is ready to choose the programme and turn it into a process.
  • Run: In this process state, CPU is allocated, and instructions are being carried out.
  • Waiting: In this process state, process is waiting for input/output.
  • Ready: In this process state, the process is already running in memory and is awaiting assignment to a processor.
  • Terminated: In this process state, the process's execution is complete. 

Process Queues: These queues hold processes according to the state they are currently in.

Job Queue:

  • In this queue, processes are in the new state.
  • It is present in secondary memory like hard disk.
  • In this, Long-term scheduling (LTS), also known as job scheduling, selects processes from a pool and loads them into memory so they may be executed.

Ready Queue:  

  • In this queue, processes are in the ready state.
  • It is present in main memory like RAM.
  • In this, the short-term scheduler selects a process from the ready queue and dispatches it to the CPU.

Waiting Queue:  

  • In this queue, processes are in the wait state.

A constructor is called when an instance of a class is created as an object, and it is used to initialize that object. It is called every time an object is created using the keyword new. If we do not define any constructor in a class, a default constructor is provided.

There are three types of constructors.

  • Parameterized constructor: It has a finite number of parameters.
  • Default Constructor: It does not have any parameter.
  • Copy constructor: It takes an object and copies it into another object.
    • Unlike C++, Java has no copy constructor.
    • Java provides clone() in its place.
    • clone() returns the current class instance.

Some important terminology related to constructors are: 

  • this: this is a reference to the current object.
  • dot (.) operator: It is used to access a member of a class or package.
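The three constructor flavours can be sketched in Python, which expresses them through `__init__`, default arguments, and copying rather than overloading. The class and method names below are illustrative:

```python
import copy

class Point:
    # Parameterized constructor: takes a finite number of parameters.
    # Because the parameters have defaults, Point() with no arguments
    # also works, covering the "default constructor" case.
    def __init__(self, x=0, y=0):
        self.x = x          # `self` plays the role of `this`
        self.y = y

    def clone(self):
        # Copy-constructor analogue: build a new object from an existing
        # one (similar in spirit to Java's clone()).
        return copy.copy(self)

p = Point(3, 4)   # parameterized
q = Point()       # default
r = p.clone()     # copy
```

Note that `r` holds the same values as `p` but is a distinct object in memory.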

A destructor is just the opposite of a constructor. When we construct a class object, it takes up some memory space (heap). If we do not get rid of such objects, they linger in memory and occupy space that is no longer needed by the program. The destructor fixes this issue: it is used to delete or destroy an object, releasing the resources the object was using until then.

Advantage of Destructor:

  1. It releases the resources that the object has been using.
  2. It is automatically invoked at the end of program execution. No explicit call is necessary.
  3. It cannot be overloaded and does not take any parameters.
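Python's closest analogue is the `__del__` method; this sketch relies on CPython's reference counting, which runs `__del__` as soon as the last reference disappears (other Python implementations may defer it):

```python
events = []

class Resource:
    def __init__(self):
        events.append("acquired")     # constructor: take the resource

    def __del__(self):
        # Destructor analogue: runs when the object is garbage-collected.
        # In CPython this happens as soon as the refcount hits zero.
        events.append("released")

r = Resource()
del r   # last reference gone -> __del__ runs automatically (CPython)
```

No explicit call to `__del__` is made anywhere; dropping the last reference is enough, mirroring the "automatically invoked" property above.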


  • Shallow Copy:

In a nutshell, a shallow copy is a new object whose fields still point to the memory of the original object's contents. As a result, modifications made to the original object's contents are also mirrored in the shallow duplicate. Shallow copying can be handy when you wish to replicate an object but do not need a fully independent duplicate of it. It is also called "shallow cloning."  

  • Deep Copy:  

On the other hand, a deep copy is a totally independent copy of an item. Changes to the original object have no impact on the deep copy since it produces a new object with a completely different memory location. When you need to change the original item without changing the duplicate, this is helpful. 

In Python, shallow and deep copies are made with the copy module: the copy.copy() function produces a shallow copy, and the copy.deepcopy() function produces a deep copy. 
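A short demonstration of the difference, using nested lists as the object being copied:

```python
import copy

original = [[1, 2], [3, 4]]

shallow = copy.copy(original)      # new outer list, shared inner lists
deep = copy.deepcopy(original)     # fully independent copy

original[0].append(99)

# The shallow copy sees the change, because it shares the inner lists
# with the original; the deep copy is unaffected.
```

After the mutation, `shallow[0]` is `[1, 2, 99]` while `deep[0]` is still `[1, 2]`.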

A transaction is a very tiny programming unit that can include a number of low-level operations. Atomicity, Consistency, Isolation, and Durability, or ACID qualities, are requirements for a transaction in a database system to maintain correctness, completeness, and data integrity.  

  1. Atomicity: According to this criterion, a transaction must be treated as an atomic unit, meaning either all of its operations are carried out or none of them are. A transaction cannot be left in a partially finished state in a database: the database must be in a well-defined state either before the transaction executes or after it has completed, aborted, or failed.
  2. Consistency: Any transaction must leave the database in a consistent state. No transaction should have a negative impact on the database's data. The database must continue to be consistent after a transaction has been completed if it was in a consistent state prior to the transaction's execution.  
  3. Isolation: The property of isolation specifies that all transactions will be carried out and processed as if they are the only transactions in the system in a database system when several transactions are being conducted concurrently and in parallel. No transaction will have an impact on another transaction's ability to take place.  
  4. Durability: The database must be strong enough to withstand system failures and restarts while still holding all of its most recent updates. The database will keep the updated data if a transaction modifies some data in a database and commits. The data will be updated once the system restarts if a transaction commits but the system crashes before the data could be committed to the disc.
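Atomicity can be illustrated with a toy in-memory transfer that either fully commits or rolls back. This is only a sketch of the idea, not a real database engine; the account names and balances are illustrative:

```python
accounts = {"A": 100, "B": 50}

def transfer(src, dst, amount):
    """All-or-nothing transfer: on any failure, roll back to the snapshot."""
    snapshot = dict(accounts)          # state before the transaction
    try:
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount
        accounts[dst] += amount        # both updates done: commit
    except Exception:
        accounts.clear()
        accounts.update(snapshot)      # rollback: atomicity preserved
        raise
```

A successful call leaves both balances updated; a failing call leaves the dictionary exactly as it was before, never with money subtracted from one account but not added to the other.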
  • Process:

A process is a computer program that is currently in execution. A process might produce new processes, referred to as "child processes." A process is isolated, meaning it does not share memory with any other process, which makes processes more expensive to create and switch between.  

The following statuses are possible for the process: new, ready, running, waiting, terminated, and suspended.  

  • Thread:  

A process can have several threads, and these multiple threads are confined within a process since a thread is a segment of a process.  

Three states exist for a thread: running, ready, and blocked.  

Threads can be created and terminated faster than processes, but they are not isolated from one another the way processes are.  


Thread vs Process:  

  • Threads of a process share the same memory; each process has its own memory.  
  • Thread creation takes less time; process creation takes more time.  
  • Data is shared between threads; no data is shared between processes.  


Multitasking vs Multithreading:

  • Multitasking is the technique of carrying out many tasks at once. In multithreading, a process is broken down into several distinct threaded subtasks, each of which has its own execution path.
  • In multitasking, more than one process is context-switched; in multithreading, more than one thread of the same process is context-switched.
  • Multitasking provides memory protection and isolation: the OS must allot separate memory and resources to each program the CPU runs. Multithreading has no isolation or memory protection: a process receives memory from the OS, and all threads inside that process share the memory and resources allotted to it.
  • In multitasking, the number of central processing units is one; in multithreading, it can be more than one.

Thread Context Switching vs Process Context Switching:

  • Thread context switching is comparatively faster and cheaper; process context switching is slower and costlier.
  • In a thread switch, the OS saves the current thread's state and shifts to another thread of the same process; in a process switch, the OS saves the current process's state and restores the state of another process.
  • A thread switch preserves the CPU's cache state; a process switch does not preserve it but flushes it.
  • A thread switch excludes switching the memory address space; a process switch includes it.
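The "threads share memory" point can be demonstrated directly with Python's threading module: four threads update one shared counter, guarded by a lock to avoid a race condition. The counts chosen are illustrative:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(1000):
        with lock:          # threads share `counter`, so guard the update
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four threads incremented the same shared variable 1000 times each,
# so counter ends at 4000.
```

Separate processes would each see their own private copy of `counter`; only threads, which live inside one process, can update the same variable like this.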

Let us see first what is a kernel: - 

Kernel: The kernel is the core of an operating system (OS). It serves as a link between a computer's hardware and its software programmes, and it manages the system's resources, including the CPU, memory, and input/output devices.

Now, let’s see various types of kernels in operating system. 

  • Monolithic kernel
    • The kernel itself contains all functions.
    • It is large and bulky.
    • A lot of memory is needed to operate.
    • Less reliable: if one module fails, the entire kernel goes down.
    • High performance due to quick communication.

Example: Linux, Unix, and MS-DOS.

  • Micro Kernel
    • The kernel contains only the core functions, such as memory management and process management.
    • File management and I/O management run in user space.
    • It is smaller in size.
    • It is more secure. 
    • Performance is comparatively slow.
    • Inefficient transition between kernel mode and user mode.

Example: L4 Linux, Symbian OS, MINIX, etc.

  • Hybrid Kernel
    • In this, file management runs in user space and the rest runs in kernel space.
    • It has the speed and design of a monolithic kernel.
    • It has the modularity and stability of a microkernel.
    • IPC occurs as well, although with lower overheads. 

Example: MacOS, Windows NT/7/10

Apart from these, there are a few more types of kernels that are comparatively less used in operating systems.

  • Nano Kernel
  • Exo Kernel

Thrashing occurs when a computer system has insufficient memory and spends a lot of time switching data between the disk and memory rather than performing tasks.  

The process will immediately page-fault if it does not have the necessary number of frames to support the active pages. It must now swap out a page at this point. It must, however, replace a page that will be immediately required again because all of its pages are currently being used. As a result, it faults frequently, replacing pages that it needs to bring back in right away. Thrashing is the name for this high-paging action.  

In computer science, thrashing is the term used to describe the poor performance of a virtual memory (or paging) system caused by the repetitive loading of the same pages when there is not enough main memory to hold them.

The Causes of Thrashing are as follows:  

  • High level of multiprogramming. 
  • Lack of frames.  

Recovery from thrashing:   

  1. Use Don't Allow Bit: The "Don't Allow Bit" can be used to restrict how much memory each process or application is allowed to consume in order to avoid thrashing. The "Don't Allow Bit" can be set to prevent additional memory allocation when a process or application reaches its memory limit and forces the process to release any unused memory. 
  2. Adding more RAM: The amount of thrashing can be decreased by increasing the system's random access memory (RAM), which gives it more memory to work with. 
  3. Increase the size of the swap space: Increasing the swap area's size can help the system experience less thrashing if it currently has a low swap space. 
  4. Reduce the number of processes: Reduce the number of programmes operating at once or shut down any unnecessary processes to lessen thrashing. 
  5. Upgrade the hardware: The amount of thrashing can be decreased by installing a solid-state drive (SSD) or upgrading to a faster hard disc. 

Banker’s algorithm is used to avoid deadlock. 

The term was chosen because the algorithm could be implemented in a banking system to make sure that the bank never distributes its funds in a way that prevents it from meeting the needs of every customer. The algorithm requires that the maximum number of instances of each resource type be known in advance.  

Using this maximum, the algorithm tests whether a requested allocation can be granted safely, and it checks all potential future activities before deciding whether or not to continue allocating.  

For instance, there are G dollars total in the accounts of X number of account holders at a particular bank.  

The bank processes a car loan using software that subtracts the loan amount from the total money the bank possesses (G + fixed deposits + monthly income schemes + gold, etc.).  

Additionally, it verifies whether the remaining amount still covers G. The car loan is processed only if the bank would retain enough cash even if every account holder withdrew their funds at once.  
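The safety check at the heart of the Banker's algorithm can be written compactly. The matrices below follow the usual textbook formulation (available resources, each process's maximum need, and its current allocation); the example values in the test are illustrative:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True if some completion order exists in which
    every process can obtain its maximum need and finish."""
    n = len(max_need)                               # number of processes
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    work = list(available)                          # resources free right now
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Process i can run to completion, then release everything
                # it currently holds back into the pool.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)    # safe only if every process finished
```

A request is granted only if the state that would result still passes this check; otherwise the process must wait, which is how deadlock is avoided rather than merely detected.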

A friend class is a class that can access the private and protected members of the class in which it has been declared as a friend. Sometimes it is advantageous to grant a particular class access to another class's private members.  

A friend class can access both private and protected members of the class in which it has been declared as a friend.  

Friend Functions:  

Friend Functions, like friend class, can also access private and protected members.  

In C++, a friend function is a special function that, although not belonging to a class, has the ability to access its private and protected data.  

  • A friend function is a non-member function of a class, which is declared by the keyword "friend" so that it can access all the members of that class. 
  • The friend function's function declaration, not the definition, is the only place the term "friend" is used.  
  • Neither the object's name nor the dot operator is used when the friend function is invoked. The object whose value it wants to access may instead be passed as an argument.  
  • Any portion of the class, such as the public, private, or protected, can be used to declare a friend function.  

The following are some crucial details about friend functions and classes:  

  • Friends should only be utilised in specific circumstances. The utility of encapsulating different classes in object-oriented programming is diminished if too many functions or external classes are specified as friends of a class with protected or private data.  
  • Friendships are not reciprocal. If class A is a friend of class B, class B does not automatically become a friend of class A.  
  • Friendship is not inherited.  
  • The concept of friend classes is not supported by Java.  

Merits of using Friend Class/Function:  

  • Friend functions can access members without having to inherit the class.  
  • A friend function connects two classes by having access to their private data.  
  • It can be declared in the class's protected, private, or public part.  

Demerits of using Friend Class/Function:  

  • Friend functions can access a class's private members from outside the class, which goes against the principle of data hiding.  
  • Friend functions cannot take part in run-time polymorphism, since they are not member functions.  

Abstraction: Abstraction is showing only the essential information and hiding all the other unnecessary things and details.


Let’s say a man is driving a vehicle. The man only knows that pressing the accelerator will make the car go faster and that applying the brakes will make it stop, but he has no idea how pressing the accelerator actually increases the speed, nor how the accelerator, brakes, and other controls are implemented inside the vehicle. 

Abstract Class:

An abstract class is declared with the keyword “abstract”. It can contain abstract as well as non-abstract methods. An abstract class cannot be instantiated on its own; it offers a base class definition from which other classes can inherit and serves as a blueprint for developing concrete subclasses. 

Declaring an abstract method: abstract type name(parameter-list);

Properties of Abstract Class:  

  1. All classes must be declared abstract if they have one or more abstract methods.
  2. There cannot be any abstract class objects.
  3. We cannot declare abstract constructors or abstract static methods.
  4. We can declare static methods in an abstract class; they belong to the class itself, so no object of the abstract class is needed to call them. 
  5. Any subclass of an abstract class must either be declared abstract or implement every abstract method in the superclass.
  6. Abstract classes may have as much implementation as they see appropriate, including concrete methods (methods with bodies).
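Python expresses the same ideas through the standard abc module; the class names below are illustrative:

```python
from abc import ABC, abstractmethod

class Shape(ABC):                 # abstract class: cannot be instantiated
    @abstractmethod
    def area(self):               # abstract method: subclasses must override
        ...

    def describe(self):           # concrete method with a body is allowed
        return f"area = {self.area()}"

class Square(Shape):              # concrete subclass implements area()
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side
```

Attempting `Shape()` raises a TypeError, mirroring property 2 above, while `Square(3).describe()` works because the subclass implements every abstract method.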

A must-know for anyone heading into a CS interview, this question is frequently asked. HTTP and HTTPS are application-layer protocols used for transmitting data over the Internet.

Let us see what is a protocol and application-layer protocol. 

  • Protocol: A protocol is a set of rules for structuring and processing data in networking.
  • Application-layer Protocol: A protocol that works at the application level of the network communication model is known as an application-layer protocol. Some examples apart from HTTP and HTTPS are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), DNS (Domain Name System) and SSH (Secure Shell). 
  • HTTP: Files like HTML pages and photos are requested and sent from a web server to a client via HTTP. It is a stateless protocol, which implies that between requests, it keeps no track of the status of a connection. For networked, collaborative, hypermedia information systems, HTTP is utilized as an application-layer protocol. The HTTP protocol is used to interact between website visitors and server farms. World Wide Web uses it to control communications between web servers and browsers.
  • HTTPS: SSL/TLS encryption is used in HTTPS, a secure variant of HTTP, to safeguard the data being communicated between a client and a server. As a result, sending sensitive data over the Internet, including login passwords or financial information, via HTTPS is more secure. 

A response status code indicates whether a particular HTTP request has been completed successfully.

Responses are grouped into five categories:

  • Informational responses (100–199)
  • Successful responses (200–299)
  • Redirection messages (300–399)
  • Client error responses (400–499)
  • Server error responses (500–599)
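The grouping is purely by the hundreds digit, which a small helper makes explicit:

```python
def status_category(code: int) -> str:
    """Map an HTTP status code to one of the five response classes."""
    categories = {1: "Informational", 2: "Successful", 3: "Redirection",
                  4: "Client error", 5: "Server error"}
    if 100 <= code <= 599:
        return categories[code // 100]   # the hundreds digit picks the class
    raise ValueError(f"not an HTTP status code: {code}")
```

For example, 200 ("OK") is Successful, 404 ("Not Found") is a client error, and 503 ("Service Unavailable") is a server error.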

IP address: An IP address (Internet Protocol address) is a unique numerical identifier given to each device connected to a computer network that uses the Internet Protocol. An IP address serves two primary purposes: identifying a host or network interface, and giving its location within the network. Although IP addresses are usually written as human-readable text, computers interpret them as binary codes (1s and 0s).

There are two main versions of IP addresses: IPv4 and IPv6.


IPv4:

  • IPv4 uses 32-bit addresses, configured manually or via the Dynamic Host Configuration Protocol (DHCP); end-to-end connection integrity is not achievable with IPv4.
  • It can generate about 4.29 × 10^9 addresses. Security features are application-dependent.
  • IPv4 addresses are represented using decimal numbers.
  • Fragmentation is performed by senders and forwarding routers.
  • IPv4 has no built-in encryption or authentication capability. 
  • An IPv4 address is made up of four fields separated by dots (.).
  • IPv4 addresses fall into five distinct classes: A, B, C, D, and E.


IPv6:

  • IPv6 addresses are 128 bits long and support auto-configuration and renumbering.
  • End-to-end connection integrity is achievable with IPv6.
  • IPv6 has built-in security features such as Internet Protocol Security (IPsec). Its addresses are encoded in hexadecimal, and its huge address space can yield about 3.4 × 10^38 addresses. It supports multicast data delivery, provides encryption and authentication, and has a fixed header size of 40 bytes.
  • An IPv6 address is made up of 8 fields, each separated by a colon (:).
  • IPv6 addresses do not fall into classes.
  • Variable Length Subnet Mask (VLSM) is not supported by IPv6.
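Python's standard ipaddress module shows the two notations and sizes side by side:

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.1")   # four decimal fields, dots
v6 = ipaddress.ip_address("2001:db8::1")   # hex fields, colons, "::" compresses zeros

# IPv4 addresses are 32 bits long; IPv6 addresses are 128 bits long.
```

The `::` in the IPv6 literal stands for a run of all-zero fields, which is why the long form `2001:0db8:0000:0000:0000:0000:0000:0001` prints back in the compressed notation.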

These are all types of networking devices. 

  1. Repeater: A repeater functions at the physical layer. Its task is to regenerate the signal over the same network before it weakens or becomes corrupted, so that the signal can travel a longer distance over that network. A crucial distinction is that repeaters do not amplify the signal: when the signal deteriorates, they reproduce it bit by bit at its original strength. A repeater is a 2-port device.
  2. Hub: Essentially, a hub is a multiport repeater. A hub joins several wires that come from several branches, like the connector in a star topology that joins various stations. Data packets are delivered to all connected devices since hubs are unable to filter data. In other words, all hosts linked by Hub continue to share a single collision domain. Additionally, they lack the intelligence to choose the optimum route for data packets, which results in waste and inefficiency.
  3. Bridge: The data link layer is where a bridge functions. A bridge is a repeater with the additional capability of content filtering via source and destination MAC address reading. Additionally, it is utilised to link two LANs that use the same protocol. It is a 2-port device since it has a single input and output port.
  4. Switch: A switch is a multiport bridge with a buffer and a design that can increase its performance and efficiency (more ports mean less traffic). A data link layer device is a switch. The switch may carry out error checking before forwarding data, which makes it incredibly efficient because it only forwards good packets to the right port and does not transmit packets with mistakes. To put it another way, the switch separates the hosts' collision domains while keeping the broadcast domain constant.
  5. Routers: Similar to a switch, a router directs data packets depending on their IP addresses. Mainly a Network Layer device, the router. LAN and WAN connections are often made via routers, which also decide how to route data packets based on a dynamically updated routing table. The broadcast domains of hosts linked by a router are divided.
  6. Gateway: A gateway, as its name indicates, is a passageway that connects two networks—which may use various networking models—together. They essentially serve as the messengers who interpret and transport data from one system to another. Protocol converters are another name for gateways, which may work at any network layer. In general, gateways are more complicated than switches or routers.
  1. Socket: A socket is one endpoint of a two-way communication channel between two networked programmes. By creating named contact points between which the communication takes place, the socket mechanism offers a form of inter-process communication (IPC).
  2. Subnet: A network inside another network is known as a subnet, or subnetwork. Networks become more effective with subnets. Network communication can go a shorter distance to its destination without using extraneous routers thanks to subnetting. Networks operate more effectively when communications move as directly as possible, much like the postal service. In order to avoid sending data packets on an inefficient path to their destination, a network sorts and routes data packets it receives from another network according to subnet.
  3. Subnet Mask: Similar to an IP address, a subnet mask is only used internally within a network. Subnet masks are used by routers to direct data packets to the correct location. Data packets travelling over the Internet do not contain subnet mask information, instead, they simply contain the destination IP address, which a router will match with a subnet.
  4. Protocol: A protocol is a set of rules for structuring and processing data in networking.
  5. Cookie: A cookie is a piece of information from a website that is saved in a web browser for subsequent retrieval by the website. Cookies are used to let a server know whether visitors have visited a specific website again. A cookie gives information and enables the website to display customised settings and tailored content when visitors return. Additionally, cookies keep track of user preferences, the contents of the shopping cart, and login or registration information. This is done so that any data supplied in a previous session, or any specified preferences may be simply retrieved when visitors revisit websites.
  6. Star Topology: A network structure known as a "star topology" has all nodes connected to the hub, which serves as the centre node, through cables. The hub might be either active or passive. Repeaters are found in active hubs, but passive hubs are regarded as non-intelligent nodes. To the central node, which serves as a repeater for data transmission, each node has a reserved link.
  7. Mesh Topology: Mesh topology is a type of topology where every node is linked to every other node by a network channel. There is a point-to-point link in mesh topology. It can link n nodes using n(n-1)/2 network channels. Routing and flooding are two of the data transfer methods available in mesh topology. The nodes in the routing method each have a routing logic, such as the logic for the quickest path to the target node or the logic to prevent routes with broken connections. The network nodes in the flooding strategy all get the same data. We no longer require routing logic as a result. Although this method strengthens the network, it adds unneeded load.
  8. Tree Topology: A tree topology is a topology in which all nodes are connected to the topmost node, or root node, in a hierarchical manner. It is also known as hierarchical topology for this reason. Three degrees of hierarchy are included in tree topology. Wide Area Network uses tree topology. It is a fusion of both Bus and Star topologies.
  9. Hybrid Topology: A network topology that combines two or more distinct topologies is known as a hybrid topology. It is an expensive topology but one that is also dependable and scalable. It benefits and suffers from the topologies that were utilised to construct it.  
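The subnet and subnet-mask ideas above (items 2 and 3) come down to a single bitwise operation: a router ANDs the destination IP address with the mask to recover the network address. The sketch below illustrates this with made-up addresses; the class and method names are illustrative.

```java
// Sketch: how a router applies a subnet mask. IPv4 addresses are modeled
// as 32-bit integers; the network address is simply (IP AND mask).
public class SubnetDemo {
    // Parse dotted-quad "a.b.c.d" into a 32-bit integer.
    static int toInt(String ip) {
        String[] p = ip.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8)  |  Integer.parseInt(p[3]);
    }
    // Format a 32-bit integer back into dotted-quad notation.
    static String toDotted(int addr) {
        return ((addr >>> 24) & 255) + "." + ((addr >>> 16) & 255) + "."
             + ((addr >>> 8) & 255) + "." + (addr & 255);
    }
    // The network address is the bitwise AND of the IP and the mask.
    static String networkOf(String ip, String mask) {
        return toDotted(toInt(ip) & toInt(mask));
    }
    public static void main(String[] args) {
        // 192.168.10.37 with mask 255.255.255.0 lies in network 192.168.10.0
        System.out.println(networkOf("192.168.10.37", "255.255.255.0"));
    }
}
```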
  • Static Memory Allocation:
    • Allocation is carried out using Stack Memory. 
    • The variables' memory space is reserved permanently until the programme or function call is finished.  
    • During the compilation step, memory allocation takes place. 
    • Declared memory is retained during the entirety of the program's execution.   
    • When compared with dynamic memory allocation, it is simple.  
    • In this allocation, allotted memory cannot be utilised again.  
    • It has ineffective memory management. 
    • In static memory, the memory size cannot be modified after it has been allocated to a programme. 
  • Dynamic Memory Allocation
    • During the runtime step, memory allocation takes place.  
    • The new keyword only uses memory allocation for the variables when it is necessary.  
    • Heap memory is used for allocation.  
    • If memory is allocated for a programme in dynamic memory, the memory size can be altered later. 
    • It has improved memory management.  
    • If the memory that was allocated is no longer needed, it can be utilised again.  
    • When it comes to defining multi-dimensional arrays, it is more difficult.  
    • Memory that has been declared can be released, reused, and assigned to other purposes.
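The contrast above can be sketched in Java, where local primitives behave like statically allocated (stack) storage that disappears when the method returns, while `new` performs dynamic (heap) allocation that outlives the call. The names below are illustrative.

```java
// Sketch: stack vs. heap allocation in Java.
// The local variable 'count' lives in the method's stack frame and is
// reclaimed on return; the array created with 'new' lives on the heap
// and survives for as long as something references it.
public class AllocationDemo {
    static int[] makeBuffer(int size) {
        int count = size;        // stack-allocated local, freed on return
        return new int[count];   // heap-allocated array, outlives the call
    }
    public static void main(String[] args) {
        int[] buffer = makeBuffer(4); // heap array is still reachable here
        buffer[0] = 42;
        System.out.println(buffer.length + " " + buffer[0]);
    }
}
```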

A theoretical framework known as the OSI (Open Systems Interconnection) model specifies and standardises a communication system's operations in seven different levels. 

  1. Physical layer: It addresses the functional, electrical, and mechanical aspects of the physical connection between devices, including bit transmission over a physical medium such as copper wire or optical fibre.
  2. Data Link Layer: It offers flow management, medium access control, and error detection and correction, providing dependable transmission of data frames through the physical layer.
  3. Network Layer: The network layer is responsible for logical addressing and routing. It determines the path that data packets take from the source host to the destination host, possibly across multiple networks.
  4. Transport Layer: Data transmission from the source host to the destination host must be error-free and end-to-end. This is the responsibility of the transport layer.

The functions of the transport layer are: 

It makes it possible for the hosts who are communicating to have a discussion. 

It can also provide flow control, verification, and error checking. 

  1. Session Layer: The session layer is the fifth layer of the Open System Interconnection (OSI) model. It enables users on different devices to establish active communication sessions with one another. Its responsibilities are establishing, maintaining, synchronising, and terminating sessions between end-user applications. In the session layer, streams of data are received, tagged, and correctly resynchronised, which preserves message boundaries and guards against data loss. In a nutshell, this layer creates a link between the session components. 
  2. Presentation Layer: The presentation layer is the sixth layer of the Open System Interconnection (OSI) model. It is also known as the translation layer because it acts as a data translator for the network: it extracts and modifies the data it receives from the Application Layer into a form suitable for transfer across the network. Its primary duty is to define the data format and encryption. The presentation layer is sometimes called the syntax layer because it ensures that the data it receives or passes to other layers has the correct syntax.
  3. Application Layer: It is in charge of determining the structure and meaning of data that is sent between apps as well as providing services to the user. 

Note: In TCP/IP model, its application layer combines the features of the OSI model's session layer, presentation layer, and application layer. 

The idea of concurrency control falls within the database management system's Transaction category (DBMS). A DBMS procedure lets us manage two concurrent processes so that they may run without conflicting with one another, which happens in multi-user systems.

Executing many transactions simultaneously is the simplest definition of concurrency. To improve time efficiency, it is necessary. Inconsistency develops when many transactions attempt to access the same piece of data. Data consistency requires the use of concurrency control.

If we use ATMs as an example, numerous people cannot withdraw money simultaneously at various locations if concurrency is not used. We require concurrency here.

The following are concurrency control's advantages: 

  • There will be less waiting. 
  • The turnaround time will shorten. 
  • The use of resources will increase. 
  • Improved system performance & efficiency. 

Main problems in using concurrency: 

  • Lost updates: If one transaction modifies data and another transaction overwrites or undoes that modification before it is saved, the first update is lost — the changes of one transaction override those of another. 
  • Uncommitted dependency (dirty read): A transaction reads data written by another transaction that has not yet committed. If that other transaction rolls back, the value read is invalid, so the reader works with false or stale values. 
  • Inconsistent retrievals: While one transaction reads several data items, another transaction updates some of them partway through, so the reader sees the same data in inconsistent states. 

Concurrency control strategies: The strategies for concurrency control are as follows:
Locking: Lock ensures that a current transaction has sole access to certain data objects. It initially obtains a lock to get access to the data items, and once the transaction is over, it releases the lock.
The various lock types are as follows:

  • Shared Lock [Transaction can only read the values of data items]. 
  • Exclusive Lock [Transaction can both read and write the data item; no other transaction may access it concurrently] 
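The two lock types can be sketched with Java's `ReentrantReadWriteLock`, whose read lock is shared (many concurrent readers) and whose write lock is exclusive (one writer at a time). The account class here is illustrative and not tied to any particular DBMS.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of shared vs. exclusive locks: readers take the shared read
// lock, writers take the exclusive write lock.
public class AccountLockDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int balance = 100;

    int read() {                // shared lock: concurrent readers allowed
        lock.readLock().lock();
        try { return balance; } finally { lock.readLock().unlock(); }
    }
    void write(int delta) {     // exclusive lock: one writer at a time
        lock.writeLock().lock();
        try { balance += delta; } finally { lock.writeLock().unlock(); }
    }
    public static void main(String[] args) {
        AccountLockDemo acct = new AccountLockDemo();
        acct.write(50);
        System.out.println(acct.read());
    }
}
```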

Relationship constraints are rules that specify how different database entities relate to one another. In order to ensure that data is saved and retrieved consistently and properly, they are used to enforce business rules and preserve data integrity. 

There are two main types of binary relationship constraints. 

  1. Mapping Cardinality 
  2. Participation Constraints 

Mapping Cardinality / Cardinality Ratio:
The number of entities that can be associated with another entity through a relationship is known as the mapping cardinality or cardinality ratio.

Types of Mapping Cardinality:

One to One: An entity in entity set A associates with no more than one entity in entity set B. And a B entity can only be related to a single A entity.

E.g., Let’s take an example of student database, where each student has only one roll number.

In this example, the relationship between the "Student" entity and the "Roll Number" entity would be a one-to-one relationship, as each student has only one roll number and each roll number is assigned to only one student. This can be represented using the following mapping cardinality: 

  • One student has one roll number (1:1) 
  • One roll number is assigned to one student (1:1) 

One to Many: An entity in A can be associated with any number (N) of entities in B, but an entity in B can be associated with at most one entity in A.

E.g., Let's take the example of a bookstore database. Each book (entity B) can be written by only one author (entity A), but an author can have written multiple books. Therefore, the relationship between authors and books would be a one-to-many relationship. 

This can be represented using the following mapping cardinality: One author can write many books (1:N) 

Many to One: An entity in A is associated with at most one entity in B, but an entity in B can be associated with any number (N) of entities in A.
E.g., Let’s take the same example of student database, where many students can have the same academic advisor. 

In the case of the student database, many students could potentially have the same academic advisor (entity B), but one academic advisor can advise many students (entities A). Therefore, the relationship between students and academic advisors would be a many-to-one (N:1) relationship.

Many to Many: An entity in A can be associated with any number of entities in B, and an entity in B can be associated with any number of entities in A.

E.g., Again, let’s take an example of student database, where many students can be taught by many teachers. 

In the case of the student database where many students can be taught by many teachers, each student can be associated with multiple teachers (entities B), and each teacher can teach multiple students (entities A). Therefore, the mapping cardinality for this relationship would be many-to-many (N:N). 

Participation Constraints: Participation constraints can be defined as the minimum and maximum participation of an entity in a relationship. They determine whether an entity must participate in a relationship or not. 

There are 2 types of participation constraints: - 

  • Total Participation 
  • Partial Participation 
  • Firewall: A firewall is a cybersecurity device used to control network traffic. Firewalls can isolate network nodes from internal and external traffic, or from specific applications. They may be implemented in hardware, software, or the cloud, and each form has its own advantages and disadvantages. A firewall's main objective is to let legitimate traffic pass while blocking malicious requests and data packets. Based on their overall structure and mode of operation, firewalls fall into eight types: packet-filtering firewalls, stateful inspection firewalls, circuit-level gateways, proxy firewalls (also known as application-level gateways), next-generation firewalls, software firewalls, hardware firewalls, and cloud firewalls.
  • FTP: File Transfer Protocol, or FTP, is a network-based client-server protocol for file transfers between clients and servers.
  • SMTP: Simple Mail Transfer Protocol, or SMTP, is a standard for sending and receiving electronic mail that specifies the rules and semantics (e-mails).
  • DNS: Domain Name System, or DNS, is a naming scheme used for networked devices. It offers assistance in converting domain names to IP addresses.
  • Bandwidth: A network's bandwidth is the difference between its upper and lower frequency limits. It also denotes a network's data transfer rate, measured in bits per second (bps).
  • Node: A node is essentially a location where a connection happens. It might be a piece of hardware or a network.
  • TELNET: TELNET offers hosts across the network bi-directional text-oriented services for remote login. 


 In Java, an interface is a blueprint of a class. It contains abstract methods and static constants, and it is the tool Java uses to achieve abstraction and a form of multiple inheritance. Traditionally, an interface could contain only abstract methods with no method bodies; since Java 8, default and static methods with bodies are also permitted. Using the keyword interface, you can fully abstract a class's interface from its implementation.  

In C++, there is no separate interface construct; abstract classes with pure virtual functions serve the same purpose. 

The properties of interfaces are:   

  • Similar to abstract classes, interfaces cannot be instantiated (used to create objects). 
  • Interface methods have no body by default; the implementing class provides one.  
  • A class implementing an interface must override all of its abstract methods.  
  • Interface methods are abstract and public by default.  
  • Interface attributes are public, static, and final by default.  
  • Since Java 8, an interface can contain default and static methods; since Java 9, private methods as well. 
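A minimal sketch of these properties follows; the `Shape` and `Circle` names are made up, and the private interface method assumes Java 9 or later.

```java
// Sketch of interface features: implicit constants, abstract methods,
// and default/static/private methods (private requires Java 9+).
interface Shape {
    double PI = 3.14159;                 // implicitly public static final
    double area();                       // implicitly public abstract
    default String describe() {          // default method with a body
        return label() + " area=" + area();
    }
    static Shape unit() {                // static method on the interface
        return new Circle(1.0);
    }
    private String label() {             // private helper (Java 9+)
        return "shape";
    }
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return PI * r * r; } // must stay public when overriding
}

public class InterfaceDemo {
    public static void main(String[] args) {
        System.out.println(Shape.unit().describe());
    }
}
```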
32-Bit OS vs. 64-Bit OS: 

  • A 32-bit operating system can access 2^32 memory locations (4 GB of physical memory) and uses 32-bit registers; a 64-bit operating system can access 2^64 memory locations (17,179,869,184 GB of physical memory) and uses 64-bit registers. 
  • A 32-bit CPU design can process 32 bits of information and data per instruction; a 64-bit CPU design can process 64 bits. 
  • A 32-bit OS is less secure; a 64-bit OS is more secure and less prone to hacking. 
  • A 32-bit OS is less efficient; a 64-bit OS is more efficient. 
  • A 32-bit OS is cheaper; a 64-bit OS is costlier. 

Benefits of 64-bit operating systems over 32-bit ones:  

  • Addressable Memory: 64-bit CPUs possess comparatively very high memory addresses than 32-bit. 
  • Resource usage: Adding extra RAM to a machine running a 32-bit operating system has no performance impact, because a 32-bit OS can address at most 4 GB of RAM regardless of how much is physically installed. Install a 64-bit version of Windows on that system, however, and it can address all the installed RAM, which improves performance when running demanding applications or multitasking. 
  • Performance: The registers are where all calculations are done. Operands are loaded from memory into registers when you execute math in your code. Therefore, having bigger registers enables you to do more complex calculations simultaneously.  
    • While a 64-bit CPU can process 8 bytes of data in a single instruction cycle, a 32-bit processor can only process 4 bytes in a single cycle.  
    • (Depending on a processor design, there might be hundreds to billions of instruction cycles in 1 second.)  
  • Compatibility: 64-bit CPUs may run 64-bit and 32-bit operating systems. Although 32-bit OS can only be executed on 32-bit CPU.  

The components of operating system are: 

  • Process Management
  • File Management
  • Network Management
  • Main Memory Management
  • Secondary Storage Management
  • I/O Device Management
  • Security Management
  • Command Interpreter System
    • Process Management
      The process management component controls the many processes that are active in the operating system at once. Every running software program is associated with one or more processes; for instance, a process is active whenever you use a browser application such as Chrome.
    • File Management
      A file is a group of associated data that its author has specified. It frequently represents data and codes (both in source and object representations). Alphabetic, numeric, or alphanumeric data files are all acceptable. File management is in charge of classifying, saving, and retrieving files.
    • Network Management
      Network management controls how the computer communicates with other networked devices.
    • Main Memory Management
      Main memory management refers to managing the computer's primary memory (RAM). It is responsible for allocating and deallocating memory, managing fragmentation, memory mapping, cache management, paging, and swapping. Because main memory is expensive, its capacity is limited; however, a program must reside in main memory in order to run.
    • Secondary Storage Management
      Main memory is far too small to permanently hold all data and programs, so the computer system provides secondary storage as a backup. Modern computers mainly use hard drives and SSDs for both applications and data, though CD/DVD drives and USB flash drives are also handled by secondary storage management. 
    • I/O Device Management
      It is the component in charge of controlling input and output processes, including keyboard, peripheral, and disc reading and writing. It offers a uniform interface for I/O operations and data transmission efficiency by buffering data.
    • Security Management
      The component in charge of implementing access restrictions and guarding the system against intrusions and harmful assaults.
    • Command Interpreter System
      The command interpreter is one of the most crucial parts of an operating system. The command interpreter serves as the user’s main point of contact with the rest of the system. 

Virtual Memory: Virtual memory enables a computer to compensate for physical memory shortages by temporarily moving pages of data from random access memory (RAM) to disc storage. 

It gives the user the impression that their primary memory is quite large. By considering a portion of secondary memory as the main memory, this is accomplished. (Swap-space)

In order to be executed, instructions must reside in physical memory. This requirement, however, limits program size to the size of physical memory, and in many cases the full program is not even needed at once.

Therefore, having the capacity to run programmes that are only partially in memory would provide a number of advantages:

  • Program would no longer be limited by the amount of available physical memory.
  • Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

Virtual Function: A member function that is declared in a base class and redefined (overridden) by a derived class is known as a virtual function. 

  • Virtual functions can't be static. 
  • It can be a friend function of a class. 
  • A class cannot have a virtual constructor, but it can have a virtual destructor. 
  • Virtual functions should have the same prototype in both the base class and any derivations. 
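The list above is C++-oriented, but Java treats every non-static, non-final instance method as virtual, so the same dynamic-dispatch behaviour can be sketched directly (class names here are illustrative):

```java
// Sketch of virtual-function dispatch: the call resolves against the
// runtime type of the object, not the declared type of the reference.
class Base {
    String greet() { return "base"; }
}
class Derived extends Base {
    @Override String greet() { return "derived"; } // overrides the "virtual" method
}
public class VirtualDemo {
    static String callThrough(Base b) { return b.greet(); } // dynamic dispatch
    public static void main(String[] args) {
        System.out.println(callThrough(new Derived())); // dispatches to Derived
    }
}
```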

Page Replacement: Operating systems that use virtual memory and Demand Paging require page replacement. As is well known, only a certain set of pages for a process are loaded into memory when using demand paging. This is done so that multiple processes can run concurrently in memory. The operating system must determine which page will be substituted when a process requests the execution of a page that is present in virtual memory. This procedure, called page replacement, is a crucial part of managing virtual memory.  

The different types of Page Replacement Algorithms are:  

First In First Out (FIFO): This page replacement technique is the simplest. The operating system maintains a queue for all of the memory pages in this method, with the oldest page at the front of the queue. The first page in the queue is chosen for removal when a page needs to be replaced. 

Optimal Page Replacement: The optimal page replacement algorithm always chooses to replace the page that won't be utilised for the longest period of time in the future. By doing this, performance is enhanced and page faults are reduced. 

Optimal page replacement algorithm is the best page replacement algorithm as it gives the least number of page faults.  

Optimal page replacement algorithm cannot be implemented practically. Because it requires knowledge of the future memory access patterns, which is not possible to predict with certainty in most practical scenarios. As a result, the optimal page replacement algorithm is not practical for use in real-world systems. 

Least Recently Used (LRU): Using this algorithm, the page that was least recently used is replaced.   

Most Recently Used (MRU): Using this approach, the page that was most recently used is replaced.

Belady’s anomaly: The phenomenon where increasing the number of page frames causes an increase in the frequency of page faults for a specific memory access pattern is known as Belady's anomaly.

The following page replacement algorithms frequently encounter this phenomenon:

  1. First in, first out (FIFO) 
  2. Second chance algorithm
  3. Random page replacement algorithm
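Belady's anomaly can be reproduced with a small FIFO simulator. For the classic reference string below, giving the process 4 frames produces more page faults than giving it 3. The code is an illustrative sketch, not taken from any particular OS.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// FIFO page-replacement simulator demonstrating Belady's anomaly:
// for this reference string, 3 frames -> 9 faults, 4 frames -> 10 faults.
public class BeladyDemo {
    static int fifoFaults(int[] refs, int frames) {
        Deque<Integer> memory = new ArrayDeque<>(); // head = oldest page
        int faults = 0;
        for (int page : refs) {
            if (!memory.contains(page)) {            // page fault
                faults++;
                if (memory.size() == frames) memory.removeFirst(); // evict oldest
                memory.addLast(page);
            }
        }
        return faults;
    }
    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        System.out.println("3 frames: " + fifoFaults(refs, 3) + " faults");
        System.out.println("4 frames: " + fifoFaults(refs, 4) + " faults");
    }
}
```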

Race Condition: A race condition occurs when two or more threads access shared data concurrently and at least one of them tries to modify it. Because the thread scheduler can switch between threads at any time, you cannot predict the order in which the threads access the shared data; the outcome of the modification therefore depends on the scheduling, with the threads "racing" to access or change the data.

Solution to Race Condition

  • Atomic operations 
  • Mutual exclusion using locks 
  • Semaphores 

Note: We cannot use a simple flag variable to solve the problem of race condition. 
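One of the listed solutions, atomic operations, can be sketched in Java: two threads each increment a shared counter, and because `AtomicInteger.incrementAndGet()` is an indivisible read-modify-write, no update is lost. (With a plain `int` and `counter++`, increments would intermittently be lost.)

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: fixing a race condition with an atomic operation.
// Two threads increment a shared counter 100,000 times each; the
// atomic increment guarantees the final value is exactly 200,000.
public class RaceDemo {
    static int countWithAtomic() {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter.incrementAndGet();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();           // wait for both threads to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }
    public static void main(String[] args) {
        System.out.println(countWithAtomic());
    }
}
```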

Paging: Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving processes in the form of pages from the secondary storage into the main memory is known as paging. The basic purpose of paging is to separate each procedure into pages. Additionally, frames will be used to split the main memory. This scheme permits the physical address space of a process to be non – contiguous. Paging avoids external fragmentation. 
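The page/frame mechanics above reduce to simple arithmetic: the page number is the high part of a logical address and the offset is the low part, and a page table maps pages to frames. The 4-entry page table below is made up for illustration.

```java
// Sketch of paged address translation: logical address -> (page, offset),
// page table maps page -> frame, physical = frame * pageSize + offset.
public class PagingDemo {
    static final int PAGE_SIZE = 1024;            // 1 KiB pages (illustrative)
    static final int[] PAGE_TABLE = {5, 2, 7, 0}; // page -> frame (made up)

    static int translate(int logical) {
        int page = logical / PAGE_SIZE;           // which page
        int offset = logical % PAGE_SIZE;         // position inside the page
        return PAGE_TABLE[page] * PAGE_SIZE + offset;
    }
    public static void main(String[] args) {
        // Logical address 2100 = page 2, offset 52 -> frame 7 -> 7*1024 + 52
        System.out.println(translate(2100));
    }
}
```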

TCP: Transmission Control Protocol is a transport layer protocol. It is connection-oriented and transmits data reliably and error-free from the source to the destination computer; a connection is established between the peer entities prior to transmission. On the transmitting host, TCP segments the incoming byte stream and assigns each segment a sequence number; on the receiving host, TCP reorders the segments and acknowledges correct receipt to the sender. 

UDP: User Datagram Protocol is a transport layer protocol. It is a message-oriented protocol that offers a simple, unreliable, connectionless, unacknowledged service. It is appropriate for applications that do not need the flow control, error control, or sequencing features of TCP, and it is used to send small amounts of data where delivery speed is more crucial than delivery precision. 

SCTP: Stream Control Transmission Protocol is a transport layer protocol. It incorporates elements of both TCP and UDP. It provides a dependable, connection-oriented service like TCP while being message-oriented like UDP. It is utilised for Internet telephony. 

Logical Address Space: The CPU generates the logical address during program execution. Because it does not physically exist, it is also called a virtual address. The CPU uses this address when attempting to access a physical memory location. The set of all logical addresses generated from a program's perspective is called the "logical address space." The Memory-Management Unit (MMU) is the hardware that maps logical addresses to their corresponding physical addresses.

  • Swap-Out: Swap-Out refers to the operation of shifting a process from main memory to secondary memory. 
  • Swap-In: Swap-In refers to the activity of moving a process from secondary memory to main memory.
  • Swap-Space: Swap-space refers to the portion of the disc where processes that have been swapped out are kept. 

Resource Allocation Graph:

A system resource allocation graph is a directed graph that can be used to more precisely define deadlock. The graph consists of a set of edges (set of "E"s) and a set of "V"s or vertices. The vertices V are divided into two groups of nodes, P (P1, P2, P3, etc.), which represent all of the system's processes, and R (R1, R2, R3, etc.), which represent all of the system's resource types.

P -> R represents the direct edge between P and R, meaning that P has asked for an instance of resource type R and is now waiting for that resource.

A resource type R instance has been assigned to the process P, which is indicated by the direct edge from R to P being denoted as R->P.

Here, the resource categories and processes are depicted visually by rectangles and circles, respectively.

If the resource allocation graph contains no cycle, the system is not in deadlock. If there is a cycle, the system may or may not be in deadlock.
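Checking a resource allocation graph for a cycle is ordinary directed-graph cycle detection. The sketch below models processes and resources as numbered nodes and runs a depth-first search; the example graph is made up.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: DFS cycle detection on a resource-allocation graph.
// Nodes stand for processes and resources; edges are request (P -> R)
// and assignment (R -> P) edges. A cycle signals possible deadlock.
public class RagCycleDemo {
    static boolean hasCycle(List<List<Integer>> adj) {
        int[] state = new int[adj.size()]; // 0=unvisited, 1=on stack, 2=done
        for (int v = 0; v < adj.size(); v++)
            if (state[v] == 0 && dfs(v, adj, state)) return true;
        return false;
    }
    static boolean dfs(int v, List<List<Integer>> adj, int[] state) {
        state[v] = 1;
        for (int w : adj.get(v)) {
            if (state[w] == 1) return true;      // back edge -> cycle found
            if (state[w] == 0 && dfs(w, adj, state)) return true;
        }
        state[v] = 2;
        return false;
    }
    public static void main(String[] args) {
        // Nodes: 0=P1, 1=R1, 2=P2, 3=R2.
        // P1 -> R1 -> P2 -> R2 -> P1 forms a cycle (possible deadlock).
        List<List<Integer>> g = new ArrayList<>();
        for (int i = 0; i < 4; i++) g.add(new ArrayList<>());
        g.get(0).add(1); g.get(1).add(2); g.get(2).add(3); g.get(3).add(0);
        System.out.println(hasCycle(g));
    }
}
```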

Resource request/release life cycle:

The following sequence describes the resource request/release life cycle of OS:

  • The process will request the resource. 
  • OS demonstrates a clear validation of the process request. 
  • The OS will check to see if the resource is available if the request made by the process is legitimate. 
  • The resource will be allocated to the process if it is readily accessible. If not, the procedure would have to wait. 
  • The process will begin execution if all of the resources needed at the outset are allotted. 
  • The process will release the resource after the execution of the process is finished.


Whenever a subclass needs to refer to its immediate superclass, it can do so by use of the keyword ‘super’.

Thus, super( ) always refers to the superclass immediately above the calling class.

This is true even in a multilevel hierarchy. 

It is used in variables, methods and constructors. 


class Animal { 
    public void makeSound() { 
        System.out.println("Animal make a sound"); 
    } 
} 

class Human extends Animal { 
    public void makeSound() { 
        super.makeSound(); // calls the makeSound() method of the parent class 
        System.out.println("Human speaks"); 
    } 
} 

public class Main { 
    public static void main(String[] args) { 
        Human A = new Human(); 
        A.makeSound(); 
    } 
} 

Output: 

Animal make a sound 
Human speaks 

In this example, we have an Animal class and a Human class that extends the Animal class. The Animal class has a makeSound() method that simply prints out a message. The Human class overrides the makeSound() method and first calls the makeSound() method of the parent class using super.makeSound(), and then prints out a message specific to the Human class. 

In C++, the super keyword is not used to refer to the parent class. 


When we are declaring a method, adding the modifier final at the beginning can prevent it from being overridden. Methods marked as final are incompatible with overriding. Methods marked as final can occasionally improve performance: Since the compiler "knows" they won't be replaced by a subclass, it is free to inline calls to them.

Sometimes we will want to prevent a class from being inherited. To do this, precede the class declaration with final. Declaring a class as final implicitly declares all of its methods as final, too.

As we might expect, it is not possible to declare a class as both abstract and final since an abstract class is incomplete by itself & relies upon its subclasses to provide complete implementations.


final public class MyClassName { 
    // Class definition 
} 
This class can no longer be inherited by another class. 

Const: Whenever the const keyword is applied to a method, variable, pointer variable, or class object, that method/variable/object is not allowed to change the value of its data items. 


const int a = 5;

Now, the value of ‘a’ cannot be changed further.

One of the most frequently posed Computer Science Interview Questions, be ready for it.  

Port: Virtual points where network connections begin and stop are called ports. Ports are controlled by the operating system of a computer and are software-based. Each port is connected to a distinct procedure or service. Using ports, computers may distinguish between different types of communication with ease. For example, although using the same Internet connection, emails travel to a separate port than webpages.

Port Number: Every network-connected device uses a standard set of ports, each identified by a port number. Most ports are reserved for particular protocols; for instance, port 80 is the designated port for all HTTP traffic. Whereas IP addresses let messages travel to and from specific devices, port numbers make it possible to target specific services or applications within those devices. 

There are 65,535 potential port numbers, however not all of them are often used. The following are some of the most popular ports and the corresponding networking protocols: 

  • Ports 20 and 21: File Transfer Protocol (FTP). FTP is used for file transfers between a client and a server.  
  • Port 22: Secure Shell (SSH). SSH is one of several tunnelling protocols used to establish secure network connections. 
  • Port 25: Simple Mail Transfer Protocol (SMTP). SMTP is used for sending email. 
  • Port 53: Domain Name System (DNS). DNS links human-readable domain names to machine-readable IP addresses, so users can load websites and apps without memorising a long list of IP addresses. 
  • Port 80: Hypertext Transfer Protocol (HTTP). HTTP is the protocol that powers the World Wide Web. 
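The mapping above can be captured in a small lookup table. This is an illustrative sketch (the dictionary here is hand-written from the list above, not pulled from any registry); Python's standard library also offers `socket.getservbyname("http")` for looking up well-known ports from the system's services database.

```python
# A few well-known port numbers and the services reserved for them
# (an illustrative subset of the 65,535 possible ports).
WELL_KNOWN_PORTS = {
    20: "FTP (data)",
    21: "FTP (control)",
    22: "SSH",
    25: "SMTP",
    53: "DNS",
    80: "HTTP",
}

def service_for(port):
    """Return the service name reserved for a port, if known."""
    return WELL_KNOWN_PORTS.get(port, "unassigned/ephemeral")

print(service_for(80))  # HTTP
print(service_for(22))  # SSH
```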

Deadlock occurs when two or more processes are unable to move forward because each is waiting for an event (typically the release of a resource) that only another waiting process can cause.

Resources assigned to a process are typically not preemptable. Once a resource is allocated to a process, the system has no simple way to take it back unless the process voluntarily releases it or an administrator kills the process. This is what makes deadlock possible.

A set of processes or threads is deadlocked when every member of the set is waiting for a resource held by another member of the set.

In a database, a deadlock occurs when multiple transactions are each waiting for the others to release their locks. 

The necessary conditions for deadlock are as follows:

  • Mutual Exclusion: At least one resource must be held in a non-shareable mode, meaning only one process may use it at a time. If another process requests that resource, it must wait until the resource is released.
  • Hold and Wait: A process must be holding at least one resource while waiting to acquire additional resources held by other processes.
  • No Pre-emption: A resource can be released only voluntarily by the process holding it, after that process has finished its task.
  • Circular Wait: A set of waiting processes (P0, P1, P2, ..., Pn) must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

All four conditions must hold simultaneously for deadlock to occur.
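One common way to prevent deadlock is to break the circular-wait condition by always acquiring locks in the same global order. The sketch below (illustrative names, using Python's `threading` module) shows two threads that each need two locks; because both acquire them in the same order, a cycle can never form.

```python
import threading

# Two resources guarded by two locks. If thread A took lock1 then lock2
# while thread B took lock2 then lock1, all four deadlock conditions could
# hold and the program might hang forever. Acquiring the locks in one
# agreed global order breaks the circular-wait condition.
lock1, lock2 = threading.Lock(), threading.Lock()
results = []

def worker(name):
    # Both threads acquire the locks in the SAME order: lock1, then lock2.
    with lock1:
        with lock2:
            results.append(name)

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['A', 'B'] -- both threads finish; no deadlock
```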

A process can recover from deadlock in the following ways:

  • Killing the process:
    • Kill all the processes which are involved in the deadlock.
    • Kill one process after the other based on priority.
  • Resource pre-emption: resources are pre-empted from the processes involved in the deadlock and allocated to other processes, so that the system can recover from the deadlock. A process whose resources are repeatedly pre-empted may, however, suffer starvation.
  • Ostrich Algorithm: simply ignore the deadlock (practical when deadlocks are rare and prevention is expensive).

Condition Variable: A condition variable is a synchronisation primitive that allows a thread to wait until a certain condition is met.

Condition variables are used to avoid busy waiting. 
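A minimal sketch of this idea using Python's `threading.Condition` (names here are illustrative): the consumer sleeps inside `wait()` instead of spinning, and the producer wakes it with `notify()`.

```python
import threading

# A condition variable lets a consumer sleep until a producer signals
# that data is ready, instead of burning CPU in a busy-wait loop.
cond = threading.Condition()
items = []

def consumer():
    with cond:
        while not items:          # re-check the condition (guards against
            cond.wait()           # spurious wakeups); wait() releases the
        return items.pop()        # lock while blocked

def producer(value):
    with cond:
        items.append(value)
        cond.notify()             # wake one waiting consumer

result = []
t = threading.Thread(target=lambda: result.append(consumer()))
t.start()
producer(42)
t.join()
print(result)  # [42]
```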


At its core, a semaphore is simply an integer variable shared across threads. 

  1. Unlike a mutex, which permits only one thread at a time to access a single shared resource, a semaphore allows multiple threads to access a finite set of resource instances. 
  2. The wait() and signal() semaphore operations can be defined so as to eliminate busy waiting. When a process executes wait() and finds that the semaphore value is not positive, it must wait; but rather than busy-waiting, the process blocks itself. The block operation changes the process's state to Waiting and places it in a waiting queue associated with the semaphore. The CPU scheduler then selects another process to run. 
  3. When another process executes signal(), a blocked process waiting on semaphore S is restarted. A wakeup() operation moves it from the waiting state to the ready state, and the process is then placed in the ready queue. 

There are 2 types of semaphores: 

Binary semaphore

  • Also called a mutex lock. 
  • The value of it can be 0 or 1. 

Counting semaphore

  • Can range over an unbounded domain. 
  • It is possible to use it to restrict access to a resource that has a limited number of instances. 
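To illustrate a counting semaphore, here is a small sketch with Python's `threading.Semaphore` (variable names are illustrative): a semaphore initialised to 2 guarantees that at most two of the five threads are ever inside the resource at once.

```python
import threading

# A counting semaphore initialised to 2 lets at most two threads use a
# resource that has a limited number of instances at the same time.
sem = threading.Semaphore(2)
active = 0   # threads currently using the resource
peak = 0     # highest concurrency ever observed
guard = threading.Lock()

def use_resource():
    global active, peak
    with sem:                 # wait(): blocks if both instances are taken
        with guard:
            active += 1
            peak = max(peak, active)
        # ... use the resource here ...
        with guard:
            active -= 1
    # leaving the 'with sem' block performs signal(), waking a waiter

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 2)  # True -- never more than two threads inside
```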

Peterson’s solution: Peterson's solution can be used to eliminate race conditions, but it works only for two processes or threads. 
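A sketch of Peterson's algorithm for two threads (illustrative, relying on CPython's GIL for the sequential consistency the algorithm assumes; on real hardware you would need memory barriers): each thread raises its flag, hands the turn to the other, and spins only while the other thread is both interested and holds the turn.

```python
import threading

# Peterson's solution for threads 0 and 1. The flag array records who
# wants to enter the critical section; 'turn' breaks ties.
flag = [False, False]
turn = 0
counter = 0

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(10_000):
        flag[me] = True               # entry section: I am interested
        turn = other                  # politely let the other go first
        while flag[other] and turn == other:
            pass                      # busy-wait (Peterson's drawback)
        counter += 1                  # critical section (not atomic!)
        flag[me] = False              # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 20000 -- no increments lost, mutual exclusion held
```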

Mutex/Locks: By limiting access to the critical section to a single thread or process at a time, locks can be used to enforce mutual exclusion and prevent race conditions. Their drawbacks: threads can suffer starvation and priority inversion, locks can lead to deadlock, and lock-related bugs are hard to debug. 

Critical Section: The critical section is the code segment in which processes or threads access and modify shared resources such as files and shared variables. Because processes and threads execute concurrently, any of them can be pre-empted mid-execution, which is why access to the critical section must be synchronised.
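As a sketch of protecting a critical section with a mutex (illustrative names, Python's `threading.Lock`): `counter += 1` compiles to a separate read, add, and write, so unsynchronised threads could interleave and lose updates; holding the lock makes the section effectively atomic.

```python
import threading

# "counter += 1" is a read-modify-write sequence, i.e. a critical
# section. A mutex ensures only one thread executes it at a time,
# so no updates are lost.
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread inside at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(50_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- all 4 * 50,000 increments accounted for
```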

Concurrency: The simultaneous execution of several instruction sequences is known as concurrency. When many process threads are active at once, it occurs in the operating system. 

Thread Scheduling: A thread's execution is planned according to its priority. Although threads are running within the runtime, the operating system nevertheless allots processor time slices to each thread. 

Time Stamping: A time stamp is a unique identifier created by the DBMS that indicates the start time of a transaction. For every transaction we carry out, it records and fixes the transaction's precise starting time.

SDLC: The software development life cycle (SDLC) is a process for producing high-quality software in the shortest possible time and at the lowest possible cost. 

Domain Resolution: Domain Resolution is a service that links a domain name to the IP address of a website space so that users of the registered domain name may quickly visit the website. A network location is identified by a site's IP address, which is a numerical address. The domain name is used to identify the web address rather than the IP address to aid memorization. The process of translating domain names into IP addresses is known as domain resolution. The DNS server is responsible for domain name resolution. 

Domain resolution is sometimes referred to as domain pointing, server setup, reverse IP registration, and other similar terms. 

Now, let’s see how domain resolution works: - 

  1. A DNS resolver receives a request from a web browser when a user types a domain name into the browser. 
  2. The DNS resolver translates the domain name into an IP address by querying a DNS server. 
  3. The DNS server searches its database for the IP address related to the domain name. If a match is found, it gives the DNS resolver the IP address. 
  4. If the DNS server does not have a record for the domain name, it queries other DNS servers until it finds one that does. 
  5. The DNS resolver gives the web browser the IP address it has received from the DNS server. 
  6. The IP address is then used by the web browser to connect to the web server hosting the website associated with the domain name. 
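The steps above can be sketched as a toy resolver (purely illustrative: the `RECORDS` dict stands in for authoritative DNS servers, the `cache` plays the resolver's role, and the addresses are example values, not live DNS data):

```python
# Toy model of domain resolution: check the resolver's cache first,
# then "query" the simulated DNS server's records.
RECORDS = {
    "example.com": "93.184.216.34",
    "example.org": "93.184.216.34",
}
cache = {}

def resolve(domain):
    """Return the IP address for a domain name."""
    if domain in cache:                  # resolver already knows the answer
        return cache[domain]
    ip = RECORDS.get(domain)             # ask the (simulated) DNS server
    if ip is None:                       # in reality: ask further servers
        raise LookupError(f"no record for {domain}")
    cache[domain] = ip                   # remember the answer for next time
    return ip

print(resolve("example.com"))  # 93.184.216.34
```

A real resolver would recursively query root, TLD, and authoritative name servers at step 4; the cache here shows why repeat lookups for the same domain are fast.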

3 Way Handshake:

This may alternatively be thought of as the process through which a TCP connection is made. 

TCP uses a technique known as Positive Acknowledgement with Retransmission (PAR) to guarantee reliable communication. The Protocol Data Unit (PDU) of the transport layer is called a segment. A device using PAR resends a data unit until it receives an acknowledgement. If the received data unit is damaged, the receiver discards the segment (it validates the data using the transport layer's checksum feature, which is used for error detection), and the sender must resend any data unit for which a positive acknowledgement was not received. From this procedure, it follows that three segments must be exchanged between the sender (client) and receiver (server) to establish a reliable TCP connection. 

Let’s see how this works

  • Step 1 (SYN): The client sends a segment with the SYN (Synchronize Sequence Number) flag set, informing the server that it wants to begin communication and announcing the initial sequence number it will use for its segments. 
  • Step 2 (SYN + ACK): The server responds with a SYN-ACK segment: the SYN carries the server's own initial sequence number, and the ACK acknowledges the segment it received from the client. 
  • Step 3 (ACK): The client acknowledges the server's response, and both sides establish a reliable connection and begin data transmission. 
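The three steps above can be modelled as a small simulation (illustrative only: the initial sequence numbers are arbitrary defaults, whereas real TCP stacks pick randomised ISNs):

```python
# Toy model of the TCP three-way handshake. Each tuple is
# (direction, flags, sequence number, acknowledgement number).
def three_way_handshake(client_isn=100, server_isn=300):
    transcript = []
    # Step 1 (SYN): client announces its initial sequence number.
    transcript.append(("client->server", "SYN", client_isn, None))
    # Step 2 (SYN+ACK): server announces its own ISN and acknowledges
    # the client's SYN by sending ack = client_isn + 1.
    transcript.append(("server->client", "SYN+ACK", server_isn, client_isn + 1))
    # Step 3 (ACK): client acknowledges the server's sequence number.
    transcript.append(("client->server", "ACK", client_isn + 1, server_isn + 1))
    return transcript

for segment in three_way_handshake():
    print(segment)
```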

Different Classes of Networks

  • Class A network: The first octet (value 1–126) denotes the network, and the remaining three octets denote the device within it. For example, in 10.0.113.112, "10" denotes the network and "0.113.112" denotes the device. 
  • Class B network: Everything before the second period denotes the network (first octet 128–191). For example, in 172.16.113.112, "172.16" denotes the network and "113.112" denotes the device connected to that network. 
  • Class C network: Everything before the third period denotes the network (first octet 192–223). For example, in 203.0.113.112, "203.0.113" denotes the Class C network and "112" denotes the device. 
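Since the class of an address is determined entirely by its first octet, the rule can be written as a small helper (a sketch of the historical classful scheme; modern networks use CIDR instead):

```python
# Classful IPv4 addressing: the first octet determines the class.
def ip_class(address):
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"      # network = first octet, host = last three
    if 128 <= first <= 191:
        return "B"      # network = first two octets
    if 192 <= first <= 223:
        return "C"      # network = first three octets
    return "other"      # loopback (127), multicast (D), or reserved (E)

print(ip_class("10.0.113.112"))    # A
print(ip_class("172.16.113.112"))  # B
print(ip_class("203.0.113.112"))   # C
```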


How to Prepare for a Computer Science Interview?

To prepare for a role in the Computer Science domain, here are a few things you need to be thorough with: 

  • Computer Fundamentals (OS, DBMS, OOPS, Networking) 
  • Data Structures and Algorithms
  • Have at least 2 major projects in your resume related to web dev or android or iOS or ML or anything like it. 
  • Low level system design. 
  • Aptitude and logical thinking.

Here are some Data Structures that you have to master: -

Array, Vector / ArrayList, Matrix, String, Linked List, Stack, Queue, Deque, Priority Queue, Binary Tree, BST, Heap, Graph, Set, Map, Multi Set, Multi Map, Hash Set, Hash Map, Pairs, Trie, Segment Tree, Fenwick Tree, Sparse Table.

Here are some Algorithms that you have to master: - 

Linear & Binary Search, Sorting, Swapping, Two Pointers, Three Pointers, Divide & Conquer, Sliding Window, Number Theory, Modulo Arithmetic, Prefix Sum, Greedy Algorithm, Recursion, Backtracking, Hashing, Dynamic Programming, Bit Manipulation, Kadane’s Algorithm, KMP Algorithm, Rabin Karp Algorithm, Boyer's Moore Algorithm, Z Algorithm, Brian Kernighan’s Algorithm, Sieve of Eratosthenes, Segmented Sieve, HCF & LCM Theorem, Master Theorem, Basic and Extended Euclidean Algorithm, BFS & DFS, Kruskal Algorithm, Prim's Algorithm, Dijkstra Algorithm, Bellman Ford Algorithm, Floyd Warshall Algorithm, Johnson’s Algorithm, Chinese Remainder Theorem, Wilson’s Theorem, Mo’s algorithm, Euler's Theorem, Combinatorics & Catalan Number, Pigeon Hole Principle, Inclusion Exclusion Principle

Here are some websites for practicing Data Structures and Algorithm: -

  • HackerRank 
  • HackerEarth 
  • Geeks for Geeks 
  • Top Coder 
  • CodeChef 
  • CodeForces 
  • LeetCode 
  • Interview Bit 
  • SPOJ

Now, it’s time to see some popular IDE for you to get started: -

  • Visual Studio Code 
  • IntelliJ IDEA 
  • Sublime Text 
  • PyCharm 
  • Atom 
  • Eclipse 
  • Code Blocks 
  • NetBeans 
  • Spyder 

Let’s see the various job roles and companies you can apply to after preparing these computer science interview questions. There are many good companies offering great job roles that require solid computer science knowledge.   

Job Roles

  1. Software Developer: A software developer is one who is responsible for designing, coding, testing, and maintaining software applications. 
  2. Full Stack Developer: A full stack developer is a software developer who has expertise in both the front-end and back-end development of web applications. 
  3. Backend Engineer: A backend engineer focuses on the server-side development of web applications. 
  4. Frontend Engineer: A frontend engineer focuses on the client-side development of web applications. 
  5. Android or iOS Developer: An Android or iOS developer is a software developer who specializes in developing applications for either the Android or iOS operating systems. 
  6. Application Engineer: An application engineer is responsible for developing, deploying, and maintaining software applications. 
  7. DevOps Engineer: A DevOps engineer is responsible for automating and improving the processes used in software development and deployment. 
  8. Support Engineer: A support engineer provides technical support to clients and users of software applications. 
  9. Senior Software Engineer: A senior software engineer is an experienced software developer who leads projects, provides mentorship to junior developers, and makes important technical decisions. 
  10. Software Engineer Intern: A software engineer intern is a junior software developer who is in the process of learning and developing their skills. They typically work on projects under the supervision of senior software engineers, and may assist with coding, testing, and other aspects of software development. 

Top Companies

  1. Amazon: Amazon is a multinational technology company based in Seattle, Washington. It was founded by Jeff Bezos in 1994, initially as an online bookstore. 
  2. Cisco: Cisco Systems, Inc. is an American multinational technology company headquartered in San Jose, California. It was founded in 1984 and is one of the largest networking companies in the world. 
  3. Amdocs: Amdocs is a multinational corporation that provides software, services, and solutions to communications, media, and entertainment service providers. It was founded in 1988 and is headquartered in Chesterfield, Missouri. 
  4. Meta: Meta is a technology company that provides virtual and augmented reality solutions and products. The company was founded in 2012 and is headquartered in San Mateo, California. 
  5. Microsoft: Microsoft is a multinational technology company that is headquartered in Redmond, Washington. It was founded in 1975 by Bill Gates and Paul Allen, and has since grown into one of the largest and most influential technology companies in the world. 
  6. Google: Google is an American multinational technology company that specializes in Internet-related services and products. It was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University. Google is best known for its search engine, which is one of the most popular and widely used websites in the world. 
  7. Apple: Apple Inc. is an American multinational technology company headquartered in Cupertino, California. It was founded in 1976 by Steve Jobs, Steve Wozniak, and Ronald Wayne. 
  8. PayPal: PayPal Holdings, Inc. is a global technology company that operates a popular digital payments platform. It was founded in December 1998 as Confinity and later became PayPal, Inc. 
  9. DE Shaw: D. E. Shaw & Co., L.P. is a global investment management firm that was founded in 1988 by David E. Shaw. The firm is headquartered in New York City and has additional offices in London, Hong Kong, and other cities around the world. 
  10. Salesforce: Salesforce is a customer relationship management (CRM) platform that provides businesses with a suite of cloud-based applications for managing customer information and interactions. It was founded in 1999 and is headquartered in San Francisco, California. 

To develop your skills in the field of Computer science, you could check out these Computer Science Courses

Top Computer Science Interview Preparation Tips

Be it preparing for basic computer science interview questions or technical interview questions computer science, you need hands-on effective tips to prepare for the interview. Here are some common tips to prepare for computer science interview questions: 

  • Solve Company tags question on any platform like LeetCode or GeeksforGeeks. 
  •  Practice Array, String, LinkedList, Tree, Graph, Recursion and DP based questions. 
  • Work on your logical and problem-solving skills. 
  • Participate in contests on Codechef and Codeforces. 
  • Work on your projects. 
  • Clear computer science fundamentals i.e., OS, DBMS, OOPS, CN. 
  • Set aside some time to prepare for aptitude and low-level design.

What to Expect in a Computer Science Interview?

In a computer science interview, you can expect the interviewer to ask technical questions such as data structures and algorithms problems and computer science fundamentals. You should be familiar with at least one object-oriented programming language like C++ or Java. You will likely have the opportunity to discuss your projects and the tech stack you have worked with, and you should be prepared for low-level system design questions, as these are sometimes asked too. Also be ready for behavioural and managerial questions along with aptitude, and review everything you have added to your resume.

The nature of the interview questions will depend largely on the company you are applying to. For instance, startups may value your development skills more than your DSA skills, while product-based companies like Google, Amazon, and Microsoft may place more weight on your DSA and problem-solving skills.

Good Luck with Your Interview!

So far, we have seen computer science fundamentals interview questions related to Operating Systems, Database Management Systems, Computer Networking and OOPs. Apart from technical interview questions, computer science and CS fundamentals interview questions, we have seen how to prepare for a computer science interview, covered some key tips and saw what the interviewer might expect from you. We have covered the most commonly asked basic computer science interview questions as well as advanced CS fundamentals interview questions.

Software engineering is one of the most in-demand and high-paying fields. If you love to solve problems, then coding could be the perfect fit for you.

Now if you want to get trained in programming, you may check out these Computer Programming training courses.

There are many profiles to which you can apply after the course like: 

  • Software Developer L2 
  • Software Developer L3 
  • Software Developer L4 
  • Full Stack Developer
  • Backend Engineer 
  • Frontend Engineer 
  • Android or iOS Developer. 

The pay can range from 5 lakh to 80 lakh per annum if you can answer the technical interview questions confidently; the average base pay for a Software Developer is around ₹7,00,000 per year. Some of the companies that hire for these roles are Google, Yahoo, Microsoft, Facebook, Adobe, Nokia, etc. 

If you are determined to ace your next interview as a Software Engineer, these common computer science interview questions along with interview questions for freshers and CS fundamentals interview questions will fast-track your career.  

Prepare well, go into the interview with confidence, and answer as many questions as you can.  

Crack your computer science interview with ease and confidence. Thanks for reading! 

Read More