Juggling multiple tasks at once? That's what multi-tasking operating systems do with processes. But to avoid data chaos, processes need traffic control. Process synchronization in OS ensures only one process uses shared data at a time, preventing conflicts. This is critical for processes that work together, and the operating system provides the tools to keep things running smoothly.
In this blog, we will learn what process synchronization in OS is and why synchronization is needed. And if you want in-depth knowledge about OSs to pursue a career, you can take our Web Development Course.
What is Process Synchronization in OS?
An operating system is software that manages all of the programs installed on a computer or other device, essentially making it run more efficiently. Because of this, the operating system frequently has to handle multiple processes at once. As long as these simultaneously running processes do not share a resource, this usually doesn't pose a synchronization problem in OS.
Take a bank as an example: suppose you have x rupees in your account at the beginning. Now you withdraw some money from your bank account, and at the same moment someone tries to look at the balance of your account.
Because you are taking money out, the balance remaining after the transaction will be less than x. However, since the transaction takes some time, the other person still reads x as your account balance, resulting in inconsistent data. If we could somehow make sure that only one process ran at a time, we could guarantee consistent data.
In the above image, if Process1 and Process2 occur simultaneously, User 2 will receive the incorrect account balance Y, because Process1 is still executing while the balance is X. When multiple processes share a single resource in a system, inconsistent data can result, which is why process synchronization is required in the operating system.
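The bank scenario can be simulated deterministically in a few lines. This is an illustrative sketch, not OS code: the function names and the amounts are made up, and the "interleaving" is forced by calling the balance check in the middle of the withdrawal.

```python
# Deterministic simulation of the bank scenario: the balance check runs
# in the middle of a withdrawal, so User 2 sees stale data.
balance = 1000  # x rupees at the beginning

def withdraw_begin(amount):
    # Process1 reads the balance and computes the new value...
    return balance - amount

def withdraw_commit(new_balance):
    # ...and only later writes it back.
    global balance
    balance = new_balance

pending = withdraw_begin(200)   # withdrawal in progress
seen_by_user2 = balance         # Process2 reads while Process1 is mid-transaction
withdraw_commit(pending)        # withdrawal completes

print(seen_by_user2)  # 1000 -- stale once the withdrawal commits
print(balance)        # 800
```

User 2 reads 1000 even though the true balance after the transaction is 800, which is exactly the inconsistency described above.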
Example of Process Synchronization in Operating System
Following are classic examples of process synchronization in operating systems:
1. Bounded Buffer Problem
A producer tries to insert data into an empty slot of the buffer, while a consumer tries to remove data from a filled slot. If these two processes run simultaneously without coordination, the output won't be what was anticipated. There must be a way to let the producer and consumer operate independently.
Solution:
The use of semaphores is one solution for this issue.
- mutex, a binary semaphore used to acquire and release the lock on the buffer.
- empty, a counting semaphore whose initial value is the number of slots in the buffer, since all slots are initially empty.
- full, a counting semaphore whose initial value is 0.
- At any moment, the current value of empty is the number of empty slots in the buffer, and the current value of full is the number of occupied slots.
The structure of the producer process
while (true) {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);
}
The structure of the consumer process
while (true) {
    wait(full);
    wait(mutex);
    ...
    /* remove an item from the buffer into next_consumed */
    ...
    signal(mutex);
    signal(empty);
    ...
    /* consume the item in next_consumed */
    ...
}
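The producer and consumer pseudocode above can be sketched as a runnable Python program. This is a minimal illustration, assuming threads stand in for OS processes; the buffer size and item count are arbitrary.

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()

mutex = threading.Semaphore(1)            # binary semaphore guarding the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots
full = threading.Semaphore(0)             # counts filled slots

consumed = []
N = 100

def producer():
    for item in range(N):
        empty.acquire()    # wait(empty): block while the buffer is full
        mutex.acquire()    # wait(mutex): lock the buffer
        buffer.append(item)
        mutex.release()    # signal(mutex)
        full.release()     # signal(full): one more filled slot

def consumer():
    for _ in range(N):
        full.acquire()     # wait(full): block while the buffer is empty
        mutex.acquire()    # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()    # signal(mutex)
        empty.release()    # signal(empty): one more empty slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(consumed == list(range(N)))  # True: each item consumed exactly once, in order
```

Note that empty and full do the flow control (the producer blocks on a full buffer, the consumer on an empty one), while mutex only protects the buffer itself.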
2. Readers-Writers Problem
Reader-writer conflicts in data management pose a challenge: prioritize readers and risk writer starvation, or prioritize writers and leave readers waiting. Reader-priority can lead to writers being perpetually blocked by a constant stream of readers, while writer-priority allows writers to jump the line but leaves readers waiting indefinitely.
Both approaches have limitations, requiring careful consideration for the specific needs of your program.
The structure of a writer process
while (true) {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
}
The structure of a reader process
while (true) {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
}
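The reader-priority scheme above can be sketched in Python. This is an illustrative translation, assuming threads in place of processes; shared_value and the iteration counts are made up for the demo.

```python
import threading

mutex = threading.Semaphore(1)     # protects read_count
rw_mutex = threading.Semaphore(1)  # held by a writer, or by the reader group
read_count = 0
shared_value = 0
observed = []

def writer(times):
    global shared_value
    for _ in range(times):
        rw_mutex.acquire()         # wait(rw_mutex)
        shared_value += 1          # writing is performed
        rw_mutex.release()         # signal(rw_mutex)

def reader(times):
    global read_count
    for _ in range(times):
        mutex.acquire()
        read_count += 1
        if read_count == 1:        # first reader locks out writers
            rw_mutex.acquire()
        mutex.release()
        observed.append(shared_value)  # reading is performed
        mutex.acquire()
        read_count -= 1
        if read_count == 0:        # last reader lets writers back in
            rw_mutex.release()
        mutex.release()

threads = [threading.Thread(target=writer, args=(50,)),
           threading.Thread(target=reader, args=(50,))]
for t in threads: t.start()
for t in threads: t.join()

print(shared_value)                         # 50
print(all(0 <= v <= 50 for v in observed))  # True: every read saw a consistent value
```

Because only the first reader acquires rw_mutex and only the last releases it, any number of readers may overlap, but a writer always has the data to itself.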
What is Race Condition?
A race condition is a situation in which the behavior of software or a system depends on the timing or ordering of uncontrollable events, such as the scheduling of processes or threads. The result is unpredictable, rendering the system unreliable.
Key Points:
- It occurs when two or more threads can access shared data and try to change it simultaneously.
- The final outcome depends on the timing of the execution.
- This is common in multi-threaded or multi-process environments.
- This could result in severe problems, including data corruption, system crashes, and security vulnerabilities.
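The points above can be demonstrated with a shared counter. `counter += 1` is a read-modify-write sequence, so without a lock a thread switch between the read and the write can lose an update; the lock below makes the result deterministic. This is an illustrative sketch using Python threads, with arbitrary iteration counts.

```python
import threading

counter = 0
lock = threading.Lock()
N = 100_000

def increment():
    global counter
    for _ in range(N):
        with lock:       # remove this lock and updates may be lost to the race
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(counter)  # 200000, every time
```

With the lock removed, the final count depends on how the scheduler happens to interleave the two threads, which is exactly the unpredictability a race condition introduces.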
What is Critical Section Problem?
The critical section problem is a classical problem in concurrent programming: how to design a protocol that lets several processes access and manipulate shared data without conflicts.
Key Points:
- Critical Section: The section of code in which shared resources are accessed.
- Mutual Exclusion: Only one process is allowed inside the critical section at a given moment.
- Progress: If no process is in the critical section, any process that requests entry to its critical section must be allowed to proceed.
- Bounded Waiting: There must be a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry and before that request is granted.
What is Peterson’s Solution?
Peterson's Solution is a software-based protocol for solving mutual exclusion problems. It allows two processes to share the same resource without causing conflict. It is an algorithm that illustrates how mutual exclusion may be achieved.
Key Points:
- Uses two shared variables: flag and turn.
- Each process sets its flag to indicate it wants to enter the critical section.
- The turn variable helps decide which process should be allowed to enter the critical section if both processes desire to do so simultaneously.
- Ensures mutual exclusion, progress, and bounded waiting.
- It is designed for two processes; however, if modifications are made, it can be extended to more processes.
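The two-process protocol described above can be written out directly. This is a sketch using Python threads with arbitrary iteration counts; note that on real hardware Peterson's algorithm additionally needs memory barriers, and it behaves correctly here because CPython executes bytecode sequentially under the GIL.

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # whose turn it is to wait
counter = 0            # shared data protected by the algorithm
N = 10_000

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(N):
        flag[me] = True        # declare intent to enter the critical section
        turn = other           # politely yield priority to the other process
        while flag[other] and turn == other:
            pass               # busy-wait while the other is in (or entering) it
        counter += 1           # critical section: exactly one thread at a time
        flag[me] = False       # exit section: withdraw intent

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()

print(counter)  # 20000: no increment is ever lost
```

The turn variable is what breaks ties: if both set their flags at once, only the process whose turn it is not gets to proceed, which gives mutual exclusion, progress, and bounded waiting.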
Why Is Process Synchronization Important in an Operating System?
- Mutual exclusion: No other process should be permitted to run in the critical section while the current process is there.
- Progress: If no process is executing in the critical section and other processes are waiting to enter it, one of the waiting processes must be allowed to proceed. Only processes that are not executing in their remainder sections can take part in deciding which process enters next.
- No starvation: A starved process waits interminably to access the critical section but is never given the opportunity. No starvation is also known as bounded waiting: a process shouldn't have to wait forever to enter its critical section. There should be a bound on how many other processes are permitted to enter the critical section after a process has requested access, and once this bound is reached, the waiting process must be admitted.
How Does Process Synchronization in OS Work?
Let's examine precisely why process synchronization in OS is necessary. For example, if Process2 attempts to change the data at a memory location while Process1 is reading it, there is a good chance that the data read by Process1 will be incorrect.
Essential Sections of a Program
Let's examine various components/sections of a program:
- Entry Section: The part of the code in which a process requests permission to enter its critical section.
- Critical Section: The part that accesses shared data; it ensures that only one process modifies the shared data at a time.
- Exit Section: After one process has finished with the shared data, the exit section handles the entry of other processes.
- Remainder Section: The rest of the code, which does not fall into the categories above.
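The four sections above map directly onto code. A minimal sketch, assuming a lock as the entry/exit mechanism and an illustrative shared list; the function and variable names are made up for the example.

```python
import threading

lock = threading.Lock()
shared_data = []

def process(item):
    # Entry section: request permission to enter the critical section
    lock.acquire()
    try:
        # Critical section: only one thread touches shared_data at a time
        shared_data.append(item)
    finally:
        # Exit section: release the lock so another process may enter
        lock.release()
    # Remainder section: everything else the process does
    return len(shared_data)

threads = [threading.Thread(target=process, args=(i,)) for i in range(10)]
for t in threads: t.start()
for t in threads: t.join()

print(sorted(shared_data))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The try/finally around the critical section mirrors the exit section's job: even if the critical section fails, the lock is released so other processes are not blocked forever.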
If the idea of creating apps and websites with user-friendly interfaces intrigues you, you can go for a Full Stack Developer Course to learn how to build, implement, secure, and manage programs and build proficiency across the business logic, user interface, and database stacks.
Looking to enhance your programming skills? Get certified in Python programming online and unlock endless possibilities. Join now!
Types of Process Synchronization in Operating System
The following are the types of process synchronization in operating system:
1. Independent Processes
When a process executes without affecting or being affected by any other process, it is referred to as an independent process. For example, a process that does not share any databases, files, or variables with other processes.
2. Cooperative Processes
In a computer system, processes can run as either independent or cooperating processes. A process is independent when it neither affects nor is affected by other processes running on the system, and independent processes do not share data. A cooperating process, on the other hand, can affect or be affected by any other process running on the system, because cooperating processes share data.
Advantages & Disadvantages of Process Synchronization in OS
Advantages:
- Consistency and reliability: Ensures data consistency and reliability by not allowing race conditions.
- Deadlock Prevention: Proper synchronization can prevent a deadlock, where processes wait indefinitely.
- Resource Management: Manages resource sharing among concurrently running processes efficiently.
Disadvantages:
- Complexity: This causes the code to be complex, making it challenging to develop and debug.
- Synchronization overhead: Synchronization mechanisms introduce overhead in performance because of context-switching and waiting times involved.
- Deadlock Risks: Deadlocks and starvation might occur with incorrect synchronization.
- Scalability issues: Synchronization mechanisms may not scale well when the number of processes or threads is large.
By addressing these topics, you can better understand the importance of synchronization in concurrent programming and how different mechanisms help in managing shared resources effectively.
Conclusion
Coordinating the execution of processes so that no two processes access the same shared resources and data at the same time is known as process synchronization. We hope you found this blog helpful in understanding process synchronization in OS, what synchronization in an operating system is, and other related aspects.
If you are interested in learning more about operating systems and how they work, you can go for KnowledgeHut’s Web Development course. There, you will find everything you need to know about developing apps for different platforms and acquiring top tech skills like React, Node.js, Full-Stack, JavaScript, etc.