Operating System Questions and Answers
Q1)A). Assume that a secondary storage device contains 5 pages (PgN0, PgN1, PgN2, PgN3 and PgN4) of a process P, in which each page has a size of 5 bytes. Each page has 5 instructions and each instruction has a size of 1 byte. To execute the instructions, the pages are allocated to the memory frames 5, 6, 3, 2 and 1 respectively. The frame numbers in the memory are FN0, FN1, FN2, FN3 and FN4, and the base address of FN0 is 5000. Identify the logical and physical addresses of the 5th instruction in PgN1 and the 3rd instruction of PgN2.
Ans: With paging, an instruction's logical address is (page number × page size) + offset, and its physical address is the base address of the frame holding that page plus the same offset. Here the page size and frame size are 5 bytes, each instruction is 1 byte, and pages PgN0–PgN4 are loaded into frames 5, 6, 3, 2 and 1 respectively. Taking the base address of frame FN0 as 5000, frame f begins at physical address 5000 + (f × 5).
For the 5th instruction in PgN1:
- Page number = 1, offset = 5 – 1 = 4
- Logical address = (1 × 5) + 4 = 9
- PgN1 is loaded into frame 6, whose base address = 5000 + (6 × 5) = 5030
- Physical address = 5030 + 4 = 5034
For the 3rd instruction in PgN2:
- Page number = 2, offset = 3 – 1 = 2
- Logical address = (2 × 5) + 2 = 12
- PgN2 is loaded into frame 3, whose base address = 5000 + (3 × 5) = 5015
- Physical address = 5015 + 2 = 5017
So, the logical and physical addresses are as follows:
For the 5th instruction in PgN1:
- Logical Address: 9
- Physical Address: 5034
For the 3rd instruction in PgN2:
- Logical Address: 12
- Physical Address: 5017
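The same translation can be written in a few lines of C. This is only a minimal sketch: the page table below hard-codes the mapping assumed above (PgN0–PgN4 in frames 5, 6, 3, 2 and 1) and the base address 5000.

#include <stdio.h>

#define PAGE_SIZE   5      /* bytes per page (and per frame) */
#define FRAME0_BASE 5000   /* physical address where frame FN0 begins */

/* Page table assumed from the question: PgN0..PgN4 -> frames 5, 6, 3, 2, 1 */
static const int page_table[5] = { 5, 6, 3, 2, 1 };

/* Translate (page, offset) to a logical and a physical address. */
static void translate(int page, int offset) {
    int logical  = page * PAGE_SIZE + offset;
    int frame    = page_table[page];
    int physical = FRAME0_BASE + frame * PAGE_SIZE + offset;
    printf("PgN%d, offset %d -> logical %d, physical %d\n",
           page, offset, logical, physical);
}

int main(void) {
    translate(1, 4);   /* 5th instruction of PgN1 -> logical 9, physical 5034 */
    translate(2, 2);   /* 3rd instruction of PgN2 -> logical 12, physical 5017 */
    return 0;
}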
Q1.B) Consider six memory partitions of size 300 KB, 500 KB, 600 KB, 550 KB, 450 KB and 250 KB. These partitions need to be allocated to four processes of sizes 360 KB, 410 KB, 570 KB and 520 KB in that order. (7 M) Perform the allocation of processes using: i. First Fit Algorithm ii. Best Fit Algorithm iii. Worst Fit Algorithm
Ans: Let’s perform the allocation of processes using the First Fit, Best Fit, and Worst Fit allocation algorithms for the given memory partitions and process sizes.
Memory Partitions (in KB):
- 300 KB
- 500 KB
- 600 KB
- 550 KB
- 450 KB
- 250 KB
Processes (in KB) in order of arrival:
- 360 KB
- 410 KB
- 570 KB
- 520 KB
i. First Fit Algorithm: In First Fit, each process is placed in the first partition (scanning from the top) that is large enough to hold it.
Process 1 (360 KB) → Allocate to Partition 2 (500 KB). Remaining partitions:
- 300 KB
- 140 KB (500 KB – 360 KB)
- 600 KB
- 550 KB
- 450 KB
- 250 KB
Process 2 (410 KB) → Allocate to Partition 3 (600 KB). Remaining partitions:
- 300 KB
- 140 KB
- 190 KB (600 KB – 410 KB)
- 550 KB
- 450 KB
- 250 KB
Process 3 (570 KB) → Cannot be allocated, since no remaining partition is 570 KB or larger; it must wait.
Process 4 (520 KB) → Allocate to Partition 4 (550 KB). Remaining partitions:
- 300 KB
- 140 KB
- 190 KB
- 30 KB (550 KB – 520 KB)
- 450 KB
- 250 KB
ii. Best Fit Algorithm: In Best Fit, each process is placed in the smallest partition that is still large enough to hold it.
Process 1 (360 KB) → Allocate to Partition 5 (450 KB, the smallest that fits). Remaining partitions:
- 300 KB
- 500 KB
- 600 KB
- 550 KB
- 90 KB (450 KB – 360 KB)
- 250 KB
Process 2 (410 KB) → Allocate to Partition 2 (500 KB). Remaining partitions:
- 300 KB
- 90 KB (500 KB – 410 KB)
- 600 KB
- 550 KB
- 90 KB
- 250 KB
Process 3 (570 KB) → Allocate to Partition 3 (600 KB). Remaining partitions:
- 300 KB
- 90 KB
- 30 KB (600 KB – 570 KB)
- 550 KB
- 90 KB
- 250 KB
Process 4 (520 KB) → Allocate to Partition 4 (550 KB). Remaining partitions:
- 300 KB
- 90 KB
- 30 KB
- 30 KB (550 KB – 520 KB)
- 90 KB
- 250 KB
Best Fit allocates all four processes.
iii. Worst Fit Algorithm: In Worst Fit, each process is placed in the largest available partition.
Process 1 (360 KB) → Allocate to Partition 3 (600 KB, the largest). Remaining partitions:
- 300 KB
- 500 KB
- 240 KB (600 KB – 360 KB)
- 550 KB
- 450 KB
- 250 KB
Process 2 (410 KB) → Allocate to Partition 4 (550 KB, now the largest). Remaining partitions:
- 300 KB
- 500 KB
- 240 KB
- 140 KB (550 KB – 410 KB)
- 450 KB
- 250 KB
Process 3 (570 KB) → Cannot be allocated, since the largest remaining partition is only 500 KB; it must wait.
Process 4 (520 KB) → Cannot be allocated either, for the same reason.
Summary: Best Fit allocates all four processes; First Fit leaves the 570 KB process unallocated; Worst Fit leaves both the 570 KB and the 520 KB processes unallocated. The leftover space inside each allocated partition (for example 30 KB, 90 KB or 140 KB) is internal fragmentation, so Best Fit is the most space-efficient choice for this workload.
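The three strategies differ only in how a partition is chosen, which a short C sketch can make concrete. The partition and process sizes from the question are hard-coded; this is a minimal illustration of the selection rules.

#include <stdio.h>

#define NPART 6
#define NPROC 4

/* Return the index of the chosen partition, or -1 if none fits.
   mode: 0 = first fit, 1 = best fit, 2 = worst fit. */
static int pick(const int part[], int size, int mode) {
    int chosen = -1;
    for (int i = 0; i < NPART; i++) {
        if (part[i] < size) continue;                 /* too small */
        if (chosen == -1 ||
            (mode == 1 && part[i] < part[chosen]) ||  /* best: smallest fit  */
            (mode == 2 && part[i] > part[chosen]))    /* worst: largest fit  */
            chosen = i;
        if (mode == 0) break;                         /* first fit: stop at first match */
    }
    return chosen;
}

int main(void) {
    const char *names[] = { "First Fit", "Best Fit", "Worst Fit" };
    int procs[NPROC] = { 360, 410, 570, 520 };

    for (int mode = 0; mode < 3; mode++) {
        int part[NPART] = { 300, 500, 600, 550, 450, 250 };
        printf("%s:\n", names[mode]);
        for (int p = 0; p < NPROC; p++) {
            int i = pick(part, procs[p], mode);
            if (i >= 0) {
                part[i] -= procs[p];
                printf("  %d KB -> partition %d (leftover %d KB)\n",
                       procs[p], i + 1, part[i]);
            } else {
                printf("  %d KB -> not allocated\n", procs[p]);
            }
        }
    }
    return 0;
}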
Q2)A)Suppose that a system is in an unsafe state. Show that it is possible for the threads to complete their execution without entering a deadlocked state.
Ans: In a computer system, an unsafe state is a situation where multiple threads or processes are contending for resources in such a way that it is possible for a deadlock to occur. However, it is important to understand that not every unsafe state will necessarily result in a deadlock. There are scenarios in which threads can still complete their execution without entering a deadlock. This can happen due to various factors, including resource allocation, thread scheduling, and the specific timing of resource requests and releases. Let’s explore a simple example to illustrate this.
Suppose you have a system with two resources (e.g., printers) and three threads (T1, T2, and T3) that need to access these resources to complete their tasks. Each thread follows the standard resource allocation protocol of requesting resources before execution and releasing them afterward.
Here’s a sequence of events that can occur in an unsafe state without resulting in a deadlock:
- Thread T1 requests resource R1.
- Thread T2 requests resource R2.
- Thread T1 is granted resource R1 and continues execution.
- Thread T3 requests resource R1 and must wait, because T1 currently holds it. At this point the state is unsafe in the sense that the system cannot guarantee in advance that every thread will finish; whether they do depends on how the resources are released.
- Thread T1 completes its execution and releases resource R1.
- Thread T3 is granted resource R1 and continues execution.
- Thread T2 completes its execution and releases resource R2.
- Thread T3 completes its execution and releases resource R1.
In this scenario, even though the system was in an unsafe state at one point when T3 requested a resource that was already held by T1, it didn’t result in a deadlock. This is because of the following key factors:
- Resource Allocation: The system was able to allocate and release resources dynamically, allowing threads to make progress.
- Order of Resource Acquisition: The order in which threads requested resources and were granted access (T1, T2, T3) played a crucial role in preventing a deadlock.
- Timely Resource Release: Threads released resources as soon as they finished their tasks, making resources available for other threads to use.
It’s important to note that while it is possible for threads to complete their execution without entering a deadlock in an unsafe state, this is not a guaranteed outcome. Deadlocks are a complex problem, and their occurrence depends on the specific timing and interleaving of resource requests and releases. To ensure robust and deadlock-free systems, proper synchronization mechanisms, such as semaphores or mutexes, should be employed to manage resource access and prevent potential deadlocks.
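The scenario above can be sketched in a few lines of pthreads C. The thread functions and the use of mutexes to stand in for R1 and R2 are illustrative assumptions; the point is simply that T3 blocks while T1 holds R1 and proceeds once it is released, so no deadlock arises.

/* compile with: cc -pthread example.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;  /* stands in for resource R1 */
static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;  /* stands in for resource R2 */

static void *t1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&R1);      /* T1 acquires R1 */
    sleep(1);                     /* ... does its work ... */
    pthread_mutex_unlock(&R1);    /* timely release lets a waiting thread proceed */
    return NULL;
}

static void *t2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&R2);      /* T2 acquires R2 */
    sleep(1);
    pthread_mutex_unlock(&R2);
    return NULL;
}

static void *t3(void *arg) {
    (void)arg;
    /* Whichever of T1 or T3 locks R1 first, the other simply waits;
       no thread ever holds one resource while waiting for another,
       so a deadlock cannot form. */
    pthread_mutex_lock(&R1);
    printf("T3 acquired R1 - no deadlock\n");
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void) {
    pthread_t a, b, c;
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_create(&c, NULL, t3, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    pthread_join(c, NULL);
    return 0;
}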
Q2)B) Assume that a system has a single processor to process multiple processes, but only one process at a particular time. There are 6 processes (P1, P2, P3, P4, P5 and P6) waiting in the Ready queue to execute. Their arrival times and burst times (in milliseconds) are tabulated as follows (7 M):
Process Id   Arrival Time (ms)   Burst Time (ms)
P1           6                   4
P2           5                   5
P3           2                   2
P4           7                   1
P5           10                  12
P6           8                   10
a) Prove that the Shortest Job Next method is optimal over the Round Robin scheduling algorithm with a quantum time of 2 ms. b) Find the average turnaround time for both process scheduling algorithms.
Ans: Let's schedule the given processes with both Shortest Job Next (SJN, i.e. non-preemptive SJF) and Round Robin (RR) with a 2 ms quantum, and compare the average turnaround times. Turnaround time = Completion time – Arrival time.
a) Shortest Job Next (SJN) vs. Round Robin (RR):
- SJN Scheduling:
The CPU is idle until the first arrival. Whenever the CPU becomes free, the ready process with the shortest burst time is selected and run to completion.
Execution order: idle (0–2), P3 (2–4), idle (4–5), P2 (5–10), P4 (10–11), P1 (11–15), P6 (15–25), P5 (25–37)
Turnaround times: P3 = 4 – 2 = 2, P2 = 10 – 5 = 5, P4 = 11 – 7 = 4, P1 = 15 – 6 = 9, P6 = 25 – 8 = 17, P5 = 37 – 10 = 27
- Round Robin (RR) Scheduling (quantum = 2 ms; newly arriving processes enter the ready queue ahead of a process whose quantum has just expired):
Execution order: idle (0–2), P3 (2–4), idle (4–5), P2 (5–7), P1 (7–9), P4 (9–10), P2 (10–12), P6 (12–14), P1 (14–16), P5 (16–18), P2 (18–19), P6 (19–21), P5 (21–23), P6 (23–25), P5 (25–27), P6 (27–29), P5 (29–31), P6 (31–33), P5 (33–35), P5 (35–37)
Completion times: P3 = 4, P4 = 10, P1 = 16, P2 = 19, P6 = 33, P5 = 37
Turnaround times: P3 = 2, P4 = 3, P1 = 10, P2 = 14, P6 = 25, P5 = 27
b) Average Turnaround Time:
Average turnaround time for SJN = (2 + 5 + 4 + 9 + 17 + 27) / 6 = 64 / 6 ≈ 10.67 ms
Average turnaround time for RR (q = 2 ms) = (2 + 3 + 10 + 14 + 25 + 27) / 6 = 81 / 6 = 13.5 ms
Comparing the average turnaround times:
- SJN Scheduling: ≈ 10.67 ms
- RR Scheduling: 13.5 ms
SJN gives the lower average turnaround time, which shows that Shortest Job Next is optimal compared with Round Robin (quantum = 2 ms) for this workload. In general, SJN is provably optimal with respect to average waiting and turnaround time, whereas Round Robin sacrifices some of that in exchange for better response time.
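A short C sketch of the SJN calculation above, with the question's arrival and burst times hard-coded: each time the CPU becomes free, it picks, among the processes that have already arrived, the one with the smallest burst time.

#include <stdio.h>

#define N 6

int main(void) {
    int at[N] = { 6, 5, 2, 7, 10, 8 };   /* arrival times of P1..P6 */
    int bt[N] = { 4, 5, 2, 1, 12, 10 };  /* burst times of P1..P6   */
    int done[N] = { 0 }, tat_sum = 0, t = 0;

    for (int finished = 0; finished < N; ) {
        int pick = -1;
        /* choose the arrived, unfinished process with the shortest burst */
        for (int i = 0; i < N; i++)
            if (!done[i] && at[i] <= t && (pick == -1 || bt[i] < bt[pick]))
                pick = i;
        if (pick == -1) { t++; continue; }   /* CPU idle: nothing has arrived yet */

        t += bt[pick];                        /* run it to completion (non-preemptive) */
        int tat = t - at[pick];               /* turnaround = completion - arrival */
        printf("P%d: completes at %2d, turnaround %2d\n", pick + 1, t, tat);
        tat_sum += tat;
        done[pick] = 1;
        finished++;
    }
    printf("Average turnaround time = %.2f ms\n", (double)tat_sum / N);
    return 0;
}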
Q3)A) Assume that there are 6 processes (P1, P2, P3, P4, P5 and P6) which require the resources R1, R2 and R3 to complete their tasks. The following table represents their requirement and the allocation made currently: Process # | Max Requirement (A B C) | Allocated (A B C) | Available Now (A B C): P1 4 2 3 2 1 2 2 2 2, P2 5 2 4 4 2, P3 7 2 3 6 3, P4 8 4 3 5 3 1, P5 5 12 5 3 12 3 3, P6 10 7 5. Apply the BANKER'S Algorithm to check whether there is any possibility of deadlock for the data given above.
Ans: To determine whether there is any possibility of deadlock using the Banker’s Algorithm, we need to perform the following steps:
- Calculate the Need matrix.
- Check if the system is in a safe state.
- Check if a request for resources can be granted without leading to an unsafe state.
Let's start with step 1 by calculating the Need matrix:
Need = Max – Allocated
For example, for P1: Need = (4, 2, 3) – (2, 1, 2) = (2, 1, 1). The Need row of every other process is obtained in the same way from its Max and Allocated values.
The Available vector is the "Available Now" column given in the table: Available = (2, 2, 2).
Now, we will check if the system is in a safe state. To do this, we will use the Banker’s Algorithm:
Step 1: Initialize work = Available, finish[i] = false for all processes.
Step 2: Find an i such that both: a) finish[i] == false b) Need[i] <= work
If such an i exists, go to Step 3. If no such i exists, the system is not in a safe state, and there is a possibility of deadlock.
Step 3: Add the allocated resources of process P[i] to work, i.e., work = work + Allocated[i], and mark process P[i] as finished, i.e., finish[i] = true.
Repeat Steps 2 and 3 until all processes are marked as finished or until no such i exists in Step 2.
Let's apply the Banker's safety algorithm to the given data:
- Initially, work = Available = (2, 2, 2) and finish[i] = false for every process.
- Find a process whose Need is less than or equal to work. That process can obtain everything it still needs, run to completion and return its entire allocation, so work = work + Allocated[i] and finish[i] = true.
- Repeating this scan, the processes can be satisfied one after another (P3, P4, P5 and P6 first, and then P1 and P2 with the enlarged work vector); work only ever grows, so later processes become easier to satisfy.
Since every process can be marked as finished, a safe sequence exists and the system is in a safe state.
Therefore, there is no possibility of deadlock with the given resource allocation and maximum requirements.
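The safety test itself is mechanical and easy to code. The sketch below uses a classic 5-process, 3-resource data set purely to illustrate the loop (it is not the exam table); substituting the question's Max, Allocated and Available values gives the check for this problem.

#include <stdio.h>
#include <stdbool.h>

#define P 5   /* number of processes      */
#define R 3   /* number of resource types */

int main(void) {
    /* Illustrative textbook data, not the exam table. */
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[R]    = {3,3,2};

    int need[P][R], work[R];
    bool finish[P] = { false };

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocated */
    for (int j = 0; j < R; j++) work[j] = avail[j];

    printf("Safe sequence:");
    for (int count = 0; count < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (!fits) continue;
            for (int j = 0; j < R; j++)            /* process i can finish and */
                work[j] += alloc[i][j];            /* return its allocation    */
            finish[i] = true;
            progressed = true;
            count++;
            printf(" P%d", i + 1);
        }
        if (!progressed) {                         /* no process could proceed */
            printf("\nSystem is NOT in a safe state (deadlock possible)\n");
            return 0;
        }
    }
    printf("\nSystem is in a safe state - no deadlock\n");
    return 0;
}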
Q3)B) Let P1, P2 and P3 be processes and R1 and R2 be two types of resources, with only one instance of each resource available for allocation. Consider the following information, draw the RAG (Resource Allocation Graph) and WFG (Wait-for Graph), and check whether there would be any deadlock:
(i) P1 is using R1 now
(ii) P2 holds R2 but waits for R1
(iii) P3 waits for the resources R1 as well as R2.
Ans: Let's first draw the Resource Allocation Graph (RAG) and the Wait-for Graph (WFG) based on the given information:
1. Resource Allocation Graph (RAG):
– P1 holds R1, so there is an assignment edge from R1 to P1.
– P2 holds R2 and waits for R1, so there is an assignment edge from R2 to P2 and a request edge from P2 to R1.
– P3 waits for both R1 and R2, so there are request edges from P3 to R1 and from P3 to R2.
Edges of the RAG:
R1 → P1
R2 → P2
P2 → R1
P3 → R1
P3 → R2
2. Wait-for Graph (WFG):
In the Wait-for Graph we keep only the processes; a directed edge runs from a waiting process to the process that currently holds the resource it wants.
– P2 waits for R1, which P1 holds, so there is an edge from P2 to P1.
– P3 waits for R1 (held by P1) and for R2 (held by P2), so there are edges from P3 to P1 and from P3 to P2.
Edges of the WFG:
P2 → P1
P3 → P1
P3 → P2
Now, let's analyse the graphs:
Since every resource has a single instance, a cycle in the RAG (equivalently, a cycle in the WFG) would mean deadlock. Here P1 is not waiting for any resource, so it has no outgoing edge, and no cycle can be formed: the edges P2 → P1, P3 → P1 and P3 → P2 all end at processes that are not, in turn, waiting on P2 or P3.
Therefore, there is no deadlock in the given scenario. P1 can finish and release R1; P2 then acquires R1, finishes, and releases R1 and R2; finally P3 obtains both resources and completes.
Deadlock-prevention techniques such as resource ordering, timeouts or preemption would only be needed if a cycle could form, for example if P1 additionally requested R2 while P2 still held it.
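Deadlock detection on a wait-for graph is just cycle detection. Below is a minimal C sketch for the three edges derived above, using an adjacency matrix and a depth-first search.

#include <stdio.h>

#define N 3   /* processes P1, P2, P3 */

/* wfg[i][j] = 1 means P(i+1) waits for P(j+1) (edges from the answer above). */
static int wfg[N][N] = {
    {0, 0, 0},   /* P1 waits for nobody  */
    {1, 0, 0},   /* P2 -> P1             */
    {1, 1, 0},   /* P3 -> P1, P3 -> P2   */
};

static int state[N];  /* 0 = unvisited, 1 = on current DFS path, 2 = done */

static int has_cycle(int u) {
    state[u] = 1;                       /* u is on the current path */
    for (int v = 0; v < N; v++) {
        if (!wfg[u][v]) continue;
        if (state[v] == 1) return 1;    /* back edge -> cycle -> deadlock */
        if (state[v] == 0 && has_cycle(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    for (int i = 0; i < N; i++)
        if (state[i] == 0 && has_cycle(i)) {
            printf("Cycle found in WFG: deadlock\n");
            return 0;
        }
    printf("No cycle in WFG: no deadlock\n");
    return 0;
}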
Q4)A)Differentiate Symmetric and Asymmetric Clustering systems
Ans: Clustered systems link two or more individual machines (nodes) that share storage and work together, usually to provide high availability. Symmetric and asymmetric clustering differ in how the nodes share the workload:
- Asymmetric Clustering:
- One node runs the applications while another node stays in hot-standby mode; the standby does nothing except monitor the active server.
- If the active server fails, the hot-standby node takes over as the active server.
- Hardware utilisation is lower, because the standby machine sits idle during normal operation.
- It is simpler to configure, since only failover has to be managed, not load sharing.
- Symmetric Clustering:
- Two or more nodes run applications simultaneously and monitor each other.
- If one node fails, its workload is taken over by the surviving nodes.
- Hardware utilisation is better, because every node does useful work all the time.
- It is more complex to manage, since the application load must be distributed and the nodes must coordinate access to shared data (for example through shared storage and a distributed lock manager).
In summary, asymmetric clustering dedicates a standby node purely to failover, while symmetric clustering uses all nodes for both processing and mutual monitoring, trading extra configuration complexity for better utilisation. The choice depends on the availability and cost requirements of the system being designed.
Q4)B) Consider the Dining Philosophers problem in which there are 5 philosophers (Ph0, Ph1, Ph2, Ph3 and Ph4). Apply the MONITOR synchronization tool for the following two instances: i) When all the philosophers are in the THINKING state initially, prove that any one of the philosophers is permitted to eat. ii) When the philosophers Ph2 and Ph4 are eating, prove that the philosopher Ph1 is not permitted to eat.
Ans: The Dining Philosophers problem is a classic synchronization problem used to illustrate issues that can arise in concurrent systems. To address this problem using a MONITOR synchronization tool, let’s define a basic structure for the monitor and then analyze the two instances:
Monitor DiningPhilosophers {
    bool isEating[5];        // isEating[i] is true while philosopher Phi is eating
    condition self;          // philosophers wait here when a neighbour is eating

    procedure PickUp(int i) {
        // A philosopher may start eating only if neither neighbour is eating.
        while (isEating[(i + 4) % 5] || isEating[(i + 1) % 5])
            self.wait();     // re-check the condition after every wake-up
        isEating[i] = true;
    }

    procedure PutDown(int i) {
        isEating[i] = false;
        self.broadcast();    // wake waiting neighbours so they can re-test
    }
}
In this monitor, the array isEating records whether each philosopher is currently eating, and PickUp and PutDown manage access to the forks. A philosopher waits while either neighbour is eating, which guarantees that two adjacent philosophers never eat at the same time.
Now, let's analyse the two instances:
i) When all the philosophers are in the THINKING state initially:
Since everyone is thinking, every entry of isEating is false. If any philosopher, say Ph1, calls PickUp(1), the loop condition checks isEating[0] and isEating[2]; both are false, so Ph1 does not wait, sets isEating[1] to true, and starts eating. The same argument applies to any other philosopher, so any one of the philosophers is permitted to eat.
ii) When the philosophers Ph2 and Ph4 are eating:
In this case isEating[2] and isEating[4] are true. If Ph1 now calls PickUp(1), the condition checks its neighbours: isEating[0] is false (Ph0 is not eating), but isEating[2] is true, because Ph2, with whom Ph1 shares a fork, is eating. The while condition therefore holds and Ph1 executes self.wait(). Ph1 is not permitted to eat until Ph2 finishes and calls PutDown(2), which resets isEating[2] and wakes the waiting philosophers so they can re-test the condition.
So, in the second instance, Ph1 is blocked while Ph2 and Ph4 are eating, exactly as required by the synchronization rules enforced by the monitor.
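C has no built-in monitor construct, but the same behaviour can be sketched with a mutex and a condition variable standing in for the monitor lock and its condition queue. This is a sketch mirroring the pseudocode above, not a full philosopher program; function names are kept the same.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's lock  */
static pthread_cond_t  self         = PTHREAD_COND_INITIALIZER;   /* its condition queue */
static bool isEating[5];

void PickUp(int i) {
    pthread_mutex_lock(&monitor_lock);
    /* Wait while either neighbour is eating; re-check after every wake-up. */
    while (isEating[(i + 4) % 5] || isEating[(i + 1) % 5])
        pthread_cond_wait(&self, &monitor_lock);
    isEating[i] = true;
    pthread_mutex_unlock(&monitor_lock);
}

void PutDown(int i) {
    pthread_mutex_lock(&monitor_lock);
    isEating[i] = false;
    pthread_cond_broadcast(&self);   /* wake waiting neighbours to re-test */
    pthread_mutex_unlock(&monitor_lock);
}

/* A philosopher thread would loop: think, PickUp(i), eat, PutDown(i). */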
Q5.A) Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 97, 4, 50, 89, 52, 61, 87, 25. The head pointer starts at track 72. Find the number of head movements in cylinders using i) FCFS scheduling ii) SCAN iii) SSTF.
Ans:
i) FCFS (First-Come-First-Serve) Scheduling:
- The head starts at track 72 and services the requests strictly in the given order: 45, 21, 67, 97, 4, 50, 89, 52, 61, 87, 25.
- The total head movement is the sum of the absolute differences between successive tracks visited.
- Head movements: |72-45| + |45-21| + |21-67| + |67-97| + |97-4| + |4-50| + |50-89| + |89-52| + |52-61| + |61-87| + |87-25| = 27 + 24 + 46 + 30 + 93 + 46 + 39 + 37 + 9 + 26 + 62 = 439 cylinders
ii) SCAN (Elevator) Scheduling:
- The head starts at track 72. In SCAN, the head keeps moving in one direction, servicing requests on the way, until it reaches the end of the disk, and then it reverses direction.
- Assuming the head first moves towards the higher-numbered tracks and the tracks are numbered 0–99:
- Service order: 87, 89, 97, reach track 99, then reverse: 67, 61, 52, 50, 45, 25, 21, 4.
- Head movements: (99 - 72) + (99 - 4) = 27 + 95 = 122 cylinders
- (If the head reverses at the last request instead of the physical end of the disk, i.e. the LOOK variant, the movement is (97 - 72) + (97 - 4) = 118 cylinders.)
iii) SSTF (Shortest Seek Time First) Scheduling:
- The head starts at track 72 and always services the pending request closest to its current position.
- Head movements (in the order of servicing requests):
- Move to track 67 (|72-67| = 5).
- Move to track 61 (|67-61| = 6).
- Move to track 52 (|61-52| = 9).
- Move to track 50 (|52-50| = 2).
- Move to track 45 (|50-45| = 5).
- Move to track 25 (|45-25| = 20).
- Move to track 21 (|25-21| = 4).
- Move to track 4 (|21-4| = 17).
- Move to track 87 (|4-87| = 83).
- Move to track 89 (|87-89| = 2).
- Move to track 97 (|89-97| = 8).
Total head movements in SSTF = 5 + 6 + 9 + 2 + 5 + 20 + 4 + 17 + 83 + 2 + 8 = 161 cylinders
So, the number of head movements for each scheduling algorithm is: i) FCFS: 439 ii) SCAN: 122 (118 with LOOK) iii) SSTF: 161
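FCFS and SSTF are easy to verify with a few lines of C; the request list and starting head position from the question are hard-coded below.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 11
#define START 72

static const int requests[N] = { 45, 21, 67, 97, 4, 50, 89, 52, 61, 87, 25 };

static int fcfs(void) {
    int head = START, total = 0;
    for (int i = 0; i < N; i++) {
        total += abs(requests[i] - head);   /* service in the given order */
        head = requests[i];
    }
    return total;
}

static int sstf(void) {
    int head = START, total = 0;
    int pending[N], done[N] = { 0 };
    memcpy(pending, requests, sizeof pending);
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)          /* pick the closest pending track */
            if (!done[i] && (best == -1 ||
                abs(pending[i] - head) < abs(pending[best] - head)))
                best = i;
        total += abs(pending[best] - head);
        head = pending[best];
        done[best] = 1;
    }
    return total;
}

int main(void) {
    printf("FCFS total head movement: %d\n", fcfs());   /* 439 */
    printf("SSTF total head movement: %d\n", sstf());   /* 161 */
    return 0;
}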
Q5)B) Assume that there are 150 blocks in the disk, numbered from 1 to 150. A file contains 4 blocks B1, B2, B3 and B4, which are stored in the disk such that the ith block of the file is placed in the storage block given by the formula i * 2 + (i + 1). Apply the following methods to manage the free space: i) Grouping (where n = 4) ii) Counting
Ans: First work out which disk blocks the file occupies. Taking i = 1, 2, 3, 4 for B1–B4, the formula i * 2 + (i + 1) gives:
- B1: block 1 * 2 + (1 + 1) = 4
- B2: block 2 * 2 + (2 + 1) = 7
- B3: block 3 * 2 + (3 + 1) = 10
- B4: block 4 * 2 + (4 + 1) = 13
So blocks 4, 7, 10 and 13 are occupied, and the free blocks are 1–3, 5–6, 8–9, 11–12 and 14–150. Grouping and Counting are two ways of recording this free space:
- Grouping (with n = 4): The first free block stores the addresses of n = 4 blocks; the first n – 1 of them are genuinely free, and the last one points to the block that holds the next group of addresses. For the free list above, block 1 can hold the addresses (2, 3, 5, 6); block 6 then holds (8, 9, 11, 12); block 12 holds (14, 15, 16, 17); and so on until all free blocks up to 150 are covered. A large set of free block addresses can therefore be found by reading a single block.
- Counting: Instead of listing every free block, we store pairs of (address of the first free block, number of contiguous free blocks that follow). For this disk the free-space list is:
- (1, 3) → blocks 1 to 3
- (5, 2) → blocks 5 and 6
- (8, 2) → blocks 8 and 9
- (11, 2) → blocks 11 and 12
- (14, 137) → blocks 14 to 150
Because free blocks tend to be contiguous, counting keeps the free-space list very short: here only five entries describe all 146 free blocks.
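A small C sketch of the Counting method for this layout: it marks the four occupied blocks computed above, then scans the disk and prints one (start, length) pair per run of free blocks.

#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 150

int main(void) {
    bool used[NBLOCKS + 1] = { false };       /* blocks numbered 1..150 */

    /* Blocks occupied by the file: i*2 + (i+1) for i = 1..4 -> 4, 7, 10, 13 */
    for (int i = 1; i <= 4; i++)
        used[i * 2 + (i + 1)] = true;

    /* Counting: emit (first free block, number of contiguous free blocks). */
    for (int b = 1; b <= NBLOCKS; b++) {
        if (used[b]) continue;
        int start = b, count = 0;
        while (b <= NBLOCKS && !used[b]) { count++; b++; }
        printf("(%d, %d)\n", start, count);   /* one entry of the free-space list */
    }
    return 0;
}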