
Process Management

Chapter 2 • Intermediate

60 min


Process management is one of the most important functions of an operating system. It involves creating, scheduling, and terminating processes, as well as managing process synchronization and communication.

Process Concept

What is a Process?

A process is a program in execution. It is an active entity that requires resources such as CPU time, memory, files, and I/O devices to execute.

Process vs Program:

  • Program: Passive entity stored on disk (executable file)
  • Process: Active entity loaded in memory and executing

Process States

A process can be in one of the following states:

  1. New: Process is being created
  2. Ready: Process is waiting to be assigned to a processor
  3. Running: Process instructions are being executed
  4. Waiting/Blocked: Process is waiting for some event (I/O, signal)
  5. Terminated: Process has finished execution

State Transitions:

  • New → Ready: Process admitted
  • Ready → Running: Process scheduled
  • Running → Ready: Interrupt or time slice expired
  • Running → Waiting: I/O request or event wait
  • Waiting → Ready: I/O complete or event occurred
  • Running → Terminated: Process finished
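
The legal transitions listed above can be captured in a small C sketch (the state and function names are illustrative, not from any real kernel):

```c
#include <stdbool.h>

/* The five process states from the list above. */
enum pstate { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Returns true if from -> to is one of the legal transitions. */
bool legal_transition(enum pstate from, enum pstate to) {
    switch (from) {
    case NEW:     return to == READY;          /* admitted */
    case READY:   return to == RUNNING;        /* scheduled */
    case RUNNING: return to == READY ||        /* interrupt / time slice */
                         to == WAITING ||      /* I/O or event wait */
                         to == TERMINATED;     /* finished */
    case WAITING: return to == READY;          /* I/O complete / event */
    default:      return false;                /* terminated is final */
    }
}
```

Note that a waiting process never goes straight back to running; it must pass through the ready queue first.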

Process Control Block (PCB)

Each process has a Process Control Block containing:

  • Process ID (PID): Unique identifier
  • Process State: Current state (ready, running, etc.)
  • Program Counter: Address of next instruction
  • CPU Registers: Register values
  • CPU Scheduling Information: Priority, scheduling queue
  • Memory Management Information: Base/limit registers, page tables
  • Accounting Information: CPU time, time limits
  • I/O Status Information: List of I/O devices allocated
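
A heavily simplified PCB can be sketched as a C struct; the field names and fixed sizes here are illustrative only (a real kernel structure, e.g. Linux's task_struct, carries far more fields):

```c
#include <stdint.h>

struct pcb {
    int       pid;              /* unique process identifier */
    int       state;            /* new / ready / running / waiting / terminated */
    uintptr_t program_counter;  /* address of the next instruction */
    uintptr_t registers[16];    /* saved CPU register values */
    int       priority;         /* CPU scheduling information */
    uintptr_t base, limit;      /* memory-management registers */
    long      cpu_time_used;    /* accounting information */
    int       io_devices[8];    /* allocated I/O devices (by id) */
};
```

On a context switch, the kernel saves the running process's registers and program counter into its PCB and restores them from the PCB of the next process.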

Process Scheduling

Process scheduling is the activity of selecting which process to run next when multiple processes are ready.

Scheduling Criteria

  1. CPU Utilization: Keep CPU busy
  2. Throughput: Number of processes completed per time unit
  3. Turnaround Time: Time from submission to completion
  4. Waiting Time: Time spent waiting in ready queue
  5. Response Time: Time from submission to first response
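
The last three criteria are related: for each process, turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time. A minimal sketch of these identities (helper names are my own):

```c
/* turnaround = completion - arrival */
int turnaround_time(int completion, int arrival) {
    return completion - arrival;
}

/* waiting = turnaround - burst (time spent in the ready queue) */
int waiting_time(int completion, int arrival, int burst) {
    return turnaround_time(completion, arrival) - burst;
}
```

These identities suffice to verify the waiting times in the scheduling examples that follow.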

Scheduling Algorithms

1. First-Come, First-Served (FCFS)

Principle: Process that arrives first is served first.

Characteristics:

  • Non-preemptive
  • Simple to implement
  • Can cause convoy effect (short process behind long process)

Example:

Processes arrive at time 0:

  • P1: Burst time = 24
  • P2: Burst time = 3
  • P3: Burst time = 3

Gantt Chart:

```
|----P1----|--P2--|--P3--|
0         24     27     30
```

Waiting Times:

  • P1: 0
  • P2: 24
  • P3: 27
  • Average: (0 + 24 + 27) / 3 = 17
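
The FCFS numbers above can be reproduced mechanically. A sketch for the case where all processes arrive at t = 0, given in arrival order (the function name is my own):

```c
/* Average waiting time under FCFS when every process arrives at t = 0.
   Each process waits for the sum of all earlier bursts. */
double fcfs_avg_wait(const int burst[], int n) {
    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;  /* process i has waited this long */
        clock += burst[i];    /* it then runs to completion */
    }
    return (double)total_wait / n;
}
```

For the bursts 24, 3, 3 this yields (0 + 24 + 27) / 3 = 17, matching the chart.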

2. Shortest Job First (SJF)

Principle: Process with shortest burst time is scheduled first.

Characteristics:

  • Can be preemptive (Shortest Remaining Time First - SRTF) or non-preemptive
  • Optimal for minimizing average waiting time
  • Requires knowledge of burst times

Example (Non-preemptive):

  • P1: Arrival = 0, Burst = 7
  • P2: Arrival = 2, Burst = 4
  • P3: Arrival = 4, Burst = 1
  • P4: Arrival = 5, Burst = 4

Gantt Chart:

```
|--P1--|P3|--P2--|--P4--|
0      7  8     12     16
```

Waiting Times:

  • P1: 0
  • P2: 6 (12 - 4 - 2)
  • P3: 3 (8 - 1 - 4)
  • P4: 7 (16 - 4 - 5)
  • Average: (0 + 6 + 3 + 7) / 4 = 4
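
The same result can be checked by simulating non-preemptive SJF directly; this sketch (names and the 16-process cap are my own) repeatedly picks the arrived, unfinished process with the smallest burst:

```c
/* Average waiting time under non-preemptive SJF (up to 16 processes). */
double sjf_avg_wait(const int arrival[], const int burst[], int n) {
    int done[16] = {0};
    int clock = 0, total_wait = 0;
    for (int completed = 0; completed < n; ) {
        int pick = -1;
        for (int i = 0; i < n; i++)          /* shortest arrived job */
            if (!done[i] && arrival[i] <= clock &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { clock++; continue; }   /* CPU idle */
        total_wait += clock - arrival[pick];     /* time spent ready */
        clock += burst[pick];                    /* runs to completion */
        done[pick] = 1;
        completed++;
    }
    return (double)total_wait / n;
}
```

For the four processes above it produces an average of 4, as computed by hand.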

3. Priority Scheduling

Principle: Process with highest priority is scheduled first.

Characteristics:

  • Can be preemptive or non-preemptive
  • Priority can be internal (time limits, memory) or external (importance)
  • Can cause starvation (low priority processes may never execute)

4. Round Robin (RR)

Principle: Each process gets a small unit of CPU time (time quantum), then moves to end of ready queue.

Characteristics:

  • Preemptive
  • Fair scheduling
  • Good for time-sharing systems
  • Performance depends on time quantum size

Example:

Time quantum = 4

  • P1: Burst = 24
  • P2: Burst = 3
  • P3: Burst = 3

Gantt Chart:

```
|P1|P2|P3|P1|P1|P1|P1|P1|
0  4  7  10 14 18 22 26 30
```

Waiting Times:

  • P1: 6 (10 - 4)
  • P2: 4 (4 - 0)
  • P3: 7 (7 - 0)
  • Average: (6 + 4 + 7) / 3 = 5.67
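
A small simulation reproduces these numbers. Assuming all processes arrive at t = 0, cycling through the array in index order matches the FIFO ready queue (the function name and 16-process cap are my own):

```c
/* Average waiting time under round robin with a fixed quantum,
   all processes arriving at t = 0 (up to 16 processes). */
double rr_avg_wait(const int burst[], int n, int quantum) {
    int remaining[16], completion[16];
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    int clock = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            clock += run;               /* process i uses its slice */
            remaining[i] -= run;
            if (remaining[i] == 0) {    /* finished: record completion */
                completion[i] = clock;
                left--;
            }
        }
    }
    double total_wait = 0;              /* waiting = completion - burst */
    for (int i = 0; i < n; i++) total_wait += completion[i] - burst[i];
    return total_wait / n;
}
```

For bursts 24, 3, 3 and quantum 4 this gives (6 + 4 + 7) / 3 ≈ 5.67.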

5. Multilevel Queue Scheduling

Principle: Ready queue is partitioned into separate queues (foreground, background). Each queue has its own scheduling algorithm.

6. Multilevel Feedback Queue

Principle: Allows processes to move between queues based on their behavior and CPU burst characteristics.

Process Synchronization

When multiple processes access shared data concurrently, we need synchronization to ensure data consistency.

Critical Section Problem

Critical Section: A code segment that accesses shared variables and must not be executed by more than one process at a time.

Requirements:

  1. Mutual Exclusion: Only one process in critical section at a time
  2. Progress: If no process is in its critical section and some process wants to enter, the choice of which process enters next cannot be postponed indefinitely
  3. Bounded Waiting: There is a bound on how many times other processes may enter the critical section after a process has requested entry

Solutions to Critical Section

1. Peterson's Solution

Software-based solution for two processes:

```c
#include <stdbool.h>

int turn;       // whose turn it is when both processes want in
bool flag[2];   // flag[i] = true: process i wants to enter

// Process i (the other process is j)
do {
    flag[i] = true;              // announce intent to enter
    turn = j;                    // yield priority to the other process
    while (flag[j] && turn == j)
        ;                        // busy-wait until it is safe to enter

    // critical section

    flag[i] = false;             // exit protocol

    // remainder section
} while (true);
```

2. Semaphores

Semaphore: Integer variable that can only be accessed via two atomic operations: wait() and signal().

Binary Semaphore: Value is 0 or 1 (mutex lock)

Counting Semaphore: Value can range over unrestricted domain

```c
// wait() and signal() must execute atomically.
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

signal(S) {
    S++;
}
```

Example - Producer-Consumer Problem:

```c
semaphore mutex = 1;   // mutual exclusion on the buffer
semaphore full  = 0;   // number of filled slots
semaphore empty = n;   // number of empty slots (n = buffer size)

// Producer
do {
    // produce item
    wait(empty);       // wait for a free slot
    wait(mutex);
    // add item to buffer
    signal(mutex);
    signal(full);      // one more filled slot
} while (true);

// Consumer
do {
    wait(full);        // wait for a filled slot
    wait(mutex);
    // remove item from buffer
    signal(mutex);
    signal(empty);     // one more free slot
    // consume item
} while (true);
```
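
The pseudocode above maps directly onto POSIX semaphores. The following is a runnable sketch, not a canonical implementation; the buffer size, item count, and names are my own choices:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define BUFSZ 4      /* bounded buffer capacity (the "n" above) */
#define ITEMS 20     /* items the producer will send */

static int   buffer[BUFSZ], in = 0, out = 0;
static sem_t empty_slots, full_slots, mutex;
static long  sum_consumed = 0;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= ITEMS; item++) {
        sem_wait(&empty_slots);             /* wait(empty) */
        sem_wait(&mutex);                   /* wait(mutex) */
        buffer[in] = item;                  /* add item to buffer */
        in = (in + 1) % BUFSZ;
        sem_post(&mutex);                   /* signal(mutex) */
        sem_post(&full_slots);              /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);              /* wait(full) */
        sem_wait(&mutex);                   /* wait(mutex) */
        sum_consumed += buffer[out];        /* remove item from buffer */
        out = (out + 1) % BUFSZ;
        sem_post(&mutex);                   /* signal(mutex) */
        sem_post(&empty_slots);             /* signal(empty) */
    }
    return NULL;
}

/* Runs one producer and one consumer; returns the sum of consumed items. */
long run_producer_consumer(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFSZ);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return sum_consumed;    /* 1 + 2 + ... + ITEMS */
}
```

Here the binary semaphore mutex plays the role of a lock, while the two counting semaphores track slot availability.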

3. Monitors

High-level synchronization construct that encapsulates shared data and operations.

Deadlocks

A deadlock is a situation in which a set of processes is blocked because each process holds a resource and is waiting for a resource held by another process in the set.

Necessary Conditions for Deadlock

  1. Mutual Exclusion: Resources cannot be shared
  2. Hold and Wait: Process holds resource while waiting for another
  3. No Preemption: Resources cannot be forcibly taken
  4. Circular Wait: Circular chain of processes waiting for resources

Deadlock Handling Methods

1. Deadlock Prevention

Prevent one of the four necessary conditions:

  • Eliminate mutual exclusion (not always possible)
  • Eliminate hold and wait (request all resources at once)
  • Allow preemption
  • Eliminate circular wait (impose ordering on resources)
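
Eliminating circular wait in practice usually means acquiring locks in a fixed global order. A minimal C sketch using POSIX mutexes, ordering by address (helper names are my own):

```c
#include <pthread.h>

/* Acquire two mutexes in a fixed global order (lowest address first),
   so no two threads can end up waiting on each other in a cycle. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a > b) { pthread_mutex_t *t = a; a = b; b = t; }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);   /* unlock order does not matter */
    pthread_mutex_unlock(b);
}
```

If every thread that needs both locks goes through lock_pair, a circular wait between the two mutexes is impossible regardless of the argument order callers use.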

2. Deadlock Avoidance

Banker's Algorithm: Before granting a resource request, the system checks whether the resulting state would still be safe; if not, the requesting process must wait.

Safe State: System can allocate resources to each process in some order and avoid deadlock.

Unsafe State: May lead to deadlock, but not guaranteed.
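
The safety check at the heart of the Banker's Algorithm can be sketched as follows (the fixed maximum sizes and names are my own simplifications):

```c
#include <stdbool.h>

/* Returns true if the state is safe: some order exists in which every
   process can obtain its remaining need and finish.
   n processes (<= 8), m resource types (<= 4). */
bool is_safe(int n, int m, const int avail[],
             int alloc[][4], int need[][4]) {
    int  work[4];
    bool finished[8] = {false};
    for (int j = 0; j < m; j++) work[j] = avail[j];

    for (int done = 0; done < n; ) {
        bool progressed = false;
        for (int i = 0; i < n; i++) {
            if (finished[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < m; j++)      /* need[i] <= work? */
                if (need[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {                /* i finishes, releases alloc */
                for (int j = 0; j < m; j++) work[j] += alloc[i][j];
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;       /* nobody can finish: unsafe */
    }
    return true;
}
```

A request is granted only if the state that would result still passes this check.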

3. Deadlock Detection

Allow deadlock to occur, then detect and recover.

Detection Algorithm:

  • Maintain wait-for graph
  • Periodically check for cycles
  • If cycle exists, deadlock detected
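
Cycle detection on a wait-for graph is a depth-first search that looks for a back edge. A sketch on an adjacency matrix (sizes and names are my own):

```c
#include <stdbool.h>

#define MAXP 8   /* maximum processes in this sketch */

/* color: 0 = unvisited, 1 = on current DFS path, 2 = fully explored */
static bool dfs_cycle(int u, int n, bool edge[][MAXP], int color[]) {
    color[u] = 1;
    for (int v = 0; v < n; v++) {
        if (!edge[u][v]) continue;
        if (color[v] == 1) return true;   /* back edge: cycle found */
        if (color[v] == 0 && dfs_cycle(v, n, edge, color)) return true;
    }
    color[u] = 2;
    return false;
}

/* edge[i][j] = true means process i is waiting for process j. */
bool has_deadlock(int n, bool edge[][MAXP]) {
    int color[MAXP] = {0};
    for (int u = 0; u < n; u++)
        if (color[u] == 0 && dfs_cycle(u, n, edge, color))
            return true;
    return false;
}
```

A cycle in the wait-for graph is exactly a set of processes each waiting on the next, i.e. a deadlock (assuming single-instance resources).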

4. Deadlock Recovery

  • Process Termination: Kill one or more processes
  • Resource Preemption: Preempt resources from processes

GATE CS Important Points

  1. Scheduling Algorithms: Know FCFS, SJF, Priority, Round Robin thoroughly
  2. Gantt Charts: Practice drawing and calculating waiting/turnaround times
  3. Semaphores: Understand wait/signal operations and their applications
  4. Deadlocks: Know the four conditions and handling methods
  5. Banker's Algorithm: Practice solving problems

Practice Tips

  1. Draw Gantt Charts: Visual representation helps understand scheduling
  2. Calculate Metrics: Practice calculating waiting time, turnaround time, response time
  3. Trace Semaphore Operations: Understand how semaphore values change
  4. Solve Deadlock Problems: Practice identifying and resolving deadlocks
  5. Previous Year Questions: Solve GATE OS questions from last 10 years