First-Come, First-Served (FCFS) CPU Scheduling: A Simple Algorithm and its Performance Implications

Understand the First-Come, First-Served (FCFS) CPU scheduling algorithm and its impact on system performance. This tutorial explains its operation, highlights its simplicity, and discusses its limitations, including the convoy effect and potential for increased waiting times.



First-Come, First-Served (FCFS) CPU Scheduling

Understanding FCFS Scheduling

First-Come, First-Served (FCFS) is a basic CPU scheduling algorithm. Processes are scheduled in the order they arrive in the ready queue—the queue of processes ready to use the CPU. It's a non-preemptive algorithm, meaning a process runs to completion before the next process starts. While simple to understand and implement, FCFS can be inefficient and lead to long waiting times.

Preemptive vs. Non-Preemptive Scheduling

In CPU scheduling, there are two main approaches:

  • Preemptive: The OS can interrupt a running process and switch to another. This is commonly used in time-sharing systems and allows for better responsiveness.
  • Non-preemptive: A process runs to completion before another process begins. This is simpler but less responsive.

In its standard form, FCFS is non-preemptive: once a process begins running, it continues until its execution is complete. A "preemptive FCFS," in which each process is assigned a time quantum and is preempted when that quantum expires, is effectively the Round Robin algorithm rather than FCFS proper.

The Convoy Effect

The convoy effect is a significant drawback of non-preemptive FCFS scheduling. If a long-running process arrives early in the queue, it delays all subsequent processes, even if those processes require much less CPU time, like a convoy of fast cars stuck behind a slow truck. Note that FCFS does not cause true starvation (waiting indefinitely), since every process eventually reaches the front of the queue; the problem is that short processes can suffer waiting times far out of proportion to their burst times.
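The convoy effect can be seen with a small numeric sketch (the burst times here are illustrative, not from the example below): the same three jobs, all arriving at time 0, produce very different average waits depending on whether the long job is first or last in the queue.

```python
# Illustration of the convoy effect under non-preemptive FCFS.
# All processes arrive at time 0; only the queue order differs.

def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS when all processes arrive at t=0."""
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed   # each process waits for everything queued before it
        elapsed += burst        # then runs to completion
    return total_wait / len(bursts)

convoy = fcfs_avg_wait([24, 3, 3])   # long job first: waits are 0, 24, 27
better = fcfs_avg_wait([3, 3, 24])   # short jobs first: waits are 0, 3, 6
print(convoy, better)  # 17.0 3.0
```

The identical workload gives an average wait of 17.0 time units with the long job first, versus 3.0 with it last, which is exactly the inefficiency the convoy effect describes.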

Characteristics of FCFS

  • Simple to implement.
  • Non-preemptive in its standard form.
  • Processes are scheduled according to their arrival time.
  • Can lead to long waiting times and the convoy effect.
  • Does not consider process priority.

Advantages of FCFS

  • Simple and easy to understand.
  • Fair in the sense that processes are served strictly in arrival order.
  • Free of starvation: every process eventually reaches the front of the queue.

Disadvantages of FCFS

  • Can be inefficient due to long waiting times.
  • Susceptible to the convoy effect.
  • Doesn't consider process priority or burst time.

Example FCFS Calculation (Non-Preemptive)

Consider six processes:

Process ID   Arrival Time   Burst Time
P1           0              9
P2           1              3
P3           1              2
P4           1              4
P5           2              3
P6           3              2

Using FCFS (non-preemptive), the processes run strictly in arrival order: P1, then P2, P3, P4, P5, P6. The Gantt chart is:

| P1 (0-9) | P2 (9-12) | P3 (12-14) | P4 (14-18) | P5 (18-21) | P6 (21-23) |

From each process's completion time we can calculate its turnaround time (completion time - arrival time) and waiting time (turnaround time - burst time).

Process ID   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
P1           0              9            9                 9                 0
P2           1              3            12                11                8
P3           1              2            14                13                11
P4           1              4            18                17                13
P5           2              3            21                19                16
P6           3              2            23                20                18

Average Waiting Time: 11

Average Turnaround Time: 14.83

Average Completion Time: 16.17
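The whole calculation above can be reproduced with a short sketch. The function name and the tuple layout are arbitrary choices for this illustration; the input is the process table from the example, already sorted in arrival (FCFS) order.

```python
# Sketch of the non-preemptive FCFS calculation: (pid, arrival, burst).
processes = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
             ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]

def fcfs_schedule(procs):
    """Return {pid: (completion, turnaround, waiting)} under FCFS."""
    time, result = 0, {}
    for pid, arrival, burst in procs:
        time = max(time, arrival)       # CPU may idle until the process arrives
        time += burst                   # run to completion (non-preemptive)
        turnaround = time - arrival     # completion - arrival
        result[pid] = (time, turnaround, turnaround - burst)
    return result

stats = fcfs_schedule(processes)
n = len(processes)
avg_wait = sum(w for _, _, w in stats.values()) / n
avg_tat = sum(t for _, t, _ in stats.values()) / n
print(round(avg_wait, 2), round(avg_tat, 2))  # 11.0 14.83
```

Running this reproduces the table row by row, along with the average waiting time of 11 and average turnaround time of 14.83.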

Binary Semaphores and Mutexes

Understanding Binary Semaphores

A binary semaphore is a synchronization primitive, a tool used to control access to shared resources. It can hold only two values: 0 and 1. A value of 0 represents a locked state (the resource is in use), and 1 represents an unlocked state (the resource is available). It is often used to implement mutual exclusion (only one process at a time accesses a shared resource) and to guard critical sections. A binary semaphore is essentially a less restrictive form of a mutex: the primary difference is that a mutex can only be unlocked by the thread that locked it, whereas a binary semaphore can be signaled (released) by any thread.
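The "any thread may signal" property makes a binary semaphore useful for cross-thread signaling, not just locking. A minimal sketch, using Python's threading.Semaphore as a stand-in for a binary semaphore (the names producer, ready, and data are arbitrary for this example):

```python
import threading

ready = threading.Semaphore(0)   # 0 = "locked": the waiting thread will block
data = []                        # shared resource guarded by the semaphore

def producer():
    data.append(42)   # prepare the shared data...
    ready.release()   # ...then signal from THIS thread (0 -> 1)

t = threading.Thread(target=producer)
t.start()
ready.acquire()       # main thread blocks until producer signals (1 -> 0)
t.join()
print(data)  # [42]
```

Note that the thread releasing the semaphore (producer) is not the thread that acquires it (main), which is exactly what a strict mutex would forbid.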

Understanding Mutexes

A mutex (mutual exclusion) is a more specialized type of locking mechanism. It’s similar to a binary semaphore in that it provides mutual exclusion, but with the added restriction that the same thread that locks the mutex must also unlock it. This ensures that locks are released in a controlled and predictable manner.
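A typical mutex use is guarding a critical section so that concurrent updates to shared state do not interleave. A minimal sketch using Python's threading.Lock as the mutex (note one caveat of this stand-in: CPython's Lock does not actually enforce owner-only release; threading.RLock is closer to a strict mutex in that respect):

```python
import threading

counter = 0
mutex = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with mutex:          # lock and unlock by the same thread
            counter += 1     # critical section: one thread at a time

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the mutex, the read-modify-write on counter could interleave across threads and lose updates; with it, the final count is always exactly 4 x 10,000.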