CPU Scheduling Algorithms in OS: Complete Guide

Published: December 11, 2025 | Reading Time: 5 minutes

Key Highlights of the Blog

CPU scheduling algorithms in OS decide which process gets the CPU and when. They ensure smooth multitasking, fast response times, and efficient resource use. This comprehensive guide covers:

- Key scheduling terminology: arrival, burst, completion, turnaround, waiting, and response times
- How the scheduler and dispatcher move processes on and off the CPU
- The major algorithms: FCFS, SJF, Priority, Round Robin, MLQ, and MLFQ
- The criteria used to compare them: utilization, throughput, fairness, and more

Introduction

Every second, your computer quietly does the tricky work of choosing which of the tasks waiting in line deserves the CPU next. Applications, background updates, browser tabs, and system processes all compete for their share of CPU time, and the operating system keeps them playing together like a well-rehearsed orchestra.

The way this split-second decision-making works determines:

- How responsive applications feel to the user
- How efficiently the CPU is utilized
- How fairly competing processes share the processor

For students and developers, grasping this flow is a prerequisite for understanding performance and OS behavior.

This guide breaks down CPU scheduling algorithms in OS with clarity, explaining how processes are selected, how algorithms differ, and how scheduling shapes responsiveness, fairness, and system efficiency in modern computing.

What are CPU Scheduling Algorithms in OS?

In a multitasking operating system, several programs (called processes) want to use the CPU at the same time. However, since the CPU can only handle one process at a time, the operating system needs a way to decide which process gets to run next. This decision-making is handled by CPU scheduling algorithms.

CPU scheduling algorithms in OS are rules or strategies used by the system's scheduler to manage the order in which processes are allowed to use the CPU. The aim is to make the system efficient, responsive, and fair.

Key Terminologies in CPU Scheduling

Understanding CPU scheduling requires a handful of key terms. These definitions underpin how processes are managed and measured by the operating system:

Arrival Time

The moment a process enters the ready queue, where it waits for its turn to use the CPU.

Burst Time (Execution Time)

The total time a process requires on the CPU to complete its execution. Sometimes called execution time or running time.

Completion Time

The exact time when a process finishes all its CPU execution.

Turnaround Time

The total time taken from when a process arrives in the ready queue to when it completes execution.

Formula: Turnaround Time = Completion Time – Arrival Time

Waiting Time

The amount of time a process spends waiting in the ready queue before it gets CPU time.

Formula: Waiting Time = Turnaround Time – Burst Time

Response Time

The time from when a process enters the ready queue until it gets the CPU for the first time. This is especially important in interactive systems.

Priority

A value that indicates how important a process is. In priority-based scheduling, higher-priority processes are chosen first.
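The two formulas above can be checked with a short Python sketch; the function names and the numbers are illustrative, not from the article:

```python
# Sketch of the metric formulas defined above.

def turnaround_time(completion, arrival):
    """Turnaround Time = Completion Time - Arrival Time."""
    return completion - arrival

def waiting_time(turnaround, burst):
    """Waiting Time = Turnaround Time - Burst Time."""
    return turnaround - burst

# A process arrives at t=2, needs 4 units of CPU, and finishes at t=9.
tat = turnaround_time(completion=9, arrival=2)  # 9 - 2 = 7
wt = waiting_time(turnaround=tat, burst=4)      # 7 - 4 = 3
print(tat, wt)  # 7 3
```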

How do CPU Scheduling Algorithms in OS Work?

To understand how the CPU decides which task (or process) to run, it's important to first know how these processes move through different stages during their life inside the computer.

The Scheduling Process

  1. Pick a Process: The CPU scheduler looks at the ready queue and selects one process to run
  2. Switching Control (Dispatching): A small part of the system, called the dispatcher, hands control of the CPU to the chosen process. If another process was running before, this switch is called a context switch
  3. Run the Process: The process runs for a certain amount of time or until it finishes, whichever comes first
  4. Return to Scheduler: After the process finishes or its time is up, the control goes back to the scheduler to pick the next process
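The four steps above can be sketched as a minimal dispatch loop in Python. `Process` and `schedule` are hypothetical stand-ins, non-preemptive for simplicity:

```python
from collections import deque

# Minimal sketch of the pick -> dispatch -> run -> return cycle.
class Process:
    def __init__(self, name, burst):
        self.name = name
        self.burst = burst              # CPU time this process needs

def schedule(ready_queue):
    clock, log = 0, []
    while ready_queue:
        proc = ready_queue.popleft()    # 1. pick a process
        # 2. the dispatcher hands over the CPU (a context switch in real OSes)
        clock += proc.burst             # 3. run until it finishes
        log.append((proc.name, clock))  # 4. control returns to the scheduler
    return log

print(schedule(deque([Process("A", 5), Process("B", 3)])))
# [('A', 5), ('B', 8)]
```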

Process Life Cycle

Every process (think of it as a running program) goes through several stages from the moment it is created to the moment it finishes. Here are the main stages:

- New: the process is being created
- Ready: it is waiting in the ready queue for its turn on the CPU
- Running: it is currently executing on the CPU
- Waiting (Blocked): it is waiting for an event, such as I/O, to complete
- Terminated: it has finished execution

Example: How Scheduling Works

Let's say we have two processes: Process A, which needs 5 ms of CPU time, and Process B, waiting behind it in the ready queue.

Working:

  1. The scheduler selects Process A and gives it the CPU
  2. Process A runs for 5ms (or until it's interrupted)
  3. Then the CPU is returned to the scheduler
  4. Now, the scheduler picks Process B to run next

Types of CPU Scheduling Algorithms

CPU Scheduling Algorithms in OS are how an operating system decides which process gets to use the CPU next. This becomes important when many tasks are waiting to run, and the CPU can only handle one at a time. Scheduling helps ensure efficient use of the processor, improves performance, and keeps all processes running smoothly.

CPU scheduling in operating systems falls into two main categories:

1. Non-Preemptive Scheduling

In non-preemptive scheduling, once a process gets the CPU, it keeps running until it either terminates or voluntarily gives up control (for example, to wait for input). The operating system does not interrupt it.

Example

Think of it as if you are at the barber shop. Once a customer has been selected to get a haircut, no one else is allowed to take their place until the haircut is finished.

Advantages

- Simple to implement, with minimal scheduling overhead
- No context-switching cost while a process runs
- Predictable: once a process starts, it runs until it yields or finishes

Disadvantages

- A single long process can hold the CPU and delay everything else
- Poor responsiveness for interactive workloads

2. Preemptive Scheduling

In preemptive scheduling, the operating system can interrupt a running process and switch to another. This allows better sharing of CPU time among all active processes.

Example

Think of a customer at a bank teller who gets interrupted when someone with urgent work arrives. The teller pauses the current work to help the urgent customer first.

Advantages

- Quick response: urgent or interactive tasks get the CPU promptly
- Better sharing of CPU time among all active processes

Disadvantages

- Higher overhead from frequent context switches
- More complex to implement and manage

Important CPU Scheduling Algorithms in OS

1. First-Come, First-Served (FCFS) Scheduling

First-Come, First-Served (FCFS) is one of the most basic CPU scheduling algorithms in operating systems. Just like the name suggests, the CPU processes tasks in the exact order they arrive, and it doesn't interrupt a running process until it's fully completed.

Once a process reaches the front of the queue, it gets the CPU, and no other process can take its place until it's done.

How It Works

Let's say we have three processes that arrive at different times and need different amounts of CPU time (called burst time):

| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 5          |
| P2      | 1            | 3          |
| P3      | 2            | 8          |

Execution Order:

Since P1 arrives first, it runs first. P2 is next, followed by P3. So the order of execution is: P1 → P2 → P3
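A short Python sketch (hypothetical `fcfs` helper) reproduces the example above and also yields each process's completion, turnaround, and waiting times:

```python
# FCFS sketch: processes run in arrival order, each to completion.
def fcfs(processes):
    """processes: list of (name, arrival, burst), sorted by arrival time."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)       # CPU may sit idle until arrival
        time += burst                   # run to completion, no preemption
        turnaround = time - arrival
        results[name] = {"completion": time,
                         "turnaround": turnaround,
                         "waiting": turnaround - burst}
    return results

print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
# P1 completes at 5 (waits 0), P2 at 8 (waits 4), P3 at 16 (waits 6)
```

Note how P3 waits 6 ms behind earlier arrivals, a small taste of the convoy effect.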

Advantages of FCFS

- Simple to understand and implement
- Fair in the sense that processes run strictly in arrival order

Disadvantages of FCFS

- A long job at the front makes every later job wait (the convoy effect)
- High average waiting time when long CPU-bound jobs arrive first

2. Shortest Job First (SJF) Scheduling

Shortest Job First (SJF) scheduling picks the process that needs the least amount of CPU time next. In simple terms, it gives priority to the task that will finish the fastest. As one of the key CPU Scheduling Algorithms in OS, SJF ensures efficiency by selecting, among multiple waiting processes, the one with the shortest burst time (CPU time needed) first.

How It Works

Here's an example with three processes and their burst times:

| Process | Burst Time |
|---------|------------|
| P1      | 6          |
| P2      | 2          |
| P3      | 4          |

Execution Order:

Since P2 has the shortest burst time, it runs first. Then comes P3, and finally P1. So the execution sequence is: P2 → P3 → P1
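As a sketch, the same order falls out of sorting by burst time. `sjf_order` and `average_waiting` are hypothetical helpers that assume all jobs are ready at t=0:

```python
# Non-preemptive SJF sketch: with every process ready at t=0,
# simply run them in ascending burst-time order.
def sjf_order(processes):
    """processes: list of (name, burst); returns names, shortest job first."""
    return [name for name, _ in sorted(processes, key=lambda p: p[1])]

def average_waiting(processes):
    """Average waiting time when jobs run shortest-first from t=0."""
    elapsed, total_wait = 0, 0
    for _, burst in sorted(processes, key=lambda p: p[1]):
        total_wait += elapsed   # this job waited while earlier jobs ran
        elapsed += burst
    return total_wait / len(processes)

jobs = [("P1", 6), ("P2", 2), ("P3", 4)]
print(sjf_order(jobs))        # ['P2', 'P3', 'P1']
print(average_waiting(jobs))  # (0 + 2 + 6) / 3, roughly 2.67
```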

Advantages of SJF

- Minimizes average waiting time, since short jobs never wait behind long ones
- High throughput: many short jobs finish quickly

Disadvantages of SJF

- Long processes can starve if short jobs keep arriving
- Requires knowing or predicting burst times in advance, which is hard in practice

3. Priority Scheduling

Priority Scheduling is a system where every process gets a priority level. As part of CPU scheduling in operating system design, the CPU selects the process with the highest priority to run first. A lower numeric value means a higher priority, so a process with priority 1 will be chosen over one with priority 3. This system is great for situations where certain tasks are more urgent or important than others.

How It Works

Let's look at a set of processes with their priority levels and burst times (how long they need the CPU):

| Process | Priority | Burst Time |
|---------|----------|------------|
| P1      | 2        | 5          |
| P2      | 1        | 3          |
| P3      | 3        | 4          |

Execution Order:

P2 has the highest priority (priority 1), so it runs first. Next comes P1 (priority 2), then P3 (priority 3). So the order is: P2 → P1 → P3
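A minimal sketch of this selection rule (hypothetical `priority_order` helper; the lowest number wins):

```python
# Priority scheduling sketch: lower number = higher priority,
# so sort ascending by the priority value.
def priority_order(processes):
    """processes: list of (name, priority, burst); highest priority first."""
    return [name for name, _prio, _burst in sorted(processes, key=lambda p: p[1])]

print(priority_order([("P1", 2, 5), ("P2", 1, 3), ("P3", 3, 4)]))
# ['P2', 'P1', 'P3']
```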

Advantages of Priority Scheduling

- Urgent or critical tasks receive the CPU immediately
- Well suited to real-time and mission-critical workloads

Disadvantages of Priority Scheduling

- Low-priority processes can starve unless aging gradually raises their priority
- Choosing and maintaining sensible priorities adds complexity

4. Round Robin (RR) Scheduling

Round Robin scheduling algorithm in OS is one of the CPU Scheduling Algorithms in OS, created to give every process a fair share of the CPU. Instead of letting one process run until it's finished, each process is given a fixed amount of CPU time, called a time quantum or time slice.

If a process doesn't finish during its allotted time, it's paused, and the CPU moves on to the next process in the queue. The paused process then waits its turn to resume in the next round.

How It Works

Let's take an example where the time quantum is 4 milliseconds (ms). Three processes enter the system, each with different CPU burst times:

| Process | Burst Time |
|---------|------------|
| P1      | 6 ms       |
| P2      | 3 ms       |
| P3      | 5 ms       |

Execution Order:

  1. P1 gets the CPU first and runs for 4 ms (still needs 2 ms more)
  2. P2 runs next for 3 ms (finishes execution)
  3. P3 runs for 4 ms (still needs 1 ms more)
  4. P1 gets its remaining 2 ms
  5. P3 finishes with its remaining 1 ms

Final Execution: P1 (4) → P2 (3) → P3 (4) → P1 (2) → P3 (1)
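The trace above can be reproduced with a short Round Robin sketch (hypothetical `round_robin` helper; all three processes are assumed queued at t=0):

```python
from collections import deque

# Round Robin sketch: each process runs for at most one quantum,
# and unfinished work rejoins the back of the queue.
def round_robin(processes, quantum):
    """processes: list of (name, burst); returns the (name, slice) trace."""
    queue = deque(processes)
    trace = []
    while queue:
        name, remaining = queue.popleft()
        run = min(remaining, quantum)        # run for one time slice at most
        trace.append((name, run))
        if remaining > run:                  # not done: back of the queue
            queue.append((name, remaining - run))
    return trace

print(round_robin([("P1", 6), ("P2", 3), ("P3", 5)], quantum=4))
# [('P1', 4), ('P2', 3), ('P3', 4), ('P1', 2), ('P3', 1)]
```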

Advantages of Round Robin

- Every process gets an equal share of CPU time, so no process starves
- Good response times, which suits interactive and time-sharing systems

Disadvantages of Round Robin

- A very small quantum causes excessive context switching
- A very large quantum makes it degenerate into FCFS

5. Multilevel Queue Scheduling (MLQ)

Multilevel queue scheduling in OS is a strategy where processes are sorted into different groups or "queues" based on their nature or priority level. Each queue can use a different scheduling algorithm that's best suited for the type of processes it holds.

Think of it like having separate waiting lines for different types of tasks: critical system tasks, user-interactive programs, and background batch jobs, where each line follows its own rules, a concept central to CPU scheduling in operating system design.

How It Works

Processes are identified and divided according to their different priorities in various queues. Here's a detailed example:

System Queue: Highest priority. Holds critical operating-system processes; often scheduled by priority.

Interactive Queue: Medium priority. Holds user-facing programs; typically uses Round Robin for responsiveness.

Batch Queue: Lowest priority. Holds long-running background jobs; commonly uses FCFS.

Advantages of MLQ

- Each queue can use the scheduling algorithm best suited to its workload
- Clean separation of system, interactive, and batch work

Disadvantages of MLQ

- Processes are fixed to one queue and cannot move between them
- Lower-priority queues can starve while higher-priority queues stay busy

6. Multilevel Feedback Queue (MFQ) Scheduling

The Multilevel Feedback Queue (MFQ) is an advanced CPU scheduling method designed to handle multiple processes efficiently. As one of the key CPU Scheduling Algorithms in OS, it builds upon the Multilevel Queue (MLQ) system, but with a major improvement: processes can move between queues based on how they behave during execution.

This movement helps the system adapt to different process needs, giving preference to interactive or short tasks without completely ignoring long-running ones.

How It Works

  1. New processes enter the highest-priority queue, which uses a short time slice
  2. A process that uses up its entire slice is demoted to a lower-priority queue with a longer slice
  3. A process that gives up the CPU early (for example, to wait for I/O) stays in, or returns to, a higher queue
  4. To prevent starvation, processes that wait too long can be promoted back up (aging)
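As a rough illustration of MFQ's demote-on-full-slice behavior, here is a hypothetical two-level sketch: a process that consumes its whole slice at level 0 drops to level 1, and level 1 only runs when level 0 is empty (aging is omitted for brevity):

```python
from collections import deque

# Hypothetical two-level MFQ sketch. slices[i] is the quantum for level i.
def mfq(processes, slices=(2, 4)):
    """processes: list of (name, burst); returns a (name, level, slice) trace."""
    levels = [deque(processes), deque()]
    trace = []
    while any(levels):
        lvl = 0 if levels[0] else 1            # always serve the top queue first
        name, remaining = levels[lvl].popleft()
        run = min(remaining, slices[lvl])
        trace.append((name, lvl, run))
        if remaining > run:                    # used the whole slice: demote
            dest = min(lvl + 1, len(levels) - 1)
            levels[dest].append((name, remaining - run))
    return trace

print(mfq([("short", 2), ("long", 7)]))
# [('short', 0, 2), ('long', 0, 2), ('long', 1, 4), ('long', 1, 1)]
```

The short job finishes inside its first slice and never leaves the top queue, while the CPU-bound job is pushed down to the longer-slice level.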

Advantages of MFQ

- Adapts to process behavior, favoring short and I/O-bound jobs
- Keeps interactive tasks responsive while long jobs still make progress

Disadvantages of MFQ

- Complex to configure: queue count, time slices, and promotion/demotion rules all need tuning

Comparison and Analysis of Scheduling Algorithms

How well a CPU scheduling algorithm performs depends on factors such as system workload, CPU burst times, and the mix of CPU-bound and I/O-bound jobs. Understanding their comparative strengths and weaknesses helps determine which algorithm best suits a particular environment.

Below is a detailed analysis of the major scheduling algorithms, focusing on their performance, fairness, complexity, and impact on waiting times and response times.

| Algorithm | How It Works | Strengths (Pros) | Weaknesses (Cons) | Best Use Cases |
|---|---|---|---|---|
| First-Come, First-Served (FCFS) | Runs processes one by one in arrival order, giving each the CPU until it terminates | Simple and easy to implement | Performs poorly when long CPU-bound jobs arrive first, forcing short I/O-bound jobs to wait and raising overall waiting time | Batch-processing systems where predictability matters more than responsiveness |
| Shortest Job First (SJF) | Selects the process with the shortest CPU burst time, so quick tasks finish first | Minimizes average waiting time and maximizes throughput by finishing short jobs quickly | Long processes can starve; requires accurate burst-time prediction, which is hard in real systems | Workloads dominated by short tasks, where efficiency is the primary goal |
| Shortest Remaining Time First (SRTF) | Preemptive SJF: the OS interrupts the current process if a new arrival has a shorter remaining burst | Improves response time for short tasks and reduces waiting time for interactive processes | More complex to implement; can repeatedly preempt long CPU-bound jobs, causing starvation | Systems needing fast responses to short tasks, such as interactive applications |
| Priority Scheduling | Each process gets a priority value, and the OS always runs the highest-priority process | Urgent or critical tasks receive immediate CPU attention | Low-priority processes may starve unless aging gradually raises their priority | Real-time systems or scenarios where certain tasks must run immediately |
| Round Robin (RR) | Gives each process a fixed time slice (quantum) and cycles through the ready queue, preempting tasks when their slice expires | Fair, since every process gets an equal share of CPU time, and responsive for interactive jobs | A tiny quantum causes excessive context switches, while a large one makes it behave like FCFS | Time-sharing systems, multi-user environments, and interactive applications |
| Multilevel Queue (MLQ) | Splits processes into several queues by priority or type (system, interactive, batch), each with its own scheduling algorithm | Each queue is optimized for the type of work it handles | Lower-priority queues may starve if higher-priority queues stay busy for long periods | Systems managing a wide mix of process types, from system processes to background jobs |
| Multilevel Feedback Queue (MLFQ) | Lets processes move between queues based on behavior, favoring short and I/O-bound jobs while gradually demoting CPU-bound ones | Flexible and adaptive: responsive for interactive tasks while long-running processes still progress | Complex to configure: queue levels, time slices, and promotion/demotion rules need careful tuning | Modern general-purpose operating systems needing adaptability and fast response |

Summary

Each CPU scheduling algorithm has its pros and cons depending on the type of tasks it handles.

The main criteria for selecting the right algorithm are fairness, speed, low waiting times, and predictable performance.

Scheduling Criteria in OS Scheduling Algorithms

When an operating system decides which process should use the CPU next, it uses specific criteria to make the best choice. These criteria help ensure that the system runs efficiently and fairly.

1. CPU Utilization

This refers to how effectively the CPU is being used. The goal is to keep the CPU busy doing useful work as much as possible, instead of letting it sit idle. A well-performing scheduling algorithm tries to maximize CPU usage, ideally keeping it close to 100%.

2. Throughput

Throughput measures how many processes are completed in a given time. A higher throughput means more work is getting done. Efficient scheduling leads to higher throughput, which is especially important in systems where performance and speed are crucial.

3. Turnaround Time

Turnaround time is the total time taken for a process to complete after it enters the system. This includes all stages: waiting in the queue, getting processed, and finishing. Lower turnaround time means processes are finishing faster, which is better for overall performance.

4. Waiting Time

This is the amount of time a process spends waiting in the ready queue before it gets the CPU. It's the "idle time" for a process. A good scheduling algorithm aims to reduce this waiting time for shorter or more urgent tasks.

5. Response Time

Response time is the time it takes for a process to get its first response after being submitted. It doesn't mean the process is done, just that it started doing something (like outputting a message or beginning a calculation). For interactive systems, fast response time is critical so users don't experience delays.

6. Fairness

Fairness means that no process is denied the CPU for too long. If a process were starved of CPU time, the schedule would be unfair. A fair algorithm keeps any single process from monopolizing the CPU, so overall system performance stays balanced.

Bottom Line

Scheduling criteria guide the OS in choosing the most efficient, fair, and responsive process execution order. By balancing CPU utilization, throughput, turnaround time, waiting time, response time, and fairness, the system ensures smoother multitasking and better overall performance.

Preemptive vs. Non-Preemptive Scheduling

| Feature | Preemptive Scheduling | Non-Preemptive Scheduling |
|---|---|---|
| Who controls the CPU | The OS can pause a running process to assign the CPU to another task | Once a process starts, it keeps the CPU until it finishes or yields voluntarily |
| Responsiveness | Very responsive; good where quick reaction is needed | Less responsive; not ideal for systems needing fast interaction |
| Flexibility | Highly flexible; allows better CPU sharing among tasks | Less flexible; processes run one at a time until completion |
| System overhead | Higher, due to frequent context switching | Low, since switches happen rarely |
| Typical usage | Real-time systems, multitasking operating systems, user-interactive apps | Simpler systems, such as batch jobs or sequential workloads |
| Efficiency | Can improve overall efficiency, especially in multitasking environments | Can be inefficient if one long task blocks the others |
| Complexity | More complex to implement and manage | Easier to implement and manage |

Conclusion

CPU scheduling plays a central role in how an operating system manages multitasking, responsiveness, and overall performance. Each algorithm, whether FCFS, SJF, Round Robin, Priority, MLQ, or MLFQ, offers different strengths depending on workload type, fairness requirements, and system goals.

Knowing how these algorithms work, along with their strengths and weaknesses, helps students understand how real systems balance efficiency, reduced delays, and CPU utilization.

Advice for Learners

Work through the examples by hand: for each algorithm, compute the completion, turnaround, and waiting times, then see how changing arrival order, priorities, or the time quantum affects the results.

Frequently Asked Questions

What are the main types of CPU scheduling algorithms?

The main types are FCFS, SJF, Round Robin, Priority Scheduling, and Multilevel Queue. Each of them operates differently in selecting the next process.

What is the difference between preemptive and non-preemptive scheduling?

Preemptive scheduling can pause a running task to start another. In non-preemptive scheduling, a task runs until it's done or gives up the CPU.

How does the round-robin scheduling algorithm work?

Each process gets a fixed time to run in turns. If it's not done in time, it goes to the back of the line.

What is the Shortest Job First (SJF) scheduling algorithm?

The Shortest Job First scheduling algorithm executes the process that requires the minimum amount of time. Shortest Job First scheduling can be preemptive or non-preemptive, and it is considered a very efficient way of reducing waiting time.

What are the key performance metrics for evaluating CPU scheduling algorithms?

The most crucial metrics are CPU utilization, throughput, waiting time, and system response or turnaround time.

How does Priority Scheduling work in operating systems?

Each task is given a priority, and higher-priority tasks go first. Preemptive variants can switch to a newly arrived higher-priority task, and aging helps low-priority tasks avoid long waits.


Source: NxtWave (CCBP.in)

Original URL: https://www.ccbp.in/blog/articles/cpu-scheduling-algorithms-in-os