What are CPU Scheduling Algorithms in OS?
In a multitasking operating system, several programs (called processes) want to use the CPU at the same time. However, since a single CPU core can execute only one process at a time, the operating system needs a way to decide which process runs next. This decision-making is handled by CPU scheduling algorithms.
CPU scheduling algorithms in OS are rules or strategies used by the system’s scheduler to manage the order in which processes are allowed to use the CPU. The aim is to make the system efficient, responsive, and fair.
Key Terminologies in CPU Scheduling
Understanding CPU scheduling requires knowing a handful of key terms. These definitions are the foundation for understanding how processes are managed and measured in the operating system:
- Arrival Time: The moment a process enters the ready queue and begins waiting for its turn to use the CPU.
- Burst Time (Execution Time): The total time a process requires on the CPU to complete its execution. Sometimes called execution time or running time.
- Completion Time: The exact time when a process finishes all its CPU execution.
- Turnaround Time: The total time taken from when a process arrives in the ready queue to when it completes execution.
  Formula: Turnaround Time = Completion Time – Arrival Time
- Waiting Time: The amount of time a process spends waiting in the ready queue before it gets CPU time.
  Formula: Waiting Time = Turnaround Time – Burst Time
- Response Time: The time from when a process enters the ready queue until it gets the CPU for the first time. This is especially important in interactive systems.
- Priority: A value that indicates how important a process is. In priority-based scheduling, higher-priority processes are chosen first.
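The two formulas above can be sketched as small helper functions. This is an illustrative example; the timing values are hypothetical, chosen only to demonstrate the arithmetic.

```python
# Sketch of the two scheduling formulas defined above.
# All timing values here are hypothetical example numbers.

def turnaround_time(completion, arrival):
    """Turnaround Time = Completion Time - Arrival Time."""
    return completion - arrival

def waiting_time(turnaround, burst):
    """Waiting Time = Turnaround Time - Burst Time."""
    return turnaround - burst

arrival, burst, completion = 0, 5, 9   # hypothetical process timings
tat = turnaround_time(completion, arrival)   # 9 - 0 = 9
wt = waiting_time(tat, burst)                # 9 - 5 = 4
print(f"Turnaround: {tat}, Waiting: {wt}")
```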
How do CPU Scheduling Algorithms in OS Work?
To understand how the CPU decides which task (or process) to run, it's important to first know how these processes move through different stages during their life inside the computer. Here is how it works:
- Pick a Process: The CPU scheduler looks at the ready queue and selects one process to run.
- Switching Control (Dispatching): A small part of the system, called the dispatcher, hands control of the CPU to the chosen process. If another process was running before, this switch is called a context switch.
- Run the Process: The process runs for a certain amount of time or until it finishes, whichever comes first.
- Return to Scheduler: After the process finishes or its time is up, the control goes back to the scheduler to pick the next process.
Process Life Cycle
Every process (think of it as a running program) goes through several steps from the moment it is created to the moment it finishes. Here are the main stages:
- New: The process has just been created and is getting set up.
- Ready: The process is fully prepared to run but is waiting for its turn on the CPU.
- Running: The CPU is actively executing the process.
- Waiting: The process is paused because it's waiting for something else to finish, such as reading from a file or user input.
- Terminated: The process has finished running and is now closed.
Example
Let’s say we have two processes:
- Process A needs the CPU for 5 milliseconds.
- Process B is waiting in the ready queue.
Working:
- The scheduler selects Process A and gives it the CPU.
- Process A runs for 5ms (or until it’s interrupted).
- Then the CPU is returned to the scheduler.
- Now, the scheduler picks Process B to run next.
Types of CPU Scheduling Algorithms
CPU Scheduling Algorithms in OS are how an operating system decides which process gets to use the CPU next. This becomes important when many tasks are waiting to run, and the CPU can only handle one at a time. Scheduling helps ensure efficient use of the processor, improves performance, and keeps all processes running smoothly. CPU scheduling in operating systems falls into two main categories:
1. Non-Preemptive Scheduling
In non-preemptive scheduling, once a process is given the CPU, it runs until it either terminates or voluntarily gives up control (for example, while waiting for input). The operating system does not interrupt it.
Example:
Think of it as if you are at the barber shop. Once a customer has been selected to get a haircut, no one else is allowed to take their place until the haircut is finished.
Advantages:
- It is very easy to implement and understand.
- There is no overhead resulting from switching between processes.
- Execution is predictable and stable.
Disadvantages:
- This can lead to long waiting times for shorter processes.
- The system is less flexible; a slow or long process can hinder others.
2. Preemptive Scheduling
In preemptive scheduling, the operating system scheduling algorithms can interrupt a running process and switch to another. This allows better sharing of CPU time among all active processes.
Example:
Think of a customer at a bank teller who gets interrupted when someone with urgent work arrives. The teller pauses the current work to help the urgent customer first.
Advantages:
- More responsive to high-priority or short tasks.
- Allows better use of CPU time.
- Improves overall system performance in multi-user environments.
Disadvantages:
- More complex to implement.
- Frequent switching can cause overhead and reduce efficiency.
- This may lead to inconsistency if not handled properly.
Important CPU Scheduling Algorithms in OS
1. First-come, First-Served (FCFS) Scheduling
First-Come, First-Served (FCFS) is one of the most basic CPU scheduling algorithms in an operating system. Just like the name suggests, the CPU processes tasks in the exact order they arrive. It doesn’t interrupt a running process until it's fully completed.
Once a process reaches the front of the queue, it gets the CPU, and no other process can take its place until it's done.
How it works
Let’s say we have three processes that arrive at different times and need different amounts of CPU time (called burst time):
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 5          |
| P2      | 1            | 3          |
| P3      | 2            | 8          |
Execution Order:
Since P1 arrives first, it runs first. P2 is next, followed by P3. So the order of execution is: P1 → P2 → P3
Advantages of FCFS:
- Simple and Easy to Use: It’s one of the easiest scheduling methods to understand and implement.
- Fair Based on Arrival Time: Processes are handled in the order they come in, which feels fair to users.
Disadvantages of FCFS:
- Convoy Effect: This happens when a long operation is at the front, so all the short ones have to wait, even if they could be finished quickly. As a result, delays are caused for everyone else.
- Inefficient Average Waiting Time: Since shorter tasks can get stuck behind longer ones, the overall waiting time tends to be higher.
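FCFS is simple enough to simulate in a few lines. The sketch below runs the processes from the table above in arrival order and computes each one's completion, turnaround, and waiting times using the formulas defined earlier.

```python
# Minimal FCFS simulation (non-preemptive, arrival order).
# Each process is (name, arrival_time, burst_time).

def fcfs(processes):
    """Return name -> (completion, turnaround, waiting) for each process."""
    time = 0
    results = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU may idle until the process arrives
        time += burst               # run to completion, uninterrupted
        turnaround = time - arrival
        waiting = turnaround - burst
        results[name] = (time, turnaround, waiting)
    return results

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
for name, (ct, tat, wt) in fcfs(procs).items():
    print(name, ct, tat, wt)   # e.g. P3 completes at 16, waits 6
```

Note how P3's waiting time grows because it sits behind P1 and P2, which is exactly the convoy effect described above.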
2. Shortest Job First (SJF) Scheduling
Shortest Job First (SJF) scheduling picks the process that needs the least amount of CPU time next. In simple terms, it gives priority to the task that will finish the fastest. As one of the key CPU Scheduling Algorithms in OS, SJF ensures efficiency by selecting, among multiple waiting processes, the one with the shortest burst time (CPU time needed) first.
- In the non-preemptive version, once a process starts, it runs until it's done.
- In the preemptive version (SRTF), if a new process arrives with a shorter burst time than the currently running one, it interrupts and takes over.
How It Works
Here's an example with three processes and their burst times:
| Process | Burst Time |
|---------|------------|
| P1      | 6          |
| P2      | 2          |
| P3      | 4          |
Execution Order:
Since P2 has the shortest burst time, it runs first. Then comes P3, and finally P1. So the execution sequence is: P2 → P3 → P1
Advantages of SJF
- Very Efficient: It gives the best average waiting time compared to many other algorithms.
- High Throughput: More processes can be finished in less time, especially if many short tasks are waiting.
Disadvantages of SJF
- Starvation Risk: Long processes may keep getting pushed to the back if shorter ones keep arriving. This means some tasks might never get executed.
- Burst Time is Hard to Predict: SJF needs to know how long each process will take before it starts, which is often not realistic in practice.
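The non-preemptive version can be sketched as a simple sort by burst time. This assumes all processes arrive at time 0, matching the table above, and that burst times are known in advance (the unrealistic assumption the disadvantage above points out).

```python
# Non-preemptive SJF sketch: all processes assumed present at time 0.

def sjf(bursts):
    """bursts: name -> burst time. Returns (execution order, avg waiting time)."""
    order = sorted(bursts, key=bursts.get)   # shortest burst first
    time, waiting = 0, {}
    for name in order:
        waiting[name] = time    # time spent waiting before this job starts
        time += bursts[name]
    avg_wait = sum(waiting.values()) / len(waiting)
    return order, avg_wait

order, avg = sjf({"P1": 6, "P2": 2, "P3": 4})
print(order)   # ['P2', 'P3', 'P1']; average wait = (0 + 2 + 6) / 3
```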
3. Priority Scheduling
Priority Scheduling is a system where every process gets a priority level. As part of CPU scheduling in operating system design, the CPU selects the process with the highest priority to run first. A lower numeric value means a higher priority, so a process with priority 1 will be chosen over one with priority 3. This system is great for situations where certain tasks are more urgent or important than others.
How It Works
Let’s look at a set of processes with their priority levels and burst times (how long they need the CPU):
| Process | Priority | Burst Time |
|---------|----------|------------|
| P1      | 2        | 5          |
| P2      | 1        | 3          |
| P3      | 3        | 4          |
Execution Order:
P2 has the highest priority (priority 1), so it runs first. Next comes P1 (priority 2), then P3 (priority 3). So the order is: P2 → P1 → P3
Advantages of Priority Scheduling
- Ideal for Time-Sensitive Tasks: Critical tasks can be assigned a higher priority so they’re handled right away.
- Customizable: You can adjust priority levels based on task importance, user roles, or other factors.
Disadvantages of Priority Scheduling
- Starvation Risk: Low-priority processes may never get CPU time if higher-priority ones keep arriving; this is known as starvation.
- Needs Aging to Fix Starvation: To avoid starvation, a technique called aging is used, where the priority of waiting processes gradually increases over time so they eventually get executed.
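A non-preemptive priority scheduler with aging can be sketched as below, using the processes from the table above (lower number = higher priority). The specific aging rule here, raising a waiting process's priority by one each scheduling round, is a hypothetical choice for illustration; real systems tune this differently.

```python
# Priority scheduling sketch with simple aging (lower value = higher priority).
# The aging rule (priority improves by 1 per round waited) is illustrative.

def priority_schedule(processes):
    """processes: iterable of (name, priority, burst). Returns run order."""
    pending = [list(p) for p in processes]
    order = []
    while pending:
        pending.sort(key=lambda p: p[1])   # highest priority = lowest value
        name, _, _ = pending.pop(0)
        order.append(name)
        for p in pending:                  # aging: waiting improves priority
            p[1] = max(1, p[1] - 1)
    return order

print(priority_schedule([("P1", 2, 5), ("P2", 1, 3), ("P3", 3, 4)]))
# ['P2', 'P1', 'P3']
```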
4. Round Robin (RR) Scheduling
Round Robin is a CPU scheduling algorithm created to give every process a fair share of the CPU. Instead of letting one process run until it’s finished, each process is given a fixed amount of CPU time, called a time quantum or time slice.
If a process doesn't finish during its allotted time, it's paused, and the CPU moves on to the next process in the queue. The paused process then waits its turn to resume in the next round.
How It Works
Let’s take an example where the time quantum is 4 milliseconds (ms). Three processes enter the system, each with different CPU burst times:
| Process | Burst Time |
|---------|------------|
| P1      | 6 ms       |
| P2      | 3 ms       |
| P3      | 5 ms       |
Execution Order:
- P1 gets the CPU first and runs for 4 ms (still needs 2 ms more)
- P2 runs next for 3 ms (finishes execution)
- P3 runs for 4 ms (still needs 1 ms more)
- P1 gets its remaining 2 ms
- P3 finishes with its remaining 1 ms
- Final Execution: P1 (4) → P2 (3) → P3 (4) → P1 (2) → P3 (1)
Advantages of Round Robin
- Fair for All Processes: Every process gets an equal opportunity to use the CPU.
- Prevents Starvation: No process gets ignored for too long, unlike some other os scheduling algorithms.
- Responsive: Good for systems that require quick responses from many users, like interactive or time-sharing systems.
Disadvantages of Round Robin
- Quantum Size Matters: If the time quantum is too short, too many context switches slow down the system. If it’s too long, it behaves like First-Come, First-Served.
- High Context Switching Overhead: Frequently pausing and resuming processes can waste CPU time.
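The rotation described above can be sketched with a simple queue. This simulation reproduces the slice-by-slice trace from the example, using the same processes and a 4 ms quantum.

```python
# Round Robin sketch: fixed quantum, unfinished jobs rejoin the back of the queue.
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns the execution trace
    as (name, slice_length) tuples."""
    queue = deque(list(p) for p in processes)
    trace = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run a full quantum, or until done
        trace.append((name, run))
        remaining -= run
        if remaining > 0:               # not finished: back of the line
            queue.append([name, remaining])
    return trace

print(round_robin([("P1", 6), ("P2", 3), ("P3", 5)], quantum=4))
# [('P1', 4), ('P2', 3), ('P3', 4), ('P1', 2), ('P3', 1)]
```

The output matches the final execution order in the example: P1 (4) → P2 (3) → P3 (4) → P1 (2) → P3 (1).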
5. Multilevel Queue Scheduling (MLQ)
Multilevel queue scheduling in OS is a strategy where processes are sorted into different groups or “queues” based on their nature or priority level. Each queue can use a different scheduling algorithm that's best suited for the type of processes it holds.
Think of it like having separate waiting lines for different types of tasks: critical system tasks, user-interactive programs, and background batch jobs, where each line follows its own rules, a concept central to CPU scheduling in operating system design.
How It Works
Processes are classified into separate queues according to their type and priority. Here is a typical three-queue layout:
System Queue:
- Highest priority
- Handles critical system-level tasks (like the operating system’s own processes)
- Uses preemptive scheduling for speed and responsiveness
Interactive Queue:
- Medium priority
- Includes user-facing applications that need quick responses (e.g., typing in a text editor or using a web browser)
- Uses the Round Robin scheduling algorithm for fairness
Batch Queue:
- Lowest priority
- For long, background jobs that don't need immediate attention (like backups or large calculations)
- Uses First-Come, First-Served (FCFS)
Advantages of MLQ
- Custom Scheduling for Each Process Type: Each queue can use the most suitable algorithm for the type of work it handles.
- Well-Organized and Efficient: Separating processes by type leads to better system structure and more consistent performance.
Disadvantages of MLQ
- Risk of Starvation: If higher-priority queues stay busy, low-priority queues (like the batch queue) rarely get CPU time.
- More Complex to Implement: The management of various queues and different algorithms necessitates careful configuration and adjustment.
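The "always serve the higher queue first" rule can be sketched as a small dispatcher. The queue names and process names below are illustrative placeholders, not part of any real OS API.

```python
# Multilevel queue dispatcher sketch: strict priority across three queues.
# Queue names and contents are hypothetical examples.
from collections import deque

queues = {                      # listed highest priority first
    "system":      deque(),
    "interactive": deque(["editor", "browser"]),
    "batch":       deque(["backup"]),
}

def pick_next():
    """Return the next process, always draining higher-priority queues first."""
    for level in ("system", "interactive", "batch"):
        if queues[level]:
            return queues[level].popleft()
    return None                 # nothing is ready to run

print(pick_next())   # 'editor': system queue is empty, interactive is next
```

This strict ordering is also why the batch queue can starve: "backup" only runs once both higher queues are empty.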
6. Multilevel Feedback Queue (MFQ) Scheduling
The Multilevel Feedback Queue (MFQ) is an advanced CPU scheduling method designed to handle multiple processes efficiently. As one of the key CPU Scheduling Algorithms in OS, it builds upon the Multilevel Queue (MLQ) system, but with a major improvement: processes can move between queues based on how they behave during execution.
This movement helps the system adapt to different process needs, giving preference to interactive or short tasks without completely ignoring long-running ones.
How It Works
- The CPU is divided into multiple priority queues, each with its own scheduling algorithm and time slice (quantum).
- New or short jobs (like interactive tasks) start in a higher-priority queue, where they get faster response times.
- If a process uses too much CPU time (indicating it’s CPU-bound), it’s moved down to a lower-priority queue.
- I/O-bound or quick processes that frequently yield the CPU stay in higher-priority queues.
- Over time, the system "learns" how to handle each process by observing its behavior and adjusting its queue level.
Advantages of MFQ
- Extremely Flexible: Can handle a wide variety of tasks, from quick user commands to long background processes.
- Great for User Experience: Interactive or short tasks get quick responses, which is ideal for modern systems.
Disadvantages of MFQ
- Complicated to Set Up: Requires defining the number of queues, time slices, promotion/demotion rules, and more.
- Needs Careful Tuning: If not configured properly, some processes may be unfairly delayed, or the system may become inefficient.
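The promotion/demotion behavior can be sketched with two queues and different quanta. This is a deliberately minimal model, two levels, demotion only, with hypothetical quanta and process data, whereas real MFQ implementations also promote jobs back up and use more levels.

```python
# Minimal two-level MFQ sketch: a job that exhausts its quantum is demoted.
# Quanta and process data are hypothetical.
from collections import deque

def mlfq(processes, quanta=(2, 4)):
    """processes: list of (name, burst). Level 0 is the high-priority queue.
    Returns the trace as (name, level, slice_length) tuples."""
    levels = [deque(), deque()]
    for p in processes:
        levels[0].append(list(p))            # everyone starts at the top
    trace = []
    while any(levels):
        lvl = 0 if levels[0] else 1          # always serve the top queue first
        name, remaining = levels[lvl].popleft()
        run = min(quanta[lvl], remaining)
        trace.append((name, lvl, run))
        remaining -= run
        if remaining > 0:                    # used its full quantum: demote
            levels[min(lvl + 1, 1)].append([name, remaining])
    return trace

print(mlfq([("A", 1), ("B", 6)]))
# [('A', 0, 1), ('B', 0, 2), ('B', 1, 4)]
```

Short job A finishes inside the top queue's quantum, while CPU-bound job B is demoted and finishes at the lower level with a longer slice, the adaptive behavior described above.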
Comparison and Analysis of Scheduling Algorithms
CPU scheduling algorithms perform differently depending on factors such as system workload, CPU burst times, and the ratio of CPU-bound to I/O-bound jobs. Understanding their comparative strengths and weaknesses helps determine which algorithm is best suited for a particular environment. Below is a detailed analysis of the major scheduling algorithms, focusing on their performance, fairness, complexity, and impact on waiting times and response times.
| Algorithm | How It Works (Explanation) | Strengths (Pros) | Weaknesses (Cons) | Best Use Cases |
|---|---|---|---|---|
| First-Come, First-Served (FCFS) | Runs processes one by one in the order they arrive in the ready queue, giving each control of the CPU until it terminates. | Very simple to understand and implement, and strictly honors arrival order. | Performs poorly when long CPU-bound jobs arrive first, causing short I/O-bound jobs to wait longer and increasing overall waiting time. | Batch-processing systems where predictability is more important than responsiveness. |
| Shortest Job First (SJF) | Selects the process with the shortest CPU burst time, giving priority to tasks that finish quickly. | Provides the minimum average waiting time and maximizes throughput by quickly finishing short jobs. | Long processes can starve, and accurately predicting CPU burst times is rarely possible in real systems. | Workloads where short tasks dominate and efficiency is the primary goal. |
| Shortest Remaining Time First (SRTF) | The preemptive version of SJF: the OS interrupts the current process if a newly arrived process has a shorter remaining burst time. | Improves response time for short tasks and reduces waiting time for interactive processes. | Adds implementation complexity and can repeatedly preempt long CPU-bound jobs, causing starvation. | Systems requiring fast responses to short tasks, such as interactive applications. |
| Priority Scheduling | Assigns each process a priority value; the OS always selects the highest-priority process for execution. | Lets urgent or critical tasks receive immediate CPU attention, improving responsiveness for important jobs. | May starve low-priority processes unless aging techniques gradually increase their priority. | Real-time systems or scenarios where certain tasks must be executed immediately. |
| Round Robin (RR) | Assigns each process a fixed CPU time slice (quantum) and cycles through the ready queue, preempting tasks when their time expires. | Fair, since every process gets an equal share of CPU time, with good response times for interactive jobs. | A time slice that is too small causes excessive context switches, while a large quantum makes it behave like FCFS. | Time-sharing systems, multi-user environments, and interactive applications. |
| Multilevel Queue (MLQ) | Separates processes into several queues based on priority or type (system, interactive, batch), with each queue using a different scheduling algorithm. | Efficient resource allocation, since each queue is optimized for the type of work it handles. | Lower-priority queues may suffer starvation if higher-priority queues remain busy for long periods. | Operating systems that manage a wide range of process types, such as system processes and background jobs. |
| Multilevel Feedback Queue (MLFQ) | Allows processes to move between queues based on their behavior, favoring short and I/O-bound jobs while gradually demoting CPU-bound jobs. | Highly flexible and adaptive, offering excellent responsiveness for interactive tasks and fair CPU usage for long-running processes. | Complex to configure, requiring careful tuning of queue levels, time slices, and promotion/demotion rules. | Modern general-purpose operating systems that require adaptability and fast response times. |
Summary
Each CPU scheduling algorithm has its pros and cons depending on the type of workload. FCFS is easy to understand; SJF and SRTF give the lowest average waiting time; Priority Scheduling handles the most urgent tasks first; Round Robin is the fairest method; and MLFQ is an efficient, adaptive algorithm. The main criteria for selecting the right algorithm are fairness, speed, low waiting times, and predictable performance.
Scheduling Criteria in OS Scheduling Algorithms
When an operating system decides which process should use the CPU next, it uses specific criteria to make the best choice. These criteria help ensure that the system runs efficiently and fairly. Here are the main factors used to evaluate OS scheduling algorithms:
1. CPU Utilization
This refers to how effectively the CPU is being used. The goal is to keep the CPU busy doing useful work as much as possible, instead of letting it sit idle. A well-performing scheduling algorithm tries to maximize CPU usage, ideally keeping it close to 100%.
2. Throughput
Throughput measures how many processes are completed in a given time. A higher throughput means more work is getting done. Efficient scheduling leads to higher throughput, which is especially important in systems where performance and speed are crucial.
3. Turnaround Time
Turnaround time is the total time taken for a process to complete after it enters the system. This includes all stages: waiting in the queue, getting processed, and finishing. Lower turnaround time means processes are finishing faster, which is better for overall performance.
4. Waiting Time
This is the amount of time a process spends waiting in the ready queue before it gets the CPU. It's the "idle time" for a process. A good scheduling algorithm aims to reduce this waiting time for shorter or more urgent tasks.
5. Response Time
Response time is the time it takes for a process to get its first response after being submitted. It doesn't mean the process is done, just that it started doing something (like outputting a message or beginning a calculation). For interactive systems, fast response time is critical so users don’t experience delays.
6. Fairness
Fairness means that no process is denied CPU time for too long. An algorithm that allows a process to starve is considered unfair. A fair algorithm prevents any process from monopolizing the CPU and thus keeps the system's performance balanced.
Bottom Line
Scheduling criteria guide the OS in choosing the most efficient, fair, and responsive process execution order. By balancing CPU utilization, throughput, turnaround time, waiting time, response time, and fairness, the system ensures smoother multitasking and better overall performance.
Preemptive vs. Non-Preemptive Scheduling
| Feature | Preemptive Scheduling | Non-Preemptive Scheduling |
|---|---|---|
| Who Controls the CPU | The operating system (OS) can pause an ongoing process to assign the CPU to another task. | Once a process starts, it keeps the CPU until it finishes or yields voluntarily. |
| Responsiveness | Very responsive; good for systems where quick reaction is needed. | Less responsive; not ideal for systems needing fast interactions. |
| Flexibility | Highly flexible; allows better CPU sharing among multiple tasks. | Less flexible; processes are handled one at a time until completion. |
| System Overhead | Higher overhead due to frequent context switching between tasks. | Low overhead, since switching happens rarely. |
| Typical Usage | Real-time systems, multitasking operating systems, and user-interactive apps. | Simpler systems, such as batch jobs or sequential task processing. |
| Efficiency | Can improve overall efficiency, especially in multitasking environments. | Can cause inefficiencies if a long task blocks others from running. |
| Complexity | More complex to implement and manage. | Easier to implement and manage. |
Conclusion
CPU scheduling plays a central role in how an operating system manages multitasking, responsiveness, and overall performance. Each algorithm, whether FCFS, SJF, Round Robin, Priority, MLQ, or MLFQ, offers different strengths depending on workload type, fairness requirements, and system goals. Understanding these algorithms, along with their strengths and weaknesses, helps students see how real systems balance efficiency, reduced delays, and CPU utilization.
Advice for Learners
- Master the core metrics (CPU utilization, waiting time, turnaround time, and response time) that define how scheduling actually affects system performance.
- Study algorithm behavior under different workloads, especially the difference between CPU-bound and I/O-bound jobs; this builds strong intuition for exam and interview questions.
- Study the real-world trade-offs: faster response times usually mean more overhead, and lower waiting times can increase the risk of starvation. There is no perfect algorithm.
- Practice with Gantt charts and examples to visualize execution order, as this strengthens your problem-solving ability for scheduling scenarios.
- Understand preemptive vs. non-preemptive scheduling thoroughly, because this distinction determines a system's responsiveness, fairness, and complexity in real operating systems.
Frequently Asked Questions
1. What are the main types of CPU scheduling algorithms?
The main types are FCFS, SJF, Round Robin, Priority Scheduling, and Multilevel Queue. Each of them operates differently in selecting the next process.
2. What is the difference between preemptive and non-preemptive scheduling?
Preemptive scheduling can pause a running task to start another. In non-preemptive scheduling, a task runs until it’s done or gives up the CPU.
3. How does the round-robin scheduling algorithm work?
Each process gets a fixed time to run in turns. If it’s not done in time, it goes to the back of the line.
4. What is the Shortest Job First (SJF) scheduling algorithm?
The Shortest Job First scheduling algorithm executes the process that requires the minimum amount of time. Shortest Job First scheduling can be preemptive or non-preemptive, and it is considered a very efficient way of reducing waiting time.
5. What are the key performance metrics for evaluating CPU scheduling algorithms?
The most crucial metrics are CPU utilization, throughput, waiting time, and system response or turnaround time.
6. How does Priority Scheduling work in operating systems?
Each task is given a priority, and higher ones go first. It can switch tasks based on priority to avoid long waits.