Understand Process Scheduling in OS with Simple Examples

Published: 11 Dec 2025 | Reading Time: 6 min read

Introduction

Have you ever thought about how your laptop is able to stream videos, open apps, download files, and update in the background, all at the same time, and still not crash? This would all be impossible if not for one silent hero: process scheduling.

If you are building systems, studying for OS exams, or writing software that requires high performance, knowing how the operating system selects the next process to execute can profoundly impact your work. The scheduling mechanism is at the core of everything: speed, responsiveness, battery life, and even the smoothness of your apps.

In this blog, you'll learn exactly how process scheduling works, why it matters, how algorithms like FCFS, SJN, RR, and MLFQ shape system behavior, and how schedulers coordinate multitasking under the hood. You'll see OS concepts come alive through simple explanations, real-world analogies, and practical insights you can use in projects and interviews.

What is Process Scheduling?

Process scheduling is an essential function of an operating system (OS) that manages how different programs (processes) share the CPU. In a multitasking environment where several processes run concurrently, the operating system must choose which process is granted CPU time and for how long. This keeps the system running smoothly and makes sure that resources are utilized efficiently.

Why Is Process Scheduling Important?

The objectives of process scheduling in an OS mainly revolve around maintaining system efficiency and giving all processes fair access to CPU time.

What is Process Scheduling in an Operating System?

Process scheduling is the method by which the operating system decides which process gets to use the CPU and for how long. Because modern computers run several processes concurrently, the OS makes sure each process is allotted a certain amount of CPU time while overall efficiency is maintained. This is the job of the process manager, which organizes the execution of processes to minimize waiting time and get the most out of the system's performance.

Components of Process Scheduling

1. Scheduler

The scheduler decides which process will be executed next. To decide, it uses certain scheduling algorithms like First-Come, First-Served (FCFS), Shortest Job Next (SJN), or Round Robin so that the decisions are both fair and efficient.

2. Dispatcher

Once the scheduler picks a process, the dispatcher transfers control from the current process to the next one. It handles the actual switch: saving the old process's state, switching to the correct CPU mode, and jumping to the proper location in the newly selected program.

3. Queues in Process Scheduling

Processes are stored in different queues based on their current execution state.

Process States in Scheduling

A process moves through several states during its execution:

  1. New: the process is being created.
  2. Ready: the process is loaded in memory and waiting for CPU time.
  3. Running: the process is currently executing on the CPU.
  4. Waiting (Blocked): the process is waiting for an event such as I/O completion.
  5. Terminated: the process has finished execution.

Operations on Processes

Process scheduling in an operating system is practically dependent on the basic operations that can be done on processes during their lifecycle. Such operations are:

1. Process Creation

2. Process Termination

3. Process Suspension and Reintroduction

4. Parent and Child Relationships

5. Special Process States

Why These Operations Matter in Scheduling

What are the Categories of Scheduling?

1. Non-Preemptive Scheduling

Non-preemptive scheduling is a CPU scheduling method where, once a process starts executing, it continues running until it either completes or voluntarily releases the CPU (e.g., by waiting for I/O). The operating system does not interrupt a running process to switch to another.

Example: First-Come-First-Serve (FCFS)
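
A minimal FCFS sketch in Python (the arrival and burst times are made-up illustrative values) shows how a long first job inflates everyone else's waiting time:

```python
# FCFS: processes run strictly in arrival order; each waits for all earlier ones.
def fcfs_metrics(processes):
    """Return (waiting, turnaround) lists for (arrival, burst) pairs."""
    time = 0
    waiting, turnaround = [], []
    for arrival, burst in sorted(processes, key=lambda p: p[0]):
        time = max(time, arrival)        # CPU may sit idle until the job arrives
        waiting.append(time - arrival)   # time spent in the ready queue
        time += burst                    # non-preemptive: runs to completion
        turnaround.append(time - arrival)
    return waiting, turnaround

# A long first job delays everyone behind it (the convoy effect).
w, t = fcfs_metrics([(0, 10), (1, 2), (2, 2)])
print(w, t)  # [0, 9, 10] [10, 11, 12]
```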

2. Preemptive Scheduling

Preemptive scheduling lets the operating system interrupt a currently running process and assign the CPU to another, typically based on priority or fairness. This improves system responsiveness, especially in real-time or multi-user environments.

Example: Round Robin (RR)

Example: Shortest Remaining Time First (SRTF)
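
The preemptive idea can be sketched with a tiny SRTF simulator in Python (the process names and times are hypothetical). At every tick the job with the least remaining time runs, so a newly arrived short job preempts a long one:

```python
import heapq

def srtf(processes):
    """Simulate Shortest Remaining Time First for (pid, arrival, burst)
    triples. Runs one time unit per step; returns completion time per pid."""
    events = sorted(processes, key=lambda p: p[1])
    ready = []                      # min-heap of (remaining_time, pid)
    done = {}
    time, i = 0, 0
    while i < len(events) or ready:
        # Admit every process that has arrived by `time`.
        while i < len(events) and events[i][1] <= time:
            pid, _, burst = events[i]
            heapq.heappush(ready, (burst, pid))
            i += 1
        if not ready:
            time = events[i][1]     # CPU idle: jump to the next arrival
            continue
        rem, pid = heapq.heappop(ready)
        time += 1                   # run the shortest job for one tick
        if rem - 1 == 0:
            done[pid] = time
        else:
            heapq.heappush(ready, (rem - 1, pid))
    return done

# P2 preempts P1 because its burst (2) is shorter than P1's remainder (7).
print(srtf([("P1", 0, 8), ("P2", 1, 2)]))  # {'P2': 3, 'P1': 10}
```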

Differences Between Non-Preemptive and Preemptive Scheduling

| Feature | Non-Preemptive Scheduling | Preemptive Scheduling |
|---|---|---|
| Process Interruption | A running process cannot be interrupted; it continues until it finishes or voluntarily releases the CPU. | A running process may be forcibly stopped so the CPU can be given to a higher-priority or more urgent process. |
| Overhead | Low, because the OS does not need to perform frequent context switching. | Higher, due to the frequent context switching required when processes are interrupted and resumed. |
| Risk of Starvation | Possible: shorter or lower-priority processes may wait indefinitely behind long-running tasks. | Minimized: the scheduler can interrupt tasks and allocate CPU time based on priority or fairness. |
| System Responsiveness | Lower: once a process starts execution, other processes must wait until it completes. | Higher: the OS can quickly switch to more urgent tasks, making the system feel faster and more interactive. |

What are Scheduling Queues in an Operating System?

Scheduling queues are the core of the whole mechanism. They guarantee that processes are handled in a regulated way, which raises the efficiency of the entire system. There are three principal scheduling queues:

1. Job Queue

This queue contains all processes in the system, including those waiting for memory allocation before they can execute. A newly created process is placed in the job queue first.

2. Ready Queue

This queue contains processes that are loaded into main memory (RAM) and are ready to execute but are waiting for CPU time. The CPU scheduler selects processes from the ready queue and assigns them CPU resources.

3. Device Queue

When a process needs to perform an input/output (I/O) operation, it moves to a device queue. Each I/O device (such as a printer, disk, or keyboard) has its own queue for processes waiting to use it.

Process Scheduling Strategies

To decide the order in which processes are executed, operating systems use different scheduling strategies. These strategies allocate CPU time based on priority, fairness, and overall system performance.

1. First-In-First-Out (FIFO)

Processes are executed in the exact sequence of their arrival; the oldest process in the queue is given CPU time first. While this method is straightforward, it can result in long waiting times for processes that arrive later.

2. Priority Scheduling

Each process is assigned a priority level based on importance or urgency. Processes with higher priority are executed before lower-priority ones. If priorities are not managed well, low-priority tasks may be delayed indefinitely (a problem known as starvation).

3. Round Robin (RR)

Each process is given a fixed time slice (also called a time quantum) to execute. After its time expires, the process moves to the back of the ready queue, and the next process gets CPU time. This ensures fairness and prevents any single process from monopolizing the CPU.

Types of Process Schedulers

Process schedulers manage how processes move through a system, ensuring efficient CPU usage and system responsiveness. There are three main types: the long-term, short-term, and medium-term schedulers. Each plays a vital role in process management.

1. Long-Term Scheduler (Job Scheduler)

The long-term scheduler, or job scheduler, controls the degree of multiprogramming in a system. It selects which processes should be loaded into memory for execution. Because it works at a higher level, it runs infrequently, typically when a new process is created or when system resources are freed up.

One of its essential functions is maintaining a balance between CPU-bound and I/O-bound processes. A well-balanced mix guarantees that the CPU remains efficiently utilized while preventing bottlenecks in input and output operations.

If too many CPU-bound processes are loaded, the CPU may become overwhelmed. On the other hand, if there are too many I/O-bound processes, the CPU may remain idle, waiting for input/output operations to complete.

2. Short-Term Scheduler (CPU Scheduler)

The short-term scheduler, or CPU scheduler, is responsible for selecting which process from the ready queue should be assigned to the CPU for execution. Unlike the long-term scheduler, it operates frequently, often every few milliseconds, so that processes are executed promptly.

This component is central to keeping the system fast and efficient. By deciding how CPU time is distributed, it guarantees that high-priority tasks run without delay while all processes are still treated fairly. The scheduling algorithm in use, such as First-Come, First-Served (FCFS), Round Robin, or Priority Scheduling, determines which process gets CPU time next.

3. Medium-Term Scheduler

The medium-term scheduler controls process swapping. It manages the movement of processes between memory and disk to make the best use of the available resources. It is especially useful in memory-constrained environments, temporarily swapping processes out to disk and bringing them back when they are needed again.

This scheduler runs periodically and is key to preventing thrashing, a condition in which overly frequent swapping slows the system down drastically. It keeps the system running at a steady pace and makes sure that foreground processes get sufficient resources to continue operating effectively.

Comparison of Process Schedulers

Process scheduling relies on three main schedulers, the Long-Term Scheduler, Short-Term Scheduler, and Medium-Term Scheduler, which control different stages of process execution. Although they work together to keep the system running efficiently, each has a specific role relating to CPU performance, multiprogramming levels, and the balance of CPU-bound and I/O-bound processes.

| Feature | Long-Term Scheduler (Job Scheduler) | Short-Term Scheduler (CPU Scheduler) | Medium-Term Scheduler (Swapping Scheduler) |
|---|---|---|---|
| Main Role | Decides which processes are admitted into the system for execution, controlling the degree of multiprogramming. | Selects which ready process the CPU executes next, ensuring fast and efficient task switching. | Manages swapping by temporarily removing processes from memory and bringing them back when resources are available. |
| Frequency of Execution | Runs infrequently, only when new jobs enter the system. | Runs very frequently, often every few milliseconds, constantly selecting the next process for the CPU. | Runs periodically, when memory needs to be optimized or the system is under heavy load. |
| Key Function | Balances the mix of CPU-bound and I/O-bound processes to maintain smooth overall system performance. | Allocates CPU time using scheduling algorithms to maximize responsiveness and CPU efficiency. | Reduces memory pressure by swapping processes in and out, preventing thrashing and maintaining stability. |
| Impact on System Performance | Decisions affect long-term stability and overall multiprogramming levels. | Decisions directly impact CPU performance, user responsiveness, and the context-switch rate. | Decisions improve memory utilization and ensure active processes have enough resources to run efficiently. |
| Relation to Preemptive vs Non-Preemptive Scheduling | Not directly involved in preemption decisions; operates at a higher level. | Heavily involved in preemptive scheduling; decides when to interrupt a running process. | Not involved in preemption decisions, but influences process availability through swapping. |
| Type of Processes Managed | All new processes entering the system before they are ready to run. | Only the processes in the ready queue that are prepared to execute. | Suspended processes that have been removed from memory due to resource constraints. |
| Advantage | Prevents system overload by controlling how many processes can enter the ready queue at once. | Improves responsiveness by ensuring the CPU always works on the most appropriate process. | Optimizes memory usage and prevents thrashing during heavy multitasking or virtual machine operations. |
| Limitation | Poor decisions can admit too many CPU-bound or I/O-bound tasks, reducing system efficiency. | Frequent context switching can increase overhead and reduce overall throughput. | Swapping creates latency and can slow down the system if performed too often. |
| Used In | Job scheduling systems and environments requiring controlled multiprogramming. | Interactive systems, real-time computing, and environments requiring rapid task switching. | Virtual memory systems, virtual machines, and systems with limited physical memory. |

Process Scheduling Criteria and Performance

The performance of process scheduling in OS is evaluated by several essential criteria. These criteria are tools that show how effectively a scheduling algorithm manages the system's resources and satisfies the users' needs:

1. CPU Utilization

The percentage of time the CPU spends doing useful work. A good scheduler keeps the CPU as busy as possible.

2. Throughput

The number of processes completed per unit of time.

3. Turnaround Time

The total time from a process's submission to its completion.

4. Waiting Time

The total time a process spends in the ready queue waiting for the CPU.

5. Response Time

The time from submission until the process produces its first response; this matters most in interactive systems.

6. Fairness

Every process should receive a reasonable share of CPU time, and none should be starved.

How Criteria Relate to Algorithms

In practice, FCFS favors simplicity at the cost of waiting time, SJN minimizes average waiting time, Round Robin optimizes response time and fairness, and priority-based schemes favor urgent work at some risk of starvation.

Why These Criteria Matter

A good scheduling algorithm balances these criteria to maximize efficiency, minimize delays, and provide a fair and responsive system for all users.
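
Given a finished schedule, these criteria are straightforward to compute. A small Python sketch (the job records and time units are hypothetical illustrative values):

```python
def criteria(jobs):
    """jobs: {pid: (arrival, burst, first_run, completion)} in abstract
    time units. Returns the averaged scheduling criteria defined above."""
    n = len(jobs)
    turnaround = [c - a for a, b, f, c in jobs.values()]      # completion - arrival
    waiting = [c - a - b for a, b, f, c in jobs.values()]     # turnaround - burst
    response = [f - a for a, b, f, c in jobs.values()]        # first run - arrival
    makespan = max(c for *_, c in jobs.values())
    return {
        "throughput": n / makespan,          # completed jobs per time unit
        "avg_turnaround": sum(turnaround) / n,
        "avg_waiting": sum(waiting) / n,
        "avg_response": sum(response) / n,
    }

# Two jobs run back to back under FCFS: P1 occupies [0, 5), P2 occupies [5, 8).
m = criteria({"P1": (0, 5, 0, 5), "P2": (1, 3, 5, 8)})
print(m)  # throughput 0.25, avg_turnaround 6.0, avg_waiting 2.0, avg_response 2.0
```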

Common CPU Scheduling Algorithms

Process Scheduling in OS is dependent on various algorithms that are designed to determine how processes are to be given CPU time in order to have a smooth and efficient performance. The following are some of the most commonly used scheduling algorithms:

1. First-Come, First-Served (FCFS)

This algorithm executes processes strictly in the order they arrive, much like a line at a ticket counter. It is simple, easy to implement, and works reasonably well when tasks have roughly equal execution times. However, shorter tasks can get stuck behind longer ones, causing unnecessary delays, a problem known as the convoy effect.

2. Shortest Job Next (SJN)

Also called Shortest Job First (SJF), this approach prioritizes the tasks that require the least processing time, which minimizes overall waiting time and makes it an efficient scheduling technique. Its biggest limitation is that it requires knowing each task's execution time in advance, which is rarely possible outside theoretical scenarios.

3. Round Robin (RR)

With this technique, each process is allotted the CPU for a fixed time slice called a time quantum, after which it moves to the end of the queue. This ensures that no single process monopolizes the CPU for long, making RR well suited to multitasking. However, if the time slice is too short, the frequent switching between tasks (context switching) can erode efficiency.
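
A toy Round Robin simulator in Python (hypothetical burst times, all processes arriving at time 0) makes the quantum's effect concrete:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for {pid: burst_time}; all jobs arrive at
    time 0. Returns the completion time of each pid."""
    queue = deque(bursts.items())
    time, done = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one time slice
        time += run
        if remaining - run == 0:
            done[pid] = time
        else:
            queue.append((pid, remaining - run))  # back of the ready queue
    return done

# With quantum=2, A and B interleave instead of A monopolizing the CPU.
print(round_robin({"A": 5, "B": 3}, quantum=2))  # {'B': 7, 'A': 8}
```

Note how the quantum tunes behavior: a very large quantum makes this degenerate into FCFS, while a tiny one multiplies the number of switches.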

4. Priority Scheduling

Here, each task is assigned a priority, and higher-priority tasks execute before lower-priority ones. This keeps critical operations running without delay. However, if lower-priority tasks are constantly bypassed by higher-priority ones, they may be delayed indefinitely, a problem known as starvation. To counter this, priority ageing techniques gradually raise the priority of long-waiting tasks.
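
One way to sketch priority ageing in Python (the job names, priority values, and ageing rate are illustrative, not a real OS policy):

```python
def schedule_with_aging(jobs, rounds):
    """jobs: {pid: base_priority}, lower number = more urgent. Each round
    the most urgent job runs once; every job left waiting has its priority
    value reduced by 1 (ageing), so long waiters cannot starve forever.
    Returns the execution order."""
    order = []
    prio = dict(jobs)
    for _ in range(rounds):
        pid = min(prio, key=prio.get)   # most urgent job wins the CPU
        order.append(pid)
        for other in prio:
            if other != pid:
                prio[other] -= 1        # ageing: waiting raises urgency
        prio[pid] = jobs[pid]           # chosen job resets to base priority
    return order

# "bg" starts far less urgent but is eventually scheduled thanks to ageing.
print(schedule_with_aging({"fg": 1, "bg": 4}, rounds=5))  # ['fg', 'fg', 'fg', 'fg', 'bg']
```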

5. Multilevel Feedback Queue (MLFQ)

This powerful but complicated algorithm places jobs in multiple queues of different priority levels and adjusts their priority based on how long they have run. Shorter or interactive tasks are kept at higher priority, while long-running ones gradually sink to lower-priority queues. The model balances fairness and efficiency well, but it is difficult to implement because of its sophisticated management requirements.
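
A toy three-level MLFQ in Python (the queue count and quanta are arbitrary illustrative choices, not a real kernel's values) shows demotion in action:

```python
from collections import deque

def mlfq(bursts, quanta=(1, 2, 4)):
    """Toy 3-level MLFQ: every job starts in the top queue; a job that
    uses up its whole time slice is demoted one level. All jobs arrive
    at time 0. bursts: {pid: total_cpu_time}. Returns finishing order."""
    queues = [deque(), deque(), deque()]
    for pid, burst in bursts.items():
        queues[0].append((pid, burst))
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        pid, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        remaining -= run
        if remaining == 0:
            finished.append(pid)
        else:
            # Used the full quantum: demote (the bottom queue is plain round robin).
            queues[min(level + 1, 2)].append((pid, remaining))
    return finished

# The short interactive-style job finishes first; the CPU hog sinks down.
print(mlfq({"hog": 6, "short": 1}))  # ['short', 'hog']
```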

Comparison of CPU Scheduling Algorithms

| Algorithm | Fairness | Responsiveness | Complexity | Starvation Risk |
|---|---|---|---|---|
| FCFS | Low | Low | Low | High |
| SJN | Medium | Medium | Medium | Low |
| RR | High | High | Medium | Low |
| Priority | Low | High | Medium | High |
| MLFQ | High | High | High | Low |

What is Context Switching?

Context switching is the procedure by which the operating system pauses one running process and resumes another. It is the mechanism that makes multitasking possible. Whenever the CPU shifts its attention from one process to another, the OS must save the current process state and load the next one. This entire activity is called a context switch.

Context switching happens frequently in modern operating systems, especially in preemptive scheduling, where processes can be interrupted to ensure fairness, responsiveness, and efficient CPU utilization.

Why Does Context Switching Happen?

Context switching can occur due to several reasons:

1. Hardware Interrupts

Devices such as keyboards, disks, or network cards send interrupts that require the CPU to give them attention immediately. The CPU, therefore, ceases the work it is doing, records its state, and processes the interrupt. After it is done, it goes back to normal execution.

2. Multitasking and Scheduling Decisions

Schedulers may decide to suspend a process so that another one, often one with a higher priority or shorter burst time, can run. This is key to preemptive scheduling strategies like Round Robin and SRTF.

3. Switching Between User Mode and Kernel Mode

When a process requests access to system resources (like memory or I/O), it switches to kernel mode. To do this, the CPU state must be saved and restored, which is part of the context switching overhead.

Steps Involved in Context Switching

Context switching involves a series of steps to ensure a smooth transition between processes:

  1. Saving the Current Process State: The CPU saves information about the running process, including its program counter, register values, and other necessary data. This ensures that the process can resume from the exact point it was paused.
  2. Loading the Next Process State: The state of the next scheduled process is recovered from memory and loaded into the CPU, allowing it to continue execution.
  3. Updating Scheduling Queues: The operating system updates scheduling information, moving the paused process to the appropriate queue (such as the ready queue or waiting queue) and selecting the next process to run.
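
The three steps above can be sketched in Python with a minimal, purely illustrative process control block (the fields are simplified stand-ins for real CPU state):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal stand-in for a process control block: just the state the
    OS must save and restore across a switch (illustrative fields only)."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, nxt: PCB, cpu: dict):
    # 1. Save the running process's CPU state into its PCB.
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    # 2. Load the next process's saved state back onto the CPU.
    cpu["pc"] = nxt.program_counter
    cpu["regs"] = dict(nxt.registers)
    # 3. (Queue updates would happen here: current moves to a ready/waiting queue.)
    return nxt  # nxt is now the running process

cpu = {"pc": 120, "regs": {"r0": 7}}
p1, p2 = PCB(1), PCB(2, program_counter=300, registers={"r0": 42})
running = context_switch(p1, p2, cpu)
print(running.pid, cpu["pc"])  # 2 300
```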

Why Context Switching Matters

Context switching is essential for multitasking, preemptive scheduling, prompt interrupt handling, and keeping interactive systems responsive.

However, it is not free; each context switch adds overhead and slows down the system slightly. Efficient schedulers therefore try to avoid unnecessary switches to preserve performance.

Bottom Line

Context switching is the bridge between scheduling decisions and actual CPU execution. It enables multitasking, supports preemptive scheduling, and keeps systems responsive, but comes with performance overhead, making efficient scheduling essential.

Conclusion

Process scheduling in OS is the main function that permits multiple processes to run smoothly by effectively managing CPU time. The OS, through various scheduling strategies and coordination among different schedulers, ensures fairness, responsiveness, and resource utilization at the optimum level. In a stable and predictable multitasking environment, concepts such as context switching, ready queues, and preemption are of vital importance. Understanding these mechanisms equips learners with a solid base for systems programming, OS design, and technical interviews.

Key Points to Remember

  1. Choosing a scheduling algorithm is a trade-off; improving waiting time may worsen response time, and increasing fairness may reduce throughput. No single algorithm is perfect for all workloads.
  2. CPU-bound and I/O-bound processes must be balanced; otherwise, the CPU may sit idle or become overloaded. Good schedulers maintain this balance automatically.
  3. Starvation and convoy effects are real system issues, not just theory; poor scheduling choices can slow down an entire system even if the hardware is powerful.
  4. Time quantum in Round Robin determines system behavior; too small increases context switching overhead, too large makes RR behave like FCFS.
  5. Scheduling influences energy usage and battery life in modern systems; unnecessary context switching or poor CPU utilization drains power significantly.

Frequently Asked Questions

1. What is process scheduling in an operating system?

Process scheduling is the OS function that decides the order in which processes run on the CPU. To support multitasking, it tracks process states and applies scheduling algorithms such as Round Robin or Priority Scheduling.

2. What are the main types of schedulers in an OS?

The three main types are the long-term scheduler (job scheduler), which controls how many processes are admitted into the system; the short-term scheduler (CPU scheduler), which selects the next ready process to run; and the medium-term scheduler, which swaps processes in and out of memory.

3. What are scheduling queues, and why are they important?

Scheduling queues are groups of processes that share the same state: the job queue holds all processes in the system, the ready queue holds processes waiting for CPU time, and device queues hold processes waiting for I/O. They keep execution orderly and resources efficiently used.

4. What is context switching, and how does it affect performance?

Context switching happens when the CPU switches between processes, saving and loading their states. It occurs constantly in multitasking, and because each switch carries overhead, excessive switching can degrade performance.

5. How does preemptive scheduling differ from non-preemptive scheduling?

In non-preemptive scheduling, a running process keeps the CPU until it finishes or voluntarily releases it. In preemptive scheduling, the OS can interrupt a running process and hand the CPU to a higher-priority or more urgent one, at the cost of extra context-switching overhead.

6. What are the advantages of process scheduling?

Process scheduling increases CPU efficiency, guarantees that all processes are treated fairly, improves the system's responsiveness, and keeps the CPU busy most of the time, reducing idle time.

7. What challenges exist in designing scheduling algorithms?

The challenges include balancing fairness with efficiency, limiting the overhead created by context switching, preventing starvation of low-priority processes, and coping with unpredictable real-time workloads. The Multilevel Feedback Queue is one advanced approach to these problems.


Source: NxtWave (CCBP.in)

Original URL: https://www.ccbp.in/blog/articles/process-scheduling-in-os