
Preemptive Scheduling in OS: Concepts, Types, and Examples

8 Jan 2026
5 min read

What This Blog Covers

  • Explains process scheduling in operating systems and why it is needed
  • Introduces preemptive scheduling and how it differs from non-preemptive scheduling
  • Breaks down how preemptive scheduling works step by step
  • Covers common preemptive scheduling algorithms like Round Robin, SRTF, and Priority Scheduling
  • Uses real-world examples to show where and why preemptive scheduling is used

Introduction

Have you ever wondered how your operating system manages to run several programs simultaneously without experiencing any lag? How does it decide which task gets the CPU first, and for how long? This is where process scheduling comes in. Acting like a traffic controller for the CPU, it determines which program should run, when it should run, and how long it can use the processor. Whether you’re switching between apps on your phone or running time-critical programs in real-time systems, scheduling ensures everything works smoothly without conflicts.

So how does an operating system manage this? It mainly relies on two core approaches: preemptive scheduling, where a running task can be interrupted if needed, and non-preemptive scheduling, where a task runs until it finishes. These approaches form the foundation of process scheduling, which lies at the heart of multitasking in every operating system.

What is Preemptive Scheduling?

Preemptive scheduling represents a fundamental concept in operating system design. In essence, it gives the CPU the ability to temporarily halt an ongoing activity in order to focus on another that requires immediate attention. Such an interruption might occur almost at any moment, although it usually occurs when a higher-priority process is ready or when a process has consumed its allotted time slice (also known as time quantum).

In a preemptive scheduling system, the operating system uses a scheduler, a dedicated component responsible for deciding which process should use the CPU at any given moment. The scheduler constantly monitors the state of all processes and can force the currently running process to pause, moving it back to the "ready" queue, so that another process (often with higher priority or shorter remaining time) can run.

Key elements involved in preemptive scheduling:

  • Interrupts: The mechanism that allows the CPU to stop a running process. Interrupts can be triggered by hardware (such as a timer expiring) or by the arrival of a higher-priority process.
  • Context Switching: When a process is interrupted, the operating system must save its current state and load the state of the next process to run. This operation is known as context switching.
  • Ready Queue: A list of all processes that are ready to run but waiting for CPU time.
  • Time Slice (Time Quantum): To ensure that every process receives an equal amount of CPU time, certain preemptive algorithms limit the amount of time that each process can run before being preempted.

Preemptive scheduling is widely used in modern operating systems, such as Windows, Linux, and real-time operating systems (RTOS), because it allows the system to remain responsive and ensures that critical or time-sensitive tasks are handled promptly.

How Preemptive Scheduling Works:

  1. From the ready queue, the scheduler chooses a process to execute on the CPU.
  2. If another process with a higher priority becomes ready, or if the current process’s time slice expires, the scheduler interrupts the running process.
  3. The interrupted process is moved back to the ready queue, and the higher-priority (or next-in-line) process starts running.
  4. This cycle continues, allowing the operating system to rapidly switch between tasks and respond to changing demands.
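The four-step cycle above can be sketched as a tiny ready-queue simulation. This is a minimal illustration, not production scheduler code; the process names and the 2 ms quantum are made up:

```python
from collections import deque

def run_with_time_slices(bursts, quantum):
    """Cycle processes through a ready queue, preempting on quantum expiry.
    bursts: name -> total CPU time needed. Returns the order of CPU turns."""
    ready = deque(bursts.items())          # (name, remaining time) pairs
    turns = []
    while ready:
        name, remaining = ready.popleft()  # step 1: pick from the ready queue
        turns.append(name)
        ran = min(quantum, remaining)      # step 2: run until the slice expires
        remaining -= ran
        if remaining > 0:
            ready.append((name, remaining))  # step 3: back to the ready queue
    return turns                           # step 4: repeat until all finish

print(run_with_time_slices({"A": 5, "B": 2, "C": 4}, quantum=2))
```

Note how "A" keeps reappearing in the output: it is repeatedly preempted and resumed rather than running to completion in one go.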

Common Preemptive Scheduling Algorithms:

  • Round Robin (RR): Each process receives a predetermined time slice and takes turns.
  • Shortest Remaining Time First (SRTF): The next process to run is the one with the least amount of time left to finish.
  • Preemptive Priority Scheduling: If required, lower-priority processes are preempted while the highest-priority process is carried out.

Preemptive scheduling keeps a system responsive and well-optimised for multitasking, making it a bedrock technique for any modern computing environment.

Preemptive Scheduling Algorithm

The fundamental concept of preemptive scheduling algorithms is that, depending on specific parameters (such as priority or remaining execution time), the operating system can halt one active task and transfer the CPU to another. Here’s a generalized step-by-step outline of how preemptive scheduling typically works in an operating system:

  1. Initialize the Ready Queue
    • Collect all processes that are ready to execute and place them in the ready queue.
  2. Select a Process to Run
    • The scheduler selects the next process to run. This choice is based on the specific preemptive algorithm in use (e.g., highest priority, shortest remaining time, or round robin order).
  3. Allocate the CPU
    • The selected process is assigned the CPU and starts or resumes execution.
  4. Monitor for Preemption Conditions
    • While the process is running, the scheduler continually checks for events that may require preemption, such as:
      • A new, higher-priority process enters the ready queue.
      • The running process’s allotted time slice (time quantum) expires.
      • An interrupt or I/O event occurs.
  5. Preempt if Necessary
    • If a preemption condition is met:
      • The current process is interrupted.
      • Its state is saved (context switching).
      • It is placed back into the ready queue (unless it is waiting for I/O or is finished).
  6. Repeat the Selection
    • The scheduler selects the next appropriate process from the ready queue.
    • Resume from step 3 until all processes are complete.

Key Points:

  • The exact selection criteria (priority, remaining time, round robin) depend on the specific preemptive scheduling algorithm implemented.
  • Context switching is a fundamental operation in preemptive scheduling, ensuring smooth transitions between processes.
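The point about selection criteria can be made concrete with a minimal tick-by-tick simulator: the `select` function embodies the policy, so swapping it swaps the algorithm without touching the loop. Process data here is illustrative:

```python
def simulate(processes, select):
    """Tick-by-tick preemptive simulation.
    processes: name -> (arrival, burst). Returns completion times.
    `select` is re-invoked every tick, which is exactly where preemption
    happens: a newly arrived process can win the next tick."""
    remaining = {p: burst for p, (arrival, burst) in processes.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if processes[p][0] <= t]
        if not ready:                    # CPU idle until the next arrival
            t += 1
            continue
        p = select(ready, remaining)     # the pluggable policy decides
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    return finish

# One possible policy: shortest remaining time first (SRTF)
shortest_remaining = lambda ready, rem: min(ready, key=lambda p: rem[p])
print(simulate({"P1": (0, 5), "P2": (1, 2)}, shortest_remaining))
```

A priority-based or round-robin policy would slot into the same loop; only `select` changes.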

Types of Preemptive Scheduling

Preemptive scheduling lets the operating system suspend an active process to free up the CPU for a more urgent or ready-to-run task. Here are a few popular types:

1. Preemptive Shortest Job First

Preemptive Shortest Job First (SJF), sometimes called Shortest Remaining Time First (SRTF), ranks processes by their remaining execution time. In this approach, if a new process arrives with a burst time shorter than the remaining time of the currently running process, it will preempt the CPU. This ensures that the processor is always allocated to the task with the least time left for completion, and the algorithm is straightforward to implement in a language like C.
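An event-driven sketch of SRTF follows; the process names, arrival times, and burst times are illustrative, not taken from the article:

```python
import heapq

def srtf(processes):
    """Shortest Remaining Time First (preemptive SJF), event-driven sketch.
    processes: name -> (arrival, burst). Returns completion times."""
    events = sorted((arr, name) for name, (arr, _) in processes.items())
    heap, done, t, i = [], {}, 0, 0
    while heap or i < len(events):
        if not heap:                       # CPU idle: jump to next arrival
            t = max(t, events[i][0])
        while i < len(events) and events[i][0] <= t:
            arr, name = events[i]          # newly arrived process joins heap
            heapq.heappush(heap, (processes[name][1], name))
            i += 1
        rem, name = heapq.heappop(heap)    # least remaining time wins
        # run until either completion or the next arrival (preemption point)
        next_arr = events[i][0] if i < len(events) else float("inf")
        run = min(rem, next_arr - t)
        t += run
        if run == rem:
            done[name] = t
        else:
            heapq.heappush(heap, (rem - run, name))  # preempted, rejoin heap
    return done

print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}))
```

Here P1 is preempted by P2 at 1 ms, P2 by P3 at 2 ms; the shortest jobs finish first while P1, the longest, finishes last.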

2. Round Robin (RR)

This approach divides CPU time among processes by giving each process a fixed duration to execute, called a time quantum. When a process's quantum expires and it is not yet done, the process is placed at the end of the queue to wait for its next turn. This ensures that no process is left unattended, and it is typically used in environments where multiple users or processes share the CPU, such as time-sharing systems.
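A short Round Robin sketch, assuming for simplicity that all processes arrive at time 0; the burst times and the 2 ms quantum are illustrative:

```python
from collections import deque

def round_robin_turnaround(bursts, quantum):
    """Round Robin over processes all arriving at t=0.
    bursts: name -> CPU time needed. Returns each process's completion time."""
    ready = deque(bursts.items())
    t, finish = 0, {}
    while ready:
        name, rem = ready.popleft()
        ran = min(quantum, rem)        # run for one quantum at most
        t += ran
        if rem - ran > 0:
            ready.append((name, rem - ran))  # not done: back of the queue
        else:
            finish[name] = t
    return finish

print(round_robin_turnaround({"P1": 6, "P2": 4, "P3": 2}, quantum=2))
```

The shortest job (P3) finishes first even without priorities, simply because everyone gets a fair turn.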

3. Shortest Remaining Time First (SRTF)

This is the preemptive version of Shortest Job First (SJF) and, in practice, the same policy described in type 1. It always picks the process with the smallest amount of time left to finish: if a new process arrives with less remaining time than the one currently running, the CPU switches to the new one.

4. Preemptive Priority Scheduling

Every process is assigned a priority, and the CPU always executes the process with the highest priority. The current process is interrupted if a new one with a higher priority arrives. Although effective, this can result in "starvation," where low-priority jobs might never complete unless a technique known as "aging" is employed to gradually raise their priority over time.
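The aging technique can be sketched as follows. This is an illustrative model, not any particular OS's implementation: all processes are assumed ready from the start, and the aging rate is arbitrary.

```python
def priority_with_aging(processes, aging_rate=1):
    """Preemptive priority scheduling with aging, a common anti-starvation fix.
    processes: name -> (burst, base_priority); lower number = higher priority.
    Each tick a waiting process earns aging credit that improves its
    effective priority. Returns the sequence of context switches."""
    remaining = {p: burst for p, (burst, _) in processes.items()}
    waited = {p: 0 for p in processes}
    order = []
    while remaining:
        # effective priority = base priority minus aging credit for waiting
        run = min(remaining,
                  key=lambda p: processes[p][1] - aging_rate * waited[p])
        if not order or order[-1] != run:
            order.append(run)          # a different process got the CPU
        for p in remaining:
            if p != run:
                waited[p] += 1         # everyone else keeps aging
        remaining[run] -= 1
        if remaining[run] == 0:
            del remaining[run]
    return order

print(priority_with_aging({"A": (4, 1), "B": (4, 2), "C": (4, 3)}))
```

With `aging_rate=0` this degenerates to plain preemptive priority, where low-priority work only runs once everything above it is done.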

5. FreeRTOS Preemptive Scheduling

FreeRTOS is a real-time operating system commonly found on microcontrollers, sensors, and smart devices. Its preemptive scheduler lets higher-priority tasks interrupt lower-priority ones, ensuring that time-critical tasks finish on schedule, which is essentially the basis of real-time systems.

6. Preemptive Operating System

A preemptive operating system can stop one activity and switch the CPU to another when necessary, particularly when the new task is more urgent. Popular operating systems like Windows and Linux, as well as real-time systems such as FreeRTOS, support this feature. It helps ensure better responsiveness and multitasking.

7. Preemptive Scheduler in Golang (Go)

The Go runtime includes a scheduler that combines cooperative and preemptive techniques. It guarantees equitable CPU sharing among goroutines, Go's lightweight threads.

It actively monitors goroutines for excessive CPU usage and can interrupt them to allow others to run, maintaining balanced execution. Performance bottlenecks may be avoided using this hybrid scheduling technique, particularly in large-scale applications.

Recap So Far:

  • Preemptive scheduling includes multiple algorithms, each suited to different system needs.
  • Round Robin assures fairness by allocating a fixed time slice to each task.
  • Shortest Remaining Time First (SRTF) reduces waiting times by always running the task with the least time left.
  • Preemptive priority scheduling runs higher-priority activities first.
  • The choice of algorithm depends on whether the system needs real-time behavior, fairness, or responsiveness.

Examples of Preemptive Scheduling in Real-World Scenarios

Preemptive scheduling is the crucial component that enables a multitasking system to react quickly. The following situations show where it genuinely makes a difference:

1. Step-by-Step Example: Preemptive Priority Scheduling

Let’s look at a simplified scenario using process IDs, arrival times, burst times, and priorities. This will help you see exactly how the scheduler decides which task runs next.

Process Table:

Process Arrival Time (ms) Burst Time (ms) Priority (Lower = Higher)
P1 0 4 2
P2 1 3 1
P3 … … …
P4 … … …

Timeline:

  1. At 0 ms, only P1 has arrived, so it starts running.
  2. At 1 ms, P2 arrives with a higher priority. The scheduler interrupts P1 and gives the CPU to P2.
  3. P2 completes after 3 ms of burst time (finishes at 4 ms).
  4. The scheduler checks the queue: P1, P3, and P4 are waiting. P1 has the next highest priority, so it resumes.
  5. As new processes arrive, the scheduler always checks priorities and remaining burst times, interrupting lower-priority tasks when needed.

This approach ensures that urgent or high-priority tasks are handled immediately, even if it means pausing a task that’s already running.
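The timeline above can be reproduced with a small simulator. The burst times and priorities below are assumed values chosen to be consistent with the timeline (P2 arrives at 1 ms, runs for 3 ms, then P1 resumes); lower number = higher priority:

```python
def preemptive_priority(processes):
    """Tick-by-tick preemptive priority scheduling.
    processes: name -> (arrival, burst, priority); lower number = higher.
    Returns the Gantt chart as (name, start, end) segments."""
    remaining = {p: b for p, (a, b, pr) in processes.items()}
    t, gantt = 0, []
    while remaining:
        ready = [p for p in remaining if processes[p][0] <= t]
        if not ready:                     # CPU idle until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda q: processes[q][2])  # highest priority wins
        if gantt and gantt[-1][0] == p:
            gantt[-1] = (p, gantt[-1][1], t + 1)  # extend current segment
        else:
            gantt.append((p, t, t + 1))           # context switch
        remaining[p] -= 1
        if remaining[p] == 0:
            del remaining[p]
        t += 1
    return gantt

# Assumed values consistent with the timeline: P1(arrival 0, burst 4, prio 2),
# P2(arrival 1, burst 3, prio 1)
print(preemptive_priority({"P1": (0, 4, 2), "P2": (1, 3, 1)}))
```

The resulting Gantt chart shows P1 interrupted at 1 ms, P2 running to completion at 4 ms, and P1 resuming afterwards, exactly the sequence described in the timeline.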

2. Real-Time System Example: Aircraft Control Systems

In aircraft control systems, timing is critical. The operating system uses preemptive scheduling to guarantee that high-priority tasks, like adjusting control surfaces or responding to sensor data, are executed as soon as they’re needed.

How it works:

  • If a flight control process (e.g., adjusting the elevator for turbulence) needs the CPU, the scheduler can immediately interrupt less critical background tasks (like logging data) to handle the urgent request.
  • This ensures that safety-critical operations are never delayed, which is essential in aviation.

3. Health Monitoring Devices

Modern health devices such as wearable fitness trackers and heart monitors rely on preemptive scheduling to react quickly to critical events.

How it works:

  • If the device notices a strange heart rhythm, the scheduler can instantly pause routine monitoring or data syncing tasks to trigger an alert or record the critical event.
  • This rapid response is only possible because preemptive scheduling allows urgent tasks to take precedence over less important ones.

4. Round Robin in Multi-User Systems

In time-sharing systems (like shared servers or multi-user computers), preemptive scheduling with the Round Robin algorithm ensures fairness.

How it works:

  • Each user’s process gets a fixed time slice. If it doesn’t finish, it’s paused, and the next process runs.
  • This keeps the system responsive for all users, preventing any single task from monopolizing the CPU.

Bottom Line

Preemptive scheduling is essential for systems that demand quick responses and efficient multitasking. It makes sure that essential activities, whether in flight control, health monitoring, or daily computing, are addressed promptly by permitting high-priority or urgent tasks to interrupt current processes. These real-world examples highlight how preemptive scheduling delivers the responsiveness and reliability needed in modern technology, making it a cornerstone of effective operating system design.

Advantages and Disadvantages of Preemptive Scheduling

Preemptive scheduling has several advantages that make it a popular option in many contemporary operating systems, but it also has some significant disadvantages. Here is a balanced summary:

Advantages of Preemptive Scheduling

  • Improved System Responsiveness:
    Preemptive scheduling allows high-priority or time-sensitive tasks to interrupt ongoing processes. This ensures that important tasks are attended to right away, which speeds up reaction times for essential procedures.
  • Efficient CPU Utilization:
    Preemptive scheduling keeps the system active and productive by preventing any one task from controlling the CPU. The CPU can swiftly transition to ready activities, reducing idle time and increasing overall efficiency.
  • Supports Multiprogramming Environments:
    Preemptive scheduling makes it easier to manage multiple processes simultaneously. The operating system can dynamically adjust which tasks are running, providing flexibility and scalability as the workload increases.
  • Better Handling of Priority Processes:
    The scheduler can always prioritize more important tasks. If a critical process arrives, it can preempt less urgent ones, ensuring that essential operations are not delayed.
  • Helps Avoid Deadlocks:
    Because the CPU (and, in some designs, other resources) can be taken away from a running process, preemption removes one of the conditions required for deadlock, making it less likely that processes end up stuck waiting indefinitely for each other.

Disadvantages of Preemptive Scheduling

  • Increased Overhead from Context Switching:
    Frequent interruptions force the system to repeatedly save and restore process states. This context switching consumes CPU time and can lower overall performance, especially on systems with many active processes.
  • Risk of Starvation for Low-Priority Processes:
    If higher-priority processes keep arriving, lower-priority tasks may be postponed indefinitely. This phenomenon, known as starvation, can prevent some jobs from ever completing unless additional safeguards (like "aging" to gradually increase a process’s priority) are implemented.
  • Greater Complexity in System Design:
    More complex algorithms and mindful resource management are needed to implement preemptive scheduling. When many processes access shared data, developers may encounter timing concerns like race conditions.
  • Potential for Timing Bugs and Data Inconsistencies:
    Because processes can be interrupted at almost any point, there’s a higher risk of timing-related bugs. Unpredictable behavior might result from shared resources being inconsistent due to improper synchronization.

By weighing these advantages and disadvantages, system designers can choose whether preemptive scheduling fits their needs, balancing responsiveness and efficiency against complexity and resource demands.

What is Non-Preemptive Scheduling?

Non-preemptive scheduling is a type of CPU scheduling method where once a process starts running, it keeps control of the CPU until it finishes its task or pauses on its own (like waiting for input/output). No other process can interrupt it during this time, even if a higher-priority or shorter task comes in.

This approach is used in systems where it's more important for tasks to run in an orderly, predictable manner rather than reacting quickly to new tasks. It works well in environments like batch processing systems or basic operating systems that don’t require real-time performance.

Example: How Non-Preemptive Scheduling Works

Process Arrival Time Burst Time
P1 0 ms 6 ms
P2 2 ms 4 ms
P3 4 ms 8 ms
P4 5 ms 3 ms
P5 6 ms 5 ms

Timeline of What Happens:

  • At 0 ms, P1 arrives and starts immediately since nothing else is waiting. It runs from 0 to 6 ms without interruption.
  • After P1 finishes, P2 gets the CPU and runs from 6 to 10 ms.
  • Even though P4 arrives at 5 ms with a shorter burst time than P2, it must wait, because a scheduled process cannot be displaced. The remaining processes then run in arrival order: P3 (10–18 ms), P4 (18–21 ms), and P5 (21–26 ms).
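Using the arrival and burst times from the table above, a small first-come-first-serve sketch reproduces this timeline:

```python
def fcfs(processes):
    """Non-preemptive FCFS: run each process to completion in arrival order.
    processes: name -> (arrival, burst). Returns (start, finish) per process."""
    t, schedule = 0, {}
    for name, (arrival, burst) in sorted(processes.items(),
                                         key=lambda kv: kv[1][0]):
        start = max(t, arrival)  # wait for the process to arrive if CPU idle
        t = start + burst        # no interruption once it has started
        schedule[name] = (start, t)
    return schedule

# Arrival and burst times from the table in the article
table = {"P1": (0, 6), "P2": (2, 4), "P3": (4, 8), "P4": (5, 3), "P5": (6, 5)}
print(fcfs(table))
```

P4 arrives at 5 ms with only a 3 ms burst, yet it does not start until 18 ms: everything ahead of it in arrival order must finish first.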

Advantages of Non-Preemptive Scheduling

  • Less Overhead: Since processes aren't being interrupted, there’s no need for frequent context switching (saving and loading process states).
  • Simple to Implement: Because of the simple logic, developers can code and maintain it with ease.
  • Smooth Execution: Each task runs completely without being stopped, leading to fair treatment of processes in the order they’re scheduled.

Disadvantages of Non-Preemptive Scheduling

  • Delays for Short or Urgent Tasks: When a short or high-priority job arrives during the execution of a longer one, it has to wait.
  • Convoy Effect: A single lengthy process can stall the whole system by blocking other tasks.
  • Not Ideal for Real-Time Systems: Since it does not allow quick responses and flexibility, which are necessary in time-critical applications, it is not suitable for real-time systems.

Types of Non-Preemptive Scheduling

With non-preemptive scheduling, a process can begin running and continue uninterrupted until it is finished. The primary kinds are as follows:

1. First Come, First Serve (FCFS)

This is the most basic type of scheduling. Processes are handled in the order they arrive, just like waiting in line. There’s no priority given to any task; the one that comes first gets executed first. It's simple to implement, but it can lead to long waiting times if a big task comes in before a smaller one.

2. SJF Non-Preemptive Scheduling

In this method, whenever the CPU becomes free, the scheduler picks the waiting process with the shortest burst time. Unlike the preemptive version (SRTF), a running process is never interrupted: even if a shorter job arrives mid-execution, it must wait until the current process finishes. For example:

  • Process P1 takes 8 milliseconds and arrives first
  • Process P2 takes 4 milliseconds and arrives while P1 is running
  • Process P3 takes 2 milliseconds and arrives shortly after P2

P2 and P3 must both wait for P1 to finish. When the CPU frees up at 8 ms, the scheduler picks P3, the shortest waiting job, and then P2. Choosing the shortest job keeps the average waiting time low, but it can cause starvation: a long waiting process may never run if shorter jobs keep arriving before the CPU becomes free.
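A non-preemptive SJF sketch using the burst times from the example above (the exact arrival times are assumed for illustration):

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF: when the CPU frees up, pick the waiting process
    with the shortest burst; never interrupt a running process.
    processes: name -> (arrival, burst). Returns completion times."""
    pending = dict(processes)
    t, finish = 0, {}
    while pending:
        ready = [p for p, (a, b) in pending.items() if a <= t]
        if not ready:                    # CPU idle until the next arrival
            t = min(a for a, b in pending.values())
            continue
        p = min(ready, key=lambda q: pending[q][1])  # shortest burst wins
        t += pending[p][1]               # runs to completion, uninterrupted
        finish[p] = t
        del pending[p]
    return finish

# Assumed arrivals: P1 at 0 ms, P2 at 1 ms, P3 at 2 ms
print(sjf_nonpreemptive({"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}))
```

P1 runs to completion even though shorter jobs arrive mid-execution; only at 8 ms does the scheduler choose P3 ahead of P2.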

3. Non-Preemptive Priority Scheduling

Here, each process is assigned a priority. The one with the highest priority is picked first. If two processes have the same priority, the one that came first is chosen. Once a process starts running, it won't stop until it's done, even if a higher-priority process arrives during that time.

Differences Between Preemptive and Non-Preemptive Scheduling

Preemptive and non-preemptive scheduling differ mainly in how the CPU is allocated, how interruptions are handled, and how quickly the system can respond to new tasks.

Feature Preemptive Scheduling Non-Preemptive Scheduling
Interruptions A process can be paused in the middle so another one can run. Once a process starts, it runs until it finishes. No other process can interrupt it.
Flexibility Very flexible as the CPU can switch between tasks as needed. Less flexible as the CPU must wait until a task finishes before moving on.
System Overhead Higher, because the system needs extra work to save and switch between processes. Lower, since there's no switching during process execution.
Response Time Faster, especially for short or urgent tasks, as they can quickly take over the CPU. Slower, because long tasks can delay the start of others.
Risk of Starvation More likely, since longer tasks might be interrupted too often and wait too long. Less likely, as each task gets full CPU time once started.
Best Use Case Works best in systems that need quick responses, like real-time or multitasking systems. Ideal for simpler setups like batch systems, where speed is less critical.
Implementation Complexity More complex to build and manage because of frequent switching and state saving. Easier to implement as tasks run one by one without interruptions.

Bottom Line:

Preemptive scheduling focuses on responsiveness and multitasking, while non-preemptive scheduling prioritizes simplicity and predictable execution.

Conclusion

Preemptive and non-preemptive scheduling are two basic methods used in operating systems to manage how tasks run. Each has advantages, disadvantages, and ideal applications. When real-time performance, multitasking, or fast reaction times are crucial, preemptive scheduling is excellent.

When you want a simpler configuration with lower overhead and a more consistent job flow, non-preemptive scheduling works well. Developers may create systems that function more efficiently, maintain fairness, and provide a better experience by understanding when and how to employ each kind.

Points to Remember

  1. Preemptive scheduling allows an active process to be interrupted before it completes.
  2. The CPU is reallocated according to time quantum, remaining time, or priority.
  3. Context switching is frequent and essential.
  4. It enhances system responsiveness, particularly for critical activities.
  5. It is widely used in modern real-time and multitasking systems.

Frequently Asked Questions

1. What are the benefits of preemptive scheduling?

Preemptive scheduling prevents any one process from using up all of the CPU and speeds up system response times. It works well for systems like those in hospitals or airlines where time is crucial.

2. What are the downsides of preemptive scheduling?

It can cause a lot of switching between tasks, which adds extra work for the system. Also, low-priority tasks might get ignored for too long if higher-priority ones keep taking over.

3. How does preemptive priority scheduling work?

In this method, tasks with higher priority run first. If two tasks have the same priority, the one that arrived first gets to run before the other.

4. What are some common types of preemptive scheduling?

Some well-known types are Round Robin, Shortest Remaining Time First (SRTF), and Preemptive Priority Scheduling.

5. Why is preemptive scheduling useful in real-time systems?

It makes sure important tasks are finished on time, which is a must in systems like factory controls or health monitoring equipment.
