Optimal in that it gives the minimum average waiting time, but cannot be implemented directly at the CPU level because the length of the next CPU burst is not known in advance
The next burst length is instead predicted as an exponential average of the measured lengths of previous bursts: τ(n+1) = α·t(n) + (1 - α)·τ(n), where t(n) is the actual length of the nth burst, τ(n) its predicted length, and 0 ≤ α ≤ 1
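A minimal sketch of such a predictor, assuming a weighting factor α = 0.5; the function name predict_next_burst, the initial guess, and the sample burst lengths are illustrative, not taken from any particular kernel:

```python
# Minimal sketch of exponential-average CPU burst prediction.
# predict_next_burst, alpha = 0.5, and the sample data are illustrative choices.

def predict_next_burst(prev_prediction: float, observed_burst: float, alpha: float = 0.5) -> float:
    """tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n"""
    return alpha * observed_burst + (1 - alpha) * prev_prediction

# Example: start with an initial guess of 10 ms and feed in observed bursts.
prediction = 10.0
for burst in [6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0]:
    print(f"predicted {prediction:.2f} ms, observed {burst:.2f} ms")
    prediction = predict_next_burst(prediction, burst)
```

With α = 0.5 the predictor weights the most recent burst and the accumulated history equally; α closer to 1 reacts faster to changes, α closer to 0 smooths more.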
Round robin - each process gets the CPU for a fixed time unit (the time quantum); when the quantum expires, a timer interrupt returns control to the operating system, which preempts the process and moves it to the back of the ready queue
most modern systems use a time quantum on the order of 10-100 ms
the context-switch overhead should be small relative to the quantum (typically on the order of microseconds), so relatively little CPU time is wasted on switching
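A minimal round-robin sketch under simplifying assumptions (all processes arrive at time 0, context-switch overhead ignored); the process names, burst lengths, and the 4 ms quantum are illustrative:

```python
# Minimal sketch of round-robin scheduling with a fixed time quantum.
# Assumes all processes arrive at t=0 and ignores context-switch overhead.
from collections import deque

def round_robin(bursts: dict[str, int], quantum: int) -> dict[str, int]:
    """Return per-process waiting time under round robin."""
    remaining = dict(bursts)
    ready = deque(bursts)                     # ready queue, FIFO order
    clock = 0
    finish = {}
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])   # run one quantum or until done
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            ready.append(name)                # preempted: back of the ready queue
    # waiting time = turnaround time - burst time (everyone arrives at t=0 here)
    return {name: finish[name] - bursts[name] for name in bursts}

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# {'P1': 6, 'P2': 4, 'P3': 7}
```

A very small quantum degenerates into excessive switching overhead; a very large quantum degenerates into FCFS.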
Priority Scheduling
Each process is assigned a priority and the process with the highest priority is selected to run
Has problems with indefinite blocking/starvation: low-priority processes may wait indefinitely if higher-priority processes keep arriving
Aging: gradually increasing process priority with time
Can be implemented with a multilevel queue: one ready queue per priority level, with processes always pulled from the highest-priority non-empty queue first (also used to separate classes of processes, e.g. foreground vs. background queues); see the sketch below
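A minimal sketch of a multilevel priority queue with aging; the number of levels, the aging threshold, and the Process/enqueue/pick_next/age names are all illustrative:

```python
# Minimal sketch of priority scheduling via a multilevel queue, with aging.
# The number of levels, the aging threshold, and the Process class are illustrative.
from collections import deque
from dataclasses import dataclass

LEVELS = 4  # level 0 = highest priority

@dataclass
class Process:
    name: str
    priority: int      # current queue level (0..LEVELS-1)
    waited: int = 0    # scheduling rounds spent waiting since the last boost

queues = [deque() for _ in range(LEVELS)]

def enqueue(p: Process) -> None:
    queues[p.priority].append(p)

def pick_next() -> Process | None:
    """Always dispatch from the highest-priority non-empty queue."""
    for q in queues:
        if q:
            return q.popleft()
    return None

def age(threshold: int = 5) -> None:
    """Aging: promote any process that has waited `threshold` rounds one level up."""
    for level in range(1, LEVELS):            # level 0 cannot be promoted further
        for _ in range(len(queues[level])):
            p = queues[level].popleft()
            p.waited += 1
            if p.waited >= threshold:
                p.priority, p.waited = level - 1, 0
            queues[p.priority].append(p)

enqueue(Process("background_job", priority=3))
enqueue(Process("editor", priority=1))
print(pick_next().name)  # "editor": the higher-priority queue is served first
```

Calling age() periodically is what prevents starvation: a process stuck in a low-priority queue is eventually promoted high enough to be dispatched.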
Real-Time Scheduling
Tasks must be serviced by their deadlines
Event Latency:
The time from when an event occurs until it is serviced: the system must finish executing its current instruction, determine the interrupt type, and context switch to the interrupt service routine
Real-time systems must bound this latency so that tasks can meet their deadlines
Dispatch Latency:
The time for the dispatcher to stop one process and start another
Since RTOSs (real-time operating systems) must respond immediately to events requiring the CPU, their scheduling algorithms are priority-based and preemptive
Admission control: an RTOS scheduler either admits a process, guaranteeing that it can be serviced by its deadline, or rejects the process if it cannot guarantee that the deadline will be met
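As one concrete (illustrative) admission test: under earliest-deadline-first scheduling, a set of periodic tasks is schedulable as long as the total CPU utilization sum of C_i / T_i stays at or below 1, so a scheduler can admit a new task only if that bound still holds. A minimal sketch, with the Task type and admit() helper as illustrative names rather than any real RTOS API:

```python
# Minimal sketch of utilization-based admission control for periodic tasks.
# Assumes EDF scheduling, where the task set is feasible iff total utilization <= 1.
# The Task type and admit() helper are illustrative names.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_time: float   # worst-case execution time C_i (ms)
    period: float         # period = relative deadline T_i (ms)

def admit(admitted: list[Task], candidate: Task) -> bool:
    """Admit the candidate only if the whole task set stays schedulable (U <= 1)."""
    utilization = sum(t.compute_time / t.period for t in admitted + [candidate])
    return utilization <= 1.0

admitted: list[Task] = [Task("sensor", 10, 50), Task("control", 20, 100)]
print(admit(admitted, Task("logger", 35, 80)))   # 0.2 + 0.2 + 0.4375 = 0.8375 -> True
print(admit(admitted, Task("video", 80, 100)))   # 0.2 + 0.2 + 0.8    = 1.2    -> False
```

Rejecting "video" up front is the admission-control guarantee: accepting it would make it impossible for every task to meet its deadline.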