Who schedules threads?

I have a question about thread scheduling. On the one hand, I learned that in Linux, threads are scheduled and treated like processes, meaning each thread gets scheduled like any other process by the conventional methods (for example, the Completely Fair Scheduler).
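
For context: on Linux, each thread is a kernel task with its own TID, and those per-thread tasks are what CFS actually schedules. A minimal sketch (Linux-specific; it uses the raw gettid syscall, since the glibc wrapper is relatively recent):

```c
/* Each pthread is a separate kernel task with its own TID; that per-thread
 * task is what the Linux scheduler picks. Compile with: gcc -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *show(void *arg) {
    (void)arg;
    printf("pid=%d tid=%ld (worker)\n", getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void) {
    pthread_t t;
    printf("pid=%d tid=%ld (main)\n", getpid(), (long)syscall(SYS_gettid));
    pthread_create(&t, NULL, show, NULL);
    pthread_join(t, NULL);
    return 0;
}
```

Both lines print the same PID but different TIDs, and each TID shows up as its own entry under /proc/<pid>/task/.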

On the other hand, I also know that the CPU itself might switch between threads using methods like switch-on-event or fine-grained multithreading. For example, on a cache miss the CPU can switch to another thread. But what if the scheduler doesn't want to switch threads? How do the two agree on one action?

I'm really confused between the two: who schedules a thread, the OS or the CPU?

Thanks a lot 🙂

Answer

The answer is both.

What happens is fairly simple: on a CPU that supports multiple threads per core (e.g., an Intel CPU with Hyper-Threading), the CPU appears to the OS as having some number of virtual cores. For example, a quad-core Intel i7 has 4 physical cores, but looks to the OS like 8 cores.
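
You can see this from software. A minimal sketch (assuming Linux/POSIX; sysconf reports the logical, i.e. virtual, CPU count):

```c
/* Print the number of logical (virtual) CPUs the OS sees. On a 4-core
 * part with Hyper-Threading this typically prints 8. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical CPUs visible to the OS: %ld\n", logical);
    return 0;
}
```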

The OS schedules 8 threads onto those 8 (virtual) cores. When it's time to do a task switch, the OS's scheduler looks through the runnable threads and finds the 8 that are the most eligible to run (taking into account things like thread priority, how long since each last ran, etc.).
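
To make "eligibility" concrete: under CFS, a thread's nice value is one of those inputs. A sketch (Linux-specific assumption: nice values apply per thread when set via the thread's TID; compile with gcc -pthread):

```c
/* One niced background thread competing with the main thread. Under CFS,
 * raising the nice value makes the thread less eligible, so it gets less
 * CPU time when it has to share a core. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *background(void *arg) {
    (void)arg;
    /* Raise this thread's nice value: CFS now picks it less often. */
    setpriority(PRIO_PROCESS, (id_t)syscall(SYS_gettid), 10);
    for (volatile unsigned long i = 0; i < 1000000000UL; i++) ; /* busy work */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, background, NULL);
    for (volatile unsigned long i = 0; i < 1000000000UL; i++) ; /* competing work */
    pthread_join(t, NULL);
    return 0;
}
```

Run it pinned to a single CPU (e.g. taskset -c 0 ./a.out) and watch it under top -H: the niced thread should receive noticeably less CPU time than the main thread.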

The CPU has only 4 real cores, but those cores support executing multiple instructions simultaneously (and out of order, in the absence of dependencies). Incoming instructions get decoded and thrown into a “pool”. Each clock cycle, the CPU tries to find some instructions in that pool that don’t depend on the results of some previous instruction.
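
A rough way to feel this effect from software: a loop whose additions form a single dependent chain cannot be overlapped, while independent accumulators can issue in the same cycle. A toy sketch (timings are machine- and compiler-dependent; build at a low optimization level such as -O1 so the loops aren't vectorized or folded away):

```c
/* Dependent chain vs. independent chains: same rough amount of work, but
 * the independent version lets the out-of-order core overlap the adds. */
#include <stdio.h>
#include <time.h>

#define N 200000000UL

static double secs(struct timespec t0, struct timespec t1) {
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    struct timespec t0, t1;
    volatile unsigned long sink;

    /* One dependent chain: each add must wait for the previous result. */
    unsigned long a = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < N; i++) a += i;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = a;
    printf("dependent chain:    %.3fs\n", secs(t0, t1));

    /* Four independent chains: several adds can execute per cycle. */
    unsigned long b0 = 0, b1 = 0, b2 = 0, b3 = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < N; i += 4) {
        b0 += i; b1 += i + 1; b2 += i + 2; b3 += i + 3;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = b0 + b1 + b2 + b3;
    (void)sink;
    printf("independent chains: %.3fs\n", secs(t0, t1));
    return 0;
}
```

On a typical out-of-order core the second loop finishes in a fraction of the first loop's time, despite doing roughly the same amount of work.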

With multiple threads per core, what happens is basically that each physical core has two input streams feeding the "pool" of instructions it might be able to execute in a given clock cycle. Each clock cycle it still looks for instructions in that pool that don't depend on the results of previous instructions that haven't finished executing yet. If it finds some, it puts them into the execution units and executes them. The only major change is that each instruction now needs a tag attached to designate which "virtual core" its results belong to; that is, each of the two threads has (for example) its own set of registers, and instructions from each thread have to write to the registers for that virtual core.
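
One observable consequence: two busy threads pinned to the two SMT siblings of one physical core compete for its execution units, while the same two threads pinned to separate physical cores do not. A sketch (Linux-specific; the CPU numbering below is an assumption, so check /sys/devices/system/cpu/cpuN/topology/thread_siblings_list for your machine's actual sibling pairs; compile with gcc -pthread):

```c
/* Pin two busy threads to two chosen logical CPUs. If those logical CPUs
 * are SMT siblings of one physical core, the threads share its execution
 * units and each typically runs slower than on separate cores. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *spin(void *arg) {
    int cpu = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set); /* pin thread */
    for (volatile unsigned long i = 0; i < 2000000000UL; i++) ; /* saturate */
    return NULL;
}

int main(void) {
    /* Assumed topology: on a 4-core/8-thread part, CPUs 0 and 4 are often
     * siblings of one core, while CPUs 0 and 1 sit on different cores. */
    int cpus[2] = {0, 4}; /* try {0, 1} and compare wall-clock time */
    pthread_t t[2];
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, spin, &cpus[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return 0;
}
```

Compare the wall-clock time of {0, 4} against {0, 1}: on a typical 4-core/8-thread part, the sibling pairing runs measurably slower because both threads are drawing from one core's "pool" and execution units.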

It is possible, however, for a CPU to support some degree of thread priority, so that (for example) if the pool of available instructions includes instructions from both input threads (or all N input threads, if there are more than two), it will prefer instructions from one thread over instructions from another in any given clock cycle. This preference can be absolute, so it runs thread A as fast as possible and gives thread B only the cycles A can't use, or it can be "milder", such as attempting to maintain a 2:1 ratio of instructions executed (or, of course, essentially any other preferred ratio).
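
To make those two policies concrete, here is a toy per-cycle selection model (purely illustrative; real hardware arbitration is far more involved and is not programmable like this):

```c
/* Toy model of per-cycle thread selection inside one SMT core. Each cycle
 * at most one instruction issues; the policy prefers thread A, aiming for
 * a 2:1 issue ratio, and B gets slots A cannot use. Not real hardware. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical readiness check: a thread has a ready (dependency-free)
 * instruction on about 70% of cycles. */
static bool ready(void) { return rand() % 10 < 7; }

int main(void) {
    int a = 0, b = 0;
    for (int cycle = 0; cycle < 100000; cycle++) {
        bool a_ready = ready(), b_ready = ready();
        bool prefer_a = (a <= 2 * b); /* keep A ahead at roughly 2:1 */
        if (a_ready && (prefer_a || !b_ready)) a++;
        else if (b_ready) b++;
        /* else: nothing ready this cycle, the issue slot is wasted */
    }
    printf("issued A=%d B=%d (ratio %.2f)\n", a, b, (double)a / b);
    return 0;
}
```

For the "absolute" policy, replace the preference test with prefer_a = true: thread B then issues only on cycles where A has nothing ready.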

Of course, there are other ways of setting up priorities as well (such as partitioning execution resources), but the general idea remains the same.

An OS that's aware of shared cores like this can also adapt its scheduling to suit, such as scheduling only one thread onto a pair of virtual cores that share a physical core, if that thread has high enough priority.
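
How does the OS (or a scheduling-aware tool) learn which virtual cores share a physical core? On Linux the topology is exported through sysfs. A sketch (the path is standard on Linux; the list format varies by machine, e.g. "0,4" or "0-1"):

```c
/* Print which logical CPUs are SMT siblings of each other by reading
 * Linux's sysfs topology files. Assumes at most 8 logical CPUs. */
#include <stdio.h>

int main(void) {
    char path[128], buf[64];
    for (int cpu = 0; cpu < 8; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f) break; /* no such CPU: stop */
        if (fgets(buf, sizeof(buf), f))
            printf("cpu%d siblings: %s", cpu, buf); /* buf keeps its newline */
        fclose(f);
    }
    return 0;
}
```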
