What is a thread?
A thread is an encapsulation of the flow of control in a program. Most people are used to writing single-threaded programs: programs that execute only one path through their code "at a time". Multithreaded programs may have several threads running through different code paths "simultaneously".

Why are some phrases above in quotes? The exact meaning of the term "thread" is not generally agreed upon. One of the more common usages denotes a "lightweight" process (a sequential flow of control) that shares an address space and some other resources with other threads, and for which context-switching time is lower than for "heavyweight" (i.e. kernel-supported) processes. In a typical process containing multiple threads, the system's CPUs may be executing instructions for zero or more of those threads at any one time. The number of threads actually executing depends on the number of CPUs in the system, and also on how the threading subsystem is implemented. A machine with n processing units can, intuitively enough, run no more than n threads in parallel, but it may give the appearance of running many more than n "simultaneously" by sharing the CPUs among threads. Some of the features that distinguish different approaches to threading are listed below:
- Number of concurrent flows of control: threads may make use of multiple processors so that several execute concurrently. That is, the model usually takes into consideration the possibility that more than one flow of control is active at any time.
- Scheduling policy: a thread scheduler may be pre-emptive, in which case a thread is put to sleep either when it waits on some resource or when it has run for the full duration of its time quantum, or non-pre-emptive, in which case each thread continues to run until it relinquishes the processor itself (either by waiting on a resource or by calling the analogue of a sleep() function).