A glossary of terms used in threaded programming
The terms here refer to each other in a myriad of ways, so the best way to navigate this section is to read it, and then read it again. Don’t be afraid to skip forwards or backwards as the need arises.
- Binding
- Many operating systems allow threads to be “bound” to particular CPUs, or sets of CPUs; this guarantees that those threads will execute only on the specified CPUs.
- Context switch
- A context switch is the action of switching a CPU from executing one thread to executing another. This may involve crossing one or more protection boundaries.
- Critical section
- A critical section is a section of code during which data that may be seen by other threads is in an inconsistent state. At a higher level, a critical section can be viewed as a section of code in which a guarantee you make to other threads about the state of some data may not hold.
- Async safety (POSIX)
- Some library routines can be safely called from within signal handlers; these are referred to as async-safe. A thread that is executing some async-safe code will not deadlock if it is interrupted by a signal. If you want to make some of your own code async-safe, you should block signals before you obtain any locks.
- Asynchronous, blocking and non-blocking system calls
- Most system calls, whether on Unix or other platforms, block (or “suspend”) the calling thread until they complete; execution then continues immediately following the call. A non-blocking call typically behaves like its blocking counterpart, but returns immediately if it would have to block to complete its work. Some systems also provide asynchronous forms of some calls; the kernel notifies the caller through some kind of out-of-band mechanism when such a system call has completed.
- Deadlock
- A deadlock occurs when multiple threads each hold some locks while trying to acquire others, and none can make progress because of the circular dependency. The simplest case is that thread _A_ holds lock _U_ and tries to acquire lock _V_, while at the same time thread _B_ holds lock _V_ and tries to acquire lock _U_.
- Hazard
- A hazard is a threat to the correctness of a threaded program. For example, a deadlock hazard is the potential for a program to freeze up due to a deadlock.
- Lightweight process
- A lightweight process (also known in some implementations, confusingly, as a kernel thread) is a schedulable entity that the kernel is aware of. On most systems, it consists of some execution context and some accounting information (i.e. much less than a full-blown process).
- MT (multithread) safety
- If some piece of code is described as MT-safe, this indicates that it can be used safely within a multithreaded program, and (only in the context of POSIX thread jargon) that it supports a “reasonable” level of concurrency. This isn’t very interesting; what you, as a programmer using threads, need to worry about is code that is not MT-safe. MT-unsafe code may use global and/or static data. If you need to call MT-unsafe code from within a multithreaded program, you may need to go to some effort to ensure that only one thread calls that code at any time.
- Protection boundary
- A protection boundary protects one software subsystem on a computer from another, in such a way that only data explicitly shared across the boundary is accessible to the entities on both sides. In general, all code within a protection boundary has access to all data within that boundary.
The canonical example of a protection boundary on most modern systems is that between processes and the kernel. The kernel is protected from processes, so that they can only examine or change its internal state in certain strictly-defined ways.
Protection boundaries also exist between individual processes on most modern systems. This prevents one buggy or malicious process from wreaking havoc on others.
- Scheduling
- Scheduling is the act of deciding which thread should execute next on a particular CPU. It is usually also taken to include the context switch to that thread.