Thread (computer science)

From Academic Kids

(Redirected from Multithreading)
This article is about the computer science term. For other uses of this word, see Thread (disambiguation).

Many programming languages, operating systems, and other software development environments support what are called "threads" of execution. Threads are similar to processes, in that both represent a single sequence of instructions executed in parallel with other sequences, either by time slicing or by multiprocessing. Threads are a way for a program to split itself into two or more simultaneously running tasks. (The name "thread" is by analogy with the way that a number of threads are interwoven to make a piece of fabric.)

A common use of threads is to have one thread pay attention to the graphical user interface while others perform a long calculation in the background. As a result, the application responds more readily to user interaction.

An unrelated use of the term thread is for threaded code, which is a form of code consisting entirely of subroutine calls, written without the subroutine call instruction, and processed by an interpreter or the CPU. Forth and early versions of the B programming language are examples of threaded-code languages.


Threads compared with processes

Threads are distinguished from traditional multi-tasking operating system processes in that processes are typically independent, carry considerable state information, have separate address spaces, and interact only through system-provided inter-process communication mechanisms. Multiple threads, on the other hand, typically share the state information of a single process, and share memory and other resources directly. Context switching between threads in the same process is typically faster than context switching between processes. Systems like Windows NT and OS/2 are said to have "cheap" threads and "expensive" processes, while in other operating systems the difference is less pronounced.

An advantage of a multi-threaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviors. For data to be manipulated correctly, threads will often need to rendezvous so that they process the data in the proper order. Threads may also require atomic operations (often implemented using semaphores) to prevent common data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
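As a hedged illustration of such atomic operations (the class and method names below are invented for this sketch, not taken from the article), the following Java fragment protects a shared counter with the language's built-in monitor locks: count++ is a read-modify-write sequence, and synchronized serializes it so that increments cannot be lost.

```java
public class Counter {
    private long count = 0;

    // Without synchronization, two threads executing count++ can
    // interleave the read and the write, losing increments.
    public synchronized void increment() {
        count++;
    }

    public synchronized long get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();  // rendezvous before reading the result
        System.out.println(c.get()); // prints 200000: no increment was lost
    }
}
```

A semaphore or explicit lock object would serve the same purpose; the monitor lock is simply the most idiomatic choice in Java.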

Use of threads in programming often leads to state inconsistency. A common anti-pattern is to set a global variable, then invoke subprograms that depend on its value; this is known as "accumulate and fire".
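A minimal sketch of this anti-pattern (all names here are invented for illustration): a "parameter" is passed by storing it in a global before the call, so any other thread that modifies the global between the store and the call silently changes the result.

```java
public class AccumulateAndFire {
    // Shared mutable state used as an implicit parameter.
    static int currentSize = 0;

    // Depends on a global that was set "earlier" rather than on an argument.
    static int area() {
        return currentSize * currentSize;
    }

    public static void main(String[] args) {
        currentSize = 3;            // accumulate...
        System.out.println(area()); // ...and fire; prints 9 here, but another
                                    // thread could have changed currentSize
                                    // between the two steps
    }
}
```

The fix is simply to pass the value as an explicit argument (area(int size)), which removes the window in which other threads can interfere.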

Operating systems generally implement threads in one of two ways: preemptive multithreading or cooperative multithreading. Preemptive multithreading is generally considered the superior implementation, as it allows the operating system to determine when a context switch should occur. Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they reach a stopping point. This can create problems if a thread is waiting for a resource to become available. The disadvantage of preemptive multithreading is that the system may make a context switch at an inappropriate time, causing priority inversion or other undesirable effects, which may be avoided by cooperative multithreading.

Traditionally, mainstream computing hardware had little support for multithreading, because switching between threads was generally already quicker than a full process context switch. Processors in embedded systems, which have stricter real-time requirements, may support multithreading by decreasing the thread-switch time, perhaps by allocating a dedicated register file to each thread instead of saving and restoring a common register file. In the late 1990s, the idea of executing instructions from multiple threads simultaneously became known as simultaneous multithreading. This feature was introduced in Intel's Pentium 4 processor under the name Hyper-Threading.

Processes, threads, and fibers

The concepts of process, thread, and fiber are interrelated by a sense of "ownership" and of containment.

A process is the "heaviest" unit of kernel scheduling. Processes own resources allocated by the operating system. Resources include memory, file handles, sockets, device handles, and windows. Processes do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way. Processes are typically pre-emptively multitasked. However, Windows 3.1 and older versions of Mac OS used co-operative or non-preemptive multitasking.

A thread is the "lightest" unit of kernel scheduling. At least one thread exists within each process. If multiple threads can exist within a process, then they share the same memory and file resources. Threads are pre-emptively multitasked if the operating system's process scheduler is pre-emptive. Threads do not own resources except for a stack and a copy of the registers including the program counter.

In some situations, there is a distinction between "kernel threads" and "user threads" -- the former are managed and scheduled by the kernel, whereas the latter are managed and scheduled in userspace. In this article, the term "thread" is used to refer to kernel threads, whereas "fiber" is used to refer to user threads. Fibers are co-operatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run. A fiber can be scheduled to run in any thread in the same process.

Thread and fiber issues

Typically fibers are implemented entirely in userspace. As a result, context switching between fibers in a process is extremely efficient: because the kernel is oblivious to the existence of fibers, a context switch does not require any interaction with the kernel at all. Instead, a context switch can be performed by locally saving the CPU registers used by the currently executing fiber and loading the registers required by the fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.

However, the use of blocking system calls in fibers can be problematic. If a fiber performs a system call that blocks, the other fibers in the process are unable to run until the system call returns. A typical example of this problem arises when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made and does not return until the operation has completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run -- which starves other fibers in the same process from executing.

A common solution to this problem is providing an I/O API that implements a synchronous interface by using non-blocking I/O internally, and scheduling another fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls.

Win32 supplies a fiber API. SunOS 4.x implemented "light-weight processes" or LWPs as fibers known as "green threads". SunOS 5.x and later, NetBSD 2.x, and DragonFly BSD implement LWPs as threads as well.

The use of kernel threads brings simplicity: the program does not need to manage scheduling itself, as the kernel handles all aspects of thread management. Blocking is not a problem, because if a thread blocks, the kernel can reschedule another thread from the same process or from a different one, and no extra system calls are needed to make this happen.

However, managing threads through the kernel has obvious costs: thread creation and removal each require a context switch between kernel mode and user mode, so programs that rely on using many threads for short periods may suffer performance hits.

Hybrid schemes are available which provide a tradeoff between the two.

Relationships between processes, threads, and fibers

The operating system creates a process for the purpose of running a program. Every process has at least one thread. On some operating systems, processes can have more than one thread. A thread can use fibers to implement cooperative multitasking to divide the thread's CPU time for multiple tasks. Generally, this is not done because threads are cheap, easy, and well implemented in modern operating systems.

Processes are used to run an instance of a program. Some programs, like word processors, are designed to have only one instance of themselves running at the same time. Sometimes such programs simply open more windows to accommodate multiple simultaneous uses. After all, you can go back and forth between five documents, but you can only edit one of them at any given instant.

Other programs like command shells maintain a state that you want to keep separate. Each time you open a command shell in Windows, the operating system creates a process for that shell window. The shell windows do not affect each other. Some operating systems support multiple users being logged in simultaneously. It is typical for dozens or even hundreds of people to be logged into some Unix systems. Other than the sluggishness of the computer, the individual users are (usually) blissfully unaware of each other. If Bob runs a program, the operating system creates a process for it. If Alice then runs the same program, the operating system creates another process to run Alice's instance of that program. So if Bob's instance of the program crashes, Alice's instance does not. In this way, processes protect users from failures being experienced by other users.

However, there are times when a single process needs to do multiple things concurrently. The quintessential example is a program with a graphical user interface (GUI). The program must repaint its GUI and respond to user interaction even if it is currently spell-checking a document or playing a song. For situations like these, threads are used.

Threads allow a program to do multiple things concurrently. Since the threads a program spawns share the same address space, one thread can modify data that is used by another thread. This is both a good and a bad thing: good because it facilitates easy communication between threads, and bad because a poorly written program may cause one thread to inadvertently overwrite data being used by another. The sharing of a single address space among multiple threads is one of the reasons that multithreaded programming is usually considered more difficult and error-prone than programming a single-threaded application.
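A minimal sketch of such inadvertent interference (class and field names are invented for this sketch): two threads increment the same field with no locking, and because each increment is a separate read, add, and write, the threads can overwrite each other's updates.

```java
public class LostUpdate {
    // A single field in the shared address space, visible to both threads.
    static long shared = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                // shared++ is not atomic: both threads may read the same old
                // value, each add one, and write back, losing one update.
                for (int i = 0; i < 1000000; i++) shared++;
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        // Usually prints less than 2000000 on a multi-core machine,
        // because some updates were silently lost.
        System.out.println(shared);
    }
}
```

Guarding shared with a lock (or using an atomic integer type) restores the expected total, at the cost of some serialization.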

There are other potential problems as well such as deadlocks, livelocks, and race conditions. However, all of these problems are concurrency issues and as such affect multi-process and multi-fiber models as well.

Threads are also used by web servers. When a user visits a web site, a web server will use a thread to serve the page to that user. If another user visits the site while the previous user is still being served, the web server can serve the second visitor by using a different thread. Thus, the second user does not have to wait for the first visitor to be served. This is very important because not all users have the same speed Internet connection. A slow user should not delay all other visitors from downloading a web page. For better performance, threads used by web servers and other Internet services are typically pooled and reused to eliminate even the small overhead associated with creating a thread.
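The pooling described above can be sketched with Java's standard java.util.concurrent executors (the "request" handling here is an invented placeholder for reading a socket): a fixed pool of worker threads is created once, and each incoming request is handed to whichever pooled thread is free, avoiding per-request thread creation.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) {
        // Four reusable worker threads serve all requests.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            final int request = i;
            pool.execute(new Runnable() {
                public void run() {
                    // Placeholder for real work such as reading a socket
                    // and writing a response; a slow request only occupies
                    // one worker, so the others keep serving.
                    System.out.println("served request " + request
                        + " on " + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown(); // stop accepting work; workers finish what is queued
    }
}
```

The order of the output lines is nondeterministic, since whichever worker is free takes the next request.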

Fibers were popular before threads were implemented by the kernels of operating systems. Historically, fibers can be thought of as a trial run at implementing the functionality of threads. There is little point in using fibers today because threads can do everything that fibers can do and threads are implemented well in modern operating systems.

Many software developers believe that in most cases attempts to use fibers actually decrease the performance of an application, for several reasons. First, the use of fibers does not increase the percentage of CPU time that the operating system gives to a process. Second, in typical multithreaded or multifibered code there is contention for shared resources such as variables. Programming constructs like semaphores and critical sections are used to solve the problems that multithreading and multifibering create, and these tools are often implemented more efficiently in the operating system than in user libraries, particularly if those libraries are not written in assembly language. Third, as users install newer versions of an operating system, improvements may be made to the kernel scheduler, operating-system-provided semaphores, and so on. User libraries do not see these benefits unless they call libraries provided by the operating system.

A case where fibers can still be helpful is when the operating system's limit on the number of threads per process is reached (for example 2000-3000 threads). In this case context switching, which includes a system call, is too expensive, and fibers can help. This situation may arise, for example, in a server that spawns a new thread for each incoming request.


Implementations

There are many different and incompatible implementations of threading. These can be either kernel-level or user-level implementations.

Note that fibers can be implemented without operating system support, although some operating systems or libraries provide explicit support for them. For example, recent versions of Microsoft Windows (Windows NT 3.51 SP3 and later) support a fiber API for applications that want to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Microsoft SQL Server 2000's user mode scheduler, running in fiber mode, is an example of doing this.

Kernel-level implementation examples

  • LWKT in various BSDs
  • M:N threading (in BSDs)
  • NPTL (Native POSIX Thread Library) for Linux, from Red Hat.

User-level implementation examples

Comparison between models

An operating system can provide support for processes, threads, and fibers in any combination; which of these it provides has a significant impact on its overall behavior and characteristics.

In the table below, P stands for processes, T stands for threads, and F stands for fibers; each row shows which combination of the three an operating system supports.

  • (none): A program running on DOS. The program can only do one thing at a time.
  • F only: Windows 3.1 running on top of DOS. Every program is run in a single process, so programs can corrupt each other's memory space. A poorly written program could corrupt data it did not own, causing the infamous General Protection Fault.
  • T only: Amiga OS's original implementation. The operating system had full thread support, allowing multiple applications, scheduled by the kernel, to run independently of each other. The lack of process support made the system more efficient (by avoiding the overhead of memory protection), at the price that application bugs could easily crash the entire computer.
  • T and F: This case is used only in embedded systems and small real-time operating systems. It is theoretically possible in a general-purpose operating system, but there are no known examples.
  • P only: Most early implementations of Unix. The operating system could run more than one program at a time, and programs were protected from each other. If a program behaved badly, it could crash its process, ending that instance of the program without disrupting the operating system or other programs. However, because of this protection, sharing information between programs was error-prone (using techniques like shared memory) and expensive (using techniques like message passing), and performing any task asynchronously required an expensive fork() system call.
  • P and F: Sun OS before Solaris. Sun OS is Sun Microsystems' version of Unix. Sun OS implemented "green threads" in order to allow a single process to perform multiple tasks asynchronously, such as playing a sound, repainting a window, and responding to user events such as clicking the stop button. Although processes were pre-emptively scheduled, the "green threads", or fibers, were co-operatively multitasked. This model was often used before real threads were implemented, and it is still used in microcontrollers and embedded devices.
  • P and T: This is the most common case for applications running on Windows NT, Windows 2000, Windows XP, Mac OS X, Linux, and other modern operating systems. Although each of these operating systems allows the programmer to implement fibers or use a fiber library, most programmers do not use fibers in their applications. The programs are multithreaded and run inside a multitasking operating system, but perform no user-level context switching. On the typical home computer, most running processes have two or more threads; a few processes, usually services running without user interaction, have a single thread; and typically there are no processes using fibers.
  • P, T, and F: Pretty much all operating systems after 1995 fall into this category. The use of threads to perform concurrent operations is the most common choice, although there are also multi-process and multi-fiber applications. Threads are used, for example, to enable a program to render its graphical user interface while waiting for input from the user or performing a task like spell checking.

Example of multithreaded code

This is an example of a simple multi-threaded program written in Java. The program calculates prime numbers until the user types the word "stop". Then the program prints how many prime numbers it found and exits. This example demonstrates how threads can access the same variable while working asynchronously. This example also demonstrates a simple "race condition". The thread printing prime numbers continues to do so for a short time after the user types "stop". Of course, this problem is easily corrected using standard programming techniques.


import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Example implements Runnable {
    static Thread threadCalculate;
    static Thread threadListen;
    long totalPrimesFound = 0;

    public static void main(String[] args) {
        Example e = new Example();
        threadCalculate = new Thread(e);
        threadListen = new Thread(e);
        threadCalculate.start();
        threadListen.start();
    }

    // Both threads run this method; each dispatches to its own task.
    public void run() {
        Thread currentThread = Thread.currentThread();
        if (currentThread == threadCalculate) {
            calculatePrimes();
        } else if (currentThread == threadListen) {
            listenForStop();
        }
    }

    public void calculatePrimes() {
        int n = 1;
        while (true) {
            n++;
            boolean isPrime = true;
            for (int i = 2; i < n; i++) {
                if ((n / i) * i == n) { // (n / i) is integer division, so this tests divisibility
                    isPrime = false;
                    break;
                }
            }
            if (isPrime) {
                totalPrimesFound++;
                System.out.println(n);
            }
        }
    }

    public void listenForStop() {
        BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
        String line = "";
        while (!line.equals("stop")) {
            try {
                line = input.readLine();
                if (line == null) break; // treat end of input as "stop"
            } catch (IOException exception) {}
        }
        System.out.println("Found " + totalPrimesFound +
            " prime numbers before you said stop");
        System.exit(0);
    }
}

The spin lock article includes a C program using two threads that communicate through a global integer.


