A work-stealing based thread pool for executing futures.

Note: This crate is deprecated as of tokio 0.2.x; its functionality has been moved and refactored into various parts of the tokio::runtime module of the tokio crate. There is no longer a ThreadPool type; instead, you are encouraged to use the thread pool of a Runtime configured with the threaded scheduler.

The Tokio thread pool supports scheduling futures and processing them on multiple CPU cores. It is optimized for the primary Tokio use case of many independent tasks with limited computation and with most tasks waiting on I/O. Usually, users will not create a ThreadPool instance directly, but will use one via a runtime.
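For the direct-use case, a minimal sketch of creating a pool and spawning a future onto it, assuming the futures 0.1 API that this crate targets:

```rust
extern crate futures;
extern crate tokio_threadpool;

use futures::future::lazy;
use futures::Future;
use tokio_threadpool::ThreadPool;

fn main() {
    // Create a pool with the default configuration.
    let pool = ThreadPool::new();

    // Spawn a future; it will be executed on one of the worker threads.
    pool.spawn(lazy(|| {
        println!("running on the thread pool");
        Ok(())
    }));

    // Shut down gracefully, waiting for spawned work to complete.
    pool.shutdown().wait().unwrap();
}
```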

The ThreadPool structure manages two sets of threads:

  • Worker threads.
  • Backup threads.

Worker threads are used to schedule futures using a work-stealing strategy. Backup threads, on the other hand, are intended only to support the blocking API. Threads will transition between the two sets.

The advantage of the work-stealing strategy is minimal cross-thread coordination. The thread pool attempts to make as much progress as possible without communicating across threads.

Worker overview

Each worker has two queues: a deque and an mpsc channel. The deque is the primary queue for tasks that are scheduled to run on the worker thread. Tasks can only be pushed onto the deque by the worker, but other workers may “steal” from that deque. The mpsc channel is used to submit futures from outside the pool.

As long as the thread pool has not been shut down, a worker runs in a loop. On each iteration, it drains all tasks from its mpsc channel and pushes them onto the deque. It then pops tasks off the deque and executes them.

If a worker has no work, i.e., both queues are empty, it attempts to steal. To do this, it randomly scans other workers’ deques and tries to pop a task. If it finds no work to steal, the thread goes to sleep.

When the worker detects that the pool has been shut down, it exits the loop, cleans up its state, and shuts the thread down.
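As an illustration only, a self-contained toy model of this loop built from std types; the names and structure here are hypothetical and do not mirror the crate’s internals:

```rust
use std::collections::VecDeque;
use std::sync::mpsc::{channel, Receiver};

type Task = Box<dyn FnOnce() + Send>;

struct ToyWorker {
    inbound: Receiver<Task>, // externally submitted tasks (the "mpsc channel")
    deque: VecDeque<Task>,   // local run queue (the "deque")
}

impl ToyWorker {
    // One iteration of the loop: drain the channel, run local work,
    // otherwise try to steal. Returns false when no work was found
    // (a real worker would go to sleep at that point).
    fn tick(&mut self, steal: &mut dyn FnMut() -> Option<Task>) -> bool {
        // 1. Move externally submitted tasks onto the local deque.
        while let Ok(task) = self.inbound.try_recv() {
            self.deque.push_back(task);
        }
        // 2. Run local work first.
        if let Some(task) = self.deque.pop_front() {
            task();
            return true;
        }
        // 3. No local work: try to steal from another worker.
        if let Some(task) = steal() {
            task();
            return true;
        }
        false
    }
}

fn main() {
    let (tx, rx) = channel::<Task>();
    let mut worker = ToyWorker { inbound: rx, deque: VecDeque::new() };

    tx.send(Box::new(|| println!("task ran on the toy worker"))).unwrap();

    // Run until no work is found; the stealing source here is empty.
    while worker.tick(&mut || -> Option<Task> { None }) {}
}
```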

Thread pool initialization

Note: users will normally use the thread pool created by a runtime.

By default, no threads are spawned on creation. Instead, when new futures are spawned, the pool first checks if there are enough active worker threads. If not, a new worker thread is spawned.
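When a pool is constructed directly, the Builder type (listed under Structs below) supplies configuration values such as the target number of worker threads. A brief sketch, assuming the futures 0.1 API:

```rust
extern crate futures;
extern crate tokio_threadpool;

use futures::future::lazy;
use futures::Future;
use tokio_threadpool::Builder;

fn main() {
    // Configure the pool explicitly instead of accepting the defaults.
    let pool = Builder::new()
        .pool_size(4)                   // number of worker threads
        .name_prefix("my-pool-worker-") // thread name prefix, for diagnostics
        .build();

    pool.spawn(lazy(|| {
        println!("running on a configured pool");
        Ok(())
    }));

    pool.shutdown().wait().unwrap();
}
```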

Spawning futures

The spawning behavior depends on whether a future is spawned from within a worker thread or from an external handle.

When spawning a future from outside the thread pool, the current strategy is to randomly pick a worker and submit the task to it. The task is then pushed onto that worker’s mpsc channel.

When spawning a future while on a worker thread, the task is pushed onto the back of the current worker’s deque.
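A brief sketch of the external path, using the Sender handle obtained from ThreadPool::sender; the error handling shown here is only illustrative:

```rust
extern crate futures;
extern crate tokio_threadpool;

use futures::future::lazy;
use futures::Future;
use tokio_threadpool::ThreadPool;

fn main() {
    let pool = ThreadPool::new();

    // External spawn: the task is sent to a randomly chosen worker's
    // mpsc channel. Had this call been made from code already running
    // on a worker thread, the task would go onto that worker's deque.
    pool.sender()
        .spawn(lazy(|| {
            println!("spawned from outside the pool");
            Ok(())
        }))
        .expect("the thread pool has shut down");

    pool.shutdown().wait().unwrap();
}
```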

Blocking annotation strategy

The blocking function is used to annotate a section of code that performs a blocking operation, either by issuing a blocking syscall or by performing a long-running, CPU-bound computation.

The strategy for handling blocking closures is to hand off the worker to a new thread. This implies handing off the deque and mpsc channel. Once this is done, the new thread continues to process the work queue and the original thread is free to block. Once the blocking closure completes, the original thread has no additional work and is inserted into the backup pool, making it available to other workers that encounter a blocking call.
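A minimal sketch of annotating a blocking section, using futures 0.1’s poll_fn to adapt the annotation into a future the pool can drive:

```rust
extern crate futures;
extern crate tokio_threadpool;

use std::thread;
use std::time::Duration;

use futures::future::{lazy, poll_fn};
use futures::Future;
use tokio_threadpool::{blocking, ThreadPool};

fn main() {
    let pool = ThreadPool::new();

    pool.spawn(lazy(|| {
        // `blocking` hands the worker off to another thread so this closure
        // may block without starving the pool; `poll_fn` turns the annotated
        // section into a future the pool can poll.
        poll_fn(|| {
            blocking(|| {
                // Stand-in for a blocking syscall or CPU-heavy computation.
                thread::sleep(Duration::from_millis(100));
                println!("blocking section finished");
            })
            .map_err(|_| panic!("the thread pool shut down"))
        })
    }));

    pool.shutdown().wait().unwrap();
}
```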

Modules

  • park: Thread parking utilities.

Structs

  • BlockingError: Error raised by blocking.
  • Builder: Builds a thread pool with custom configuration values.
  • Sender: Submit futures to the associated thread pool for execution.
  • Shutdown: Future that resolves when the thread pool is shut down.
  • SpawnHandle: Handle returned from ThreadPool::spawn_handle.
  • ThreadPool: Work-stealing based thread pool for executing futures.
  • Worker: Thread worker.
  • WorkerId: Identifies a thread pool worker.

Functions

  • blocking: Enter a blocking section of code.