22.2 Concurrency vs. Parallelism in Rust

While often used interchangeably, concurrency and parallelism are distinct concepts:

  • Concurrency: Dealing with multiple tasks by structuring a program so they can make progress independently, with potentially overlapping execution. It’s primarily about program structure.
  • Parallelism: Executing multiple tasks simultaneously, typically on multiple CPU cores, to achieve a speedup. It’s primarily about execution performance.

A program can be concurrent without being parallel. For instance, a web server on a single-core CPU can concurrently handle multiple clients using task switching, but only one task executes at any given instant. Parallelism requires hardware with multiple processing units.

Rust supports concurrency mainly through two distinct models:

  1. OS Threads (std::thread): These map closely to the native threads provided by the operating system and are scheduled preemptively by the OS. This model is well suited to CPU-bound tasks, where true parallel execution across multiple cores can yield significant performance benefits. It is the focus of this chapter; a minimal sketch follows this list.
  2. Async Tasks (async/.await): These are lightweight tasks scheduled cooperatively by an async runtime such as Tokio or async-std. They are particularly effective for I/O-bound workloads, where many tasks spend most of their time waiting for external events (e.g., network responses, file I/O). Async tasks allow a small number of OS threads to manage a very large number of concurrent operations efficiently. This model will be covered in a later chapter.
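
As a minimal sketch of the OS-thread model, the following program splits a slice in half and sums the halves on two threads using std::thread::scope (available since Rust 1.63); the data and the two-way split are arbitrary choices for illustration:

```rust
use std::thread;

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();

    // Split the data in half; each half will be summed on its own OS thread.
    let (left, right) = data.split_at(data.len() / 2);

    // `thread::scope` lets the spawned threads borrow `left` and `right`
    // and guarantees both threads finish before the scope returns.
    let total = thread::scope(|s| {
        let left_handle = s.spawn(|| left.iter().sum::<u64>());
        let right_handle = s.spawn(|| right.iter().sum::<u64>());

        // `join` blocks until the thread finishes and yields its result.
        left_handle.join().unwrap() + right_handle.join().unwrap()
    });

    println!("total = {total}"); // 500500
}
```

On a multi-core machine the two sums can execute in parallel; on a single core the OS still interleaves them concurrently.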

Additionally, libraries like Rayon build upon OS threads to provide higher-level abstractions specifically for data parallelism, simplifying the task of parallelizing computations over collections.
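
As a brief sketch of that style, the following uses Rayon's parallel iterators to compute a sum of squares across a thread pool; it assumes the rayon crate has been added as a dependency (e.g. rayon = "1" in Cargo.toml):

```rust
// Assumes `rayon = "1"` in Cargo.toml.
use rayon::prelude::*;

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();

    // `par_iter` divides the work across a thread pool sized to the
    // available CPU cores; the map closure runs on elements concurrently.
    let sum_of_squares: u64 = data.par_iter().map(|x| x * x).sum();

    println!("sum of squares = {sum_of_squares}");
}
```

The call reads almost exactly like the sequential `data.iter().map(|x| x * x).sum()`, which is the appeal of Rayon: data parallelism without manual thread management.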