//! Useful synchronization primitives.
//!
//! ## The need for synchronization
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//!
//! Consider the following code, operating on some global static variables:
//!
//! ```
//! static mut A: u32 = 0;
//! static mut B: u32 = 0;
//! static mut C: u32 = 0;
//!
//! fn main() {
//!     unsafe {
//!         A = 3;
//!         B = 4;
//!         A = A + B;
//!         C = B;
//!
//!         println!("{A} {B} {C}");
//!         C = A;
//!     }
//! }
//! ```
//! It appears as if some variables stored in memory are changed, an addition
//! is performed, the result is stored in `A`, and the variable `C` is
//! modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//! As for what happens behind the scenes, when optimizations are enabled the
//! final generated machine code might look very different from the code:
//!
//! - The first store to `C` might be moved before the store to `A` or `B`,
//!   _as if_ we had written `C = 4; A = 3; B = 4`.
//!
//! - Assignment of `A + B` to `A` might be removed, since the sum can be stored
//!   in a temporary location until it gets printed, with the global variable
//!   never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//!
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! mutable variables requires `unsafe` code, assuming we don't use any of the
//! synchronization primitives in this module.
//!
//! [constant folding]: https://en.wikipedia.org/wiki/Constant_folding
//! [concurrency]: https://en.wikipedia.org/wiki/Concurrency_(computer_science)
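//!
//! As a rough sketch of what those primitives buy us, the counters above can
//! be shared without any `unsafe` code by using the atomic types from this
//! module (the `SeqCst` ordering here is simply a conservative choice for
//! illustration, not the only valid one):
//!
//! ```
//! use std::sync::atomic::{AtomicU32, Ordering};
//!
//! static A: AtomicU32 = AtomicU32::new(0);
//! static B: AtomicU32 = AtomicU32::new(0);
//!
//! // Each store/load is a well-defined atomic operation, even if
//! // another thread accesses the same statics concurrently.
//! A.store(3, Ordering::SeqCst);
//! B.store(4, Ordering::SeqCst);
//! A.fetch_add(B.load(Ordering::SeqCst), Ordering::SeqCst);
//! assert_eq!(A.load(Ordering::SeqCst), 7);
//! ```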
//! ## Out-of-order execution
//!
//! Instructions can execute in a different order from the one we define, due to
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads to the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
//!
//!   In single-threaded scenarios, this can cause issues when writing
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e., multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior.
//!
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: crate::sync::atomic::compiler_fence
//! [out-of-order]: https://en.wikipedia.org/wiki/Out-of-order_execution
//! [superscalar]: https://en.wikipedia.org/wiki/Superscalar_processor
//! [memory fences]: crate::sync::atomic::fence
//! [atomic operations]: crate::sync::atomic
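//!
//! For example, a release store paired with an acquire load is enough to
//! publish data from one thread to another. The following is only a sketch:
//! the store to `DATA` can use `Relaxed` because the release/acquire pair on
//! the `READY` flag is what provides the synchronization:
//!
//! ```
//! use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
//! use std::thread;
//!
//! static DATA: AtomicU32 = AtomicU32::new(0);
//! static READY: AtomicBool = AtomicBool::new(false);
//!
//! let producer = thread::spawn(|| {
//!     DATA.store(42, Ordering::Relaxed);
//!     // `Release` makes the store to DATA visible to any thread that
//!     // observes READY == true with an `Acquire` load.
//!     READY.store(true, Ordering::Release);
//! });
//!
//! while !READY.load(Ordering::Acquire) {
//!     std::hint::spin_loop();
//! }
//! assert_eq!(DATA.load(Ordering::Relaxed), 42);
//! producer.join().unwrap();
//! ```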
//! ## Higher-level synchronization objects
//!
//! Most of the low-level synchronization primitives are quite error-prone and
//! inconvenient to use, which is why the standard library also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in the standard library are usually
//! implemented with help from the operating system's kernel, which is
//! able to reschedule the threads while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`Condvar`]: Condition Variable, providing the ability to block
//!   a thread while waiting for an event to occur.
//!
//! - [`mpsc`]: Multi-producer, single-consumer queues, used for
//!   message-based communication. Can provide a lightweight
//!   inter-thread synchronization mechanism, at the cost of some
//!   extra memory.
//!
//! - [`Mutex`]: Mutual Exclusion mechanism, which ensures that at
//!   most one thread at a time is able to access some data.
//!
//! - [`Once`]: Used for thread-safe, one-time initialization of a
//!   global variable.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex.
//!
//! [`Arc`]: crate::sync::Arc
//! [`Barrier`]: crate::sync::Barrier
//! [`Condvar`]: crate::sync::Condvar
//! [`mpsc`]: crate::sync::mpsc
//! [`Mutex`]: crate::sync::Mutex
//! [`Once`]: crate::sync::Once
//! [`RwLock`]: crate::sync::RwLock
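//!
//! As a small illustration of these objects working together, the following
//! sketch shares a counter between threads: [`Arc`] shares ownership of the
//! lock, and [`Mutex`] serializes access to the value inside it.
//!
//! ```
//! use std::sync::{Arc, Mutex};
//! use std::thread;
//!
//! let counter = Arc::new(Mutex::new(0));
//!
//! let handles: Vec<_> = (0..4)
//!     .map(|_| {
//!         // Each thread gets its own handle to the shared mutex.
//!         let counter = Arc::clone(&counter);
//!         thread::spawn(move || *counter.lock().unwrap() += 1)
//!     })
//!     .collect();
//!
//! for handle in handles {
//!     handle.join().unwrap();
//! }
//! // All four increments happened exactly once, with no data race.
//! assert_eq!(*counter.lock().unwrap(), 4);
//! ```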
#![stable(feature = "rust1", since = "1.0.0")]

#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc_crate::sync::{Arc, Weak};
#[stable(feature = "rust1", since = "1.0.0")]
pub use core::sync::atomic;
#[unstable(feature = "exclusive_wrapper", issue = "98407")]
pub use core::sync::Exclusive;

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::barrier::{Barrier, BarrierWaitResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::condvar::{Condvar, WaitTimeoutResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::mutex::{Mutex, MutexGuard};
#[stable(feature = "rust1", since = "1.0.0")]
#[allow(deprecated)]
pub use self::once::{Once, OnceState, ONCE_INIT};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::poison::{LockResult, PoisonError, TryLockError, TryLockResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::rwlock::{RwLock, RwLockReadGuard, RwLockWriteGuard};

#[unstable(feature = "once_cell", issue = "74465")]
pub use self::lazy_lock::LazyLock;
#[unstable(feature = "once_cell", issue = "74465")]
pub use self::once_lock::OnceLock;

pub(crate) use self::remutex::{ReentrantMutex, ReentrantMutexGuard};