// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Useful synchronization primitives.
//!
//! ## The need for synchronization
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//! Consider the following code, operating on some global static variables:
//!
//! ```rust
//! static mut A: u32 = 0;
//! static mut B: u32 = 0;
//! static mut C: u32 = 0;
//!
//! fn main() {
//!     unsafe {
//!         A = 3;
//!         B = 4;
//!         A = A + B;
//!         C = B;
//!
//!         println!("{} {} {}", A, B, C);
//!
//!         C = A;
//!     }
//! }
//! ```
//!
//! It appears as if some variables stored in memory are changed, an addition
//! is performed, the result is stored in `A`, and the variable `C` is
//! modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//! As for what happens behind the scenes, when optimizations are enabled the
//! final generated machine code might look very different from the code:
//!
//! - The first store to `C` might be moved before the store to `A` or `B`,
//!   _as if_ we had written `C = 4; A = 3; B = 4`.
//!
//! - Assignment of `A + B` to `A` might be removed, since the sum can be stored
//!   in a temporary location until it gets printed, with the global variable
//!   never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! variables requires `unsafe` code, assuming we don't use any of the
//! synchronization primitives in this module.
//!
//! [constant folding]: https://en.wikipedia.org/wiki/Constant_folding
//! [concurrency]: https://en.wikipedia.org/wiki/Concurrency_(computer_science)
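//! As a taste of how the primitives in this module remove the need for
//! `unsafe`, here is a sketch of the program above rewritten with atomic
//! globals. [`AtomicUsize`] is one possible choice (the `u32` globals
//! become `usize` here), and `SeqCst` is the strongest, simplest ordering:
//!
//! ```rust
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! // Atomic statics can be read and written without `unsafe`.
//! static A: AtomicUsize = AtomicUsize::new(0);
//! static B: AtomicUsize = AtomicUsize::new(0);
//! static C: AtomicUsize = AtomicUsize::new(0);
//!
//! fn main() {
//!     A.store(3, Ordering::SeqCst);
//!     B.store(4, Ordering::SeqCst);
//!     let sum = A.load(Ordering::SeqCst) + B.load(Ordering::SeqCst);
//!     A.store(sum, Ordering::SeqCst);
//!     C.store(B.load(Ordering::SeqCst), Ordering::SeqCst);
//!
//!     // Prints `7 4 4`, just like the `unsafe` version.
//!     println!("{} {} {}",
//!              A.load(Ordering::SeqCst),
//!              B.load(Ordering::SeqCst),
//!              C.load(Ordering::SeqCst));
//!
//!     C.store(A.load(Ordering::SeqCst), Ordering::SeqCst);
//! }
//! ```
//!
//! The atomics make every access well-defined even when other threads are
//! involved, though as the next section explains, choosing a weaker
//! ordering than `SeqCst` requires some care.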
//! ## Out-of-order execution
//!
//! Instructions can execute in a different order from the one we define, due to
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads to the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
//!
//!   In single-threaded scenarios, this can cause issues when writing
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e. multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior.
//!
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: crate::sync::atomic::compiler_fence
//! [out-of-order]: https://en.wikipedia.org/wiki/Out-of-order_execution
//! [superscalar]: https://en.wikipedia.org/wiki/Superscalar_processor
//! [memory fences]: crate::sync::atomic::fence
//! [atomic operations]: crate::sync::atomic
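//! To make the multiprocessor case concrete, here is a sketch of the classic
//! message-passing pattern: one thread stores a value and then raises a flag
//! with `Release` ordering, while another spins on the flag with `Acquire`
//! ordering. The release/acquire pairing guarantees that once the flag is
//! observed as set, the earlier data store is visible too. The names `DATA`
//! and `READY` are illustrative, not part of any API:
//!
//! ```rust
//! use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
//! use std::thread;
//!
//! static DATA: AtomicUsize = AtomicUsize::new(0);
//! static READY: AtomicBool = AtomicBool::new(false);
//!
//! fn main() {
//!     let writer = thread::spawn(|| {
//!         DATA.store(42, Ordering::Relaxed);
//!         // `Release` makes the store to DATA visible to any thread
//!         // that later observes READY == true with `Acquire`.
//!         READY.store(true, Ordering::Release);
//!     });
//!
//!     let reader = thread::spawn(|| {
//!         // Spin until the writer publishes; fine for a sketch,
//!         // wasteful in real code.
//!         while !READY.load(Ordering::Acquire) {}
//!         assert_eq!(DATA.load(Ordering::Relaxed), 42);
//!     });
//!
//!     writer.join().unwrap();
//!     reader.join().unwrap();
//! }
//! ```
//!
//! With `Relaxed` ordering on the flag instead, the reader could see
//! `READY == true` while still observing the old value of `DATA`.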
//! ## Higher-level synchronization objects
//!
//! Most of the low-level synchronization primitives are quite error-prone and
//! inconvenient to use, which is why the standard library also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in the standard library are usually
//! implemented with help from the operating system's kernel, which is
//! able to reschedule the threads while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`Condvar`]: Condition Variable, providing the ability to block
//!   a thread while waiting for an event to occur.
//!
//! - [`mpsc`]: Multi-producer, single-consumer queues, used for
//!   message-based communication. Can provide a lightweight
//!   inter-thread synchronization mechanism, at the cost of some
//!   extra memory.
//!
//! - [`Mutex`]: Mutual Exclusion mechanism, which ensures that at
//!   most one thread at a time is able to access some data.
//!
//! - [`Once`]: Used for thread-safe, one-time initialization of a
//!   global variable.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex.
//!
//! [`Arc`]: crate::sync::Arc
//! [`Barrier`]: crate::sync::Barrier
//! [`Condvar`]: crate::sync::Condvar
//! [`mpsc`]: crate::sync::mpsc
//! [`Mutex`]: crate::sync::Mutex
//! [`Once`]: crate::sync::Once
//! [`RwLock`]: crate::sync::RwLock
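//! As a taste of how these objects fit together, here is a sketch that
//! shares a counter between threads using [`Arc`] and [`Mutex`]; the
//! thread count of 4 is arbitrary:
//!
//! ```rust
//! use std::sync::{Arc, Mutex};
//! use std::thread;
//!
//! fn main() {
//!     // `Arc` keeps the counter alive until every thread is done with it;
//!     // `Mutex` ensures only one thread mutates it at a time.
//!     let counter = Arc::new(Mutex::new(0));
//!
//!     let handles: Vec<_> = (0..4).map(|_| {
//!         let counter = Arc::clone(&counter);
//!         thread::spawn(move || {
//!             let mut n = counter.lock().unwrap();
//!             *n += 1;
//!         })
//!     }).collect();
//!
//!     for handle in handles {
//!         handle.join().unwrap();
//!     }
//!
//!     assert_eq!(*counter.lock().unwrap(), 4);
//! }
//! ```
//!
//! Swapping the `Mutex` for an [`RwLock`] would allow concurrent readers
//! of the final value, at the cost of a slightly more complex API.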
#![stable(feature = "rust1", since = "1.0.0")]

#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc_crate::sync::{Arc, Weak};
#[stable(feature = "rust1", since = "1.0.0")]
pub use core::sync::atomic;

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::barrier::{Barrier, BarrierWaitResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::condvar::{Condvar, WaitTimeoutResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::mutex::{Mutex, MutexGuard};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::once::{Once, OnceState, ONCE_INIT};
#[stable(feature = "rust1", since = "1.0.0")]
pub use sys_common::poison::{PoisonError, TryLockError, TryLockResult, LockResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::rwlock::{RwLock, RwLockReadGuard, RwLockWriteGuard};