//! A "once initialization" primitive
//!
//! This primitive is meant to be used to run one-time initialization. An
//! example use case would be for initializing an FFI library.
// A "once" is a relatively simple primitive, and it's also typically provided
// by the OS as well (see `pthread_once` or `InitOnceExecuteOnce`). The OS
// primitives, however, tend to have surprising restrictions, such as the Unix
// one not allowing an argument to be passed to the function.
// As a result, we end up implementing it ourselves in the standard library.
// This also gives us the opportunity to optimize the implementation a bit which
// should help the fast path on call sites. Consequently, let's explain how this
// primitive works now!
// So to recap, the guarantees of a Once are that it will call the
// initialization closure at most once, and it will never return until the one
// that's running has finished running. This means that we need some form of
// blocking here while the custom callback is running at the very least.
// Additionally, we add on the restriction of **poisoning**. Whenever an
// initialization closure panics, the Once enters a "poisoned" state which means
// that all future calls will immediately panic as well.
// So to implement this, one might first reach for a `Mutex`, but those cannot
// be put into a `static`. It also gets a lot harder with poisoning to figure
// out when the mutex needs to be deallocated because it's not after the closure
// finishes, but after the first successful closure finishes.
// All in all, this is instead implemented with atomics and lock-free
// operations! Whee! Each `Once` has one word of atomic state, and this state is
// CAS'd on to determine what to do. There are four possible states of a `Once`:
// * Incomplete - no initialization has run yet, and no thread is currently
//                attempting to run initialization.
// * Poisoned - some thread has previously attempted to initialize the Once, but
//              it panicked, so the Once is now poisoned. There are no other
//              threads currently accessing this Once.
// * Running - some thread is currently attempting to run initialization. It may
//             succeed, so all future threads need to wait for it to finish.
//             Note that this state is accompanied with a payload, described
//             below.
// * Complete - initialization has completed and all future calls should finish
//              immediately.
// With 4 states we need 2 bits to encode this, and we use the remaining bits
// in the word we have allocated as a queue of threads waiting for the thread
// responsible for entering the RUNNING state. This queue is just a linked list
// of Waiter nodes which is monotonically increasing in size. Each node is
// allocated on the stack, and whenever the running closure finishes it will
// consume the entire queue and notify all waiters they should try again.
//
// You'll find a few more details in the implementation, but that's the gist of
// it!
// Atomic orderings:
// When running `Once` we deal with multiple atomics:
// `Once.state_and_queue` and an unknown number of `Waiter.signaled`.
// * `state_and_queue` is used (1) as a state flag, (2) for synchronizing the
//   result of the `Once`, and (3) for synchronizing `Waiter` nodes.
//   - At the end of the `call_inner` function we have to make sure the result
//     of the `Once` is acquired. So every load which can be the only one to
//     load COMPLETED must have at least Acquire ordering, which means all
//     three of the load operations below.
//   - `WaiterQueue::Drop` is the only place that may store COMPLETED, and
//     must do so with Release ordering to make the result available.
//   - `wait` inserts `Waiter` nodes as a pointer in `state_and_queue`, and
//     needs to make the nodes available with Release ordering. The load in
//     its `compare_and_swap` can be Relaxed because it only has to compare
//     the atomic, not to read other data.
//   - `WaiterQueue::Drop` must see the `Waiter` nodes, so it must load
//     `state_and_queue` with Acquire ordering.
//   - There is just one store where `state_and_queue` is used only as a
//     state flag, without having to synchronize data: switching the state
//     from INCOMPLETE to RUNNING in `call_inner`. This store can be Relaxed,
//     but the read has to be Acquire because of the requirements mentioned
//     above.
// * `Waiter.signaled` is both used as a flag, and to protect a field with
//   interior mutability in `Waiter`. `Waiter.thread` is changed in
//   `WaiterQueue::Drop` which then sets `signaled` with Release ordering.
//   After `wait` loads `signaled` with Acquire and sees it is true, it needs to
//   see the changes to drop the `Waiter` struct correctly.
// * There is one place where the two atomics `Once.state_and_queue` and
//   `Waiter.signaled` come together, and might be reordered by the compiler or
//   processor. Because both use Acquire ordering such a reordering is not
//   allowed, so no need for SeqCst.
#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

use crate::cell::Cell;
use crate::fmt;
use crate::marker;
use crate::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use crate::thread::{self, Thread};
/// A synchronization primitive which can be used to run a one-time global
/// initialization. Useful for one-time initialization for FFI or related
/// functionality. This type can only be constructed with [`Once::new()`].
///
/// # Examples
///
/// ```
/// use std::sync::Once;
///
/// static START: Once = Once::new();
///
/// START.call_once(|| {
///     // run initialization here
/// });
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Once {
    // `state_and_queue` is actually a pointer to a `Waiter` with extra state
    // bits, so we add the `PhantomData` appropriately.
    state_and_queue: AtomicUsize,
    _marker: marker::PhantomData<*const Waiter>,
}
// The `PhantomData` of a raw pointer removes these two auto traits, but we
// enforce both below in the implementation so this should be safe to add.
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for Once {}
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Send for Once {}
/// State yielded to [`Once::call_once_force()`]’s closure parameter. The state
/// can be used to query the poison status of the [`Once`].
#[unstable(feature = "once_poison", issue = "33577")]
#[derive(Debug)]
pub struct OnceState {
    poisoned: bool,
    set_state_on_drop_to: Cell<usize>,
}
/// Initialization value for static [`Once`] values.
///
/// # Examples
///
/// ```
/// use std::sync::{Once, ONCE_INIT};
///
/// static START: Once = ONCE_INIT;
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(
    since = "1.38.0",
    reason = "the `new` function is now preferred",
    suggestion = "Once::new()"
)]
pub const ONCE_INIT: Once = Once::new();
// Four states that a Once can be in, encoded into the lower bits of
// `state_and_queue` in the Once structure.
const INCOMPLETE: usize = 0x0;
const POISONED: usize = 0x1;
const RUNNING: usize = 0x2;
const COMPLETE: usize = 0x3;

// Mask to learn about the state. All other bits are the queue of waiters if
// this is in the RUNNING state.
const STATE_MASK: usize = 0x3;
// Representation of a node in the linked list of waiters, used while in the
// RUNNING state.
// Note: `Waiter` can't hold a mutable pointer to the next thread, because then
// `wait` would both hand out a mutable reference to its `Waiter` node, and keep
// a shared reference to check `signaled`. Instead we hold shared references and
// use interior mutability.
#[repr(align(4))] // Ensure the two lower bits are free to use as state bits.
struct Waiter {
    thread: Cell<Option<Thread>>,
    signaled: AtomicBool,
    next: *const Waiter,
}
// Head of a linked list of waiters.
// Every node is a struct on the stack of a waiting thread.
// Will wake up the waiters when it gets dropped, i.e. also on panic.
struct WaiterQueue<'a> {
    state_and_queue: &'a AtomicUsize,
    set_state_on_drop_to: usize,
}
impl Once {
    /// Creates a new `Once` value.
    #[inline]
    #[stable(feature = "once_new", since = "1.2.0")]
    #[rustc_const_stable(feature = "const_once_new", since = "1.32.0")]
    pub const fn new() -> Once {
        Once { state_and_queue: AtomicUsize::new(INCOMPLETE), _marker: marker::PhantomData }
    }
    /// Performs an initialization routine once and only once. The given closure
    /// will be executed if this is the first time `call_once` has been called,
    /// and otherwise the routine will *not* be invoked.
    ///
    /// This method will block the calling thread if another initialization
    /// routine is currently running.
    ///
    /// When this function returns, it is guaranteed that some initialization
    /// has run and completed (it may not be the closure specified). It is also
    /// guaranteed that any memory writes performed by the executed closure can
    /// be reliably observed by other threads at this point (there is a
    /// happens-before relation between the closure and code executing after the
    /// closure).
    ///
    /// If the given closure recursively invokes `call_once` on the same [`Once`]
    /// instance, the exact behavior is not specified; allowed outcomes are
    /// a panic or a deadlock.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static mut VAL: usize = 0;
    /// static INIT: Once = Once::new();
    ///
    /// // Accessing a `static mut` is unsafe much of the time, but if we do so
    /// // in a synchronized fashion (e.g., write once or read all) then we're
    /// // good to go!
    /// //
    /// // This function will only call `expensive_computation` once, and will
    /// // otherwise always return the value returned from the first invocation.
    /// fn get_cached_val() -> usize {
    ///     unsafe {
    ///         INIT.call_once(|| {
    ///             VAL = expensive_computation();
    ///         });
    ///         VAL
    ///     }
    /// }
    ///
    /// fn expensive_computation() -> usize {
    ///     // ...
    /// # 2
    /// }
    /// ```
    ///
    /// # Panics
    ///
    /// The closure `f` will only be executed once if this is called
    /// concurrently amongst many threads. If that closure panics, however, then
    /// it will *poison* this [`Once`] instance, causing all future invocations of
    /// `call_once` to also panic.
    ///
    /// This is similar to [poisoning with mutexes][poison].
    ///
    /// [poison]: struct.Mutex.html#poisoning
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn call_once<F>(&self, f: F)
    where
        F: FnOnce(),
    {
        // Fast path check
        if self.is_completed() {
            return;
        }

        let mut f = Some(f);
        self.call_inner(false, &mut |_| f.take().unwrap()());
    }
    /// Performs the same function as [`call_once()`] except ignores poisoning.
    ///
    /// Unlike [`call_once()`], if this [`Once`] has been poisoned (i.e., a previous
    /// call to [`call_once()`] or [`call_once_force()`] caused a panic), calling
    /// [`call_once_force()`] will still invoke the closure `f` and will _not_
    /// result in an immediate panic. If `f` panics, the [`Once`] will remain
    /// in a poison state. If `f` does _not_ panic, the [`Once`] will no
    /// longer be in a poison state and all future calls to [`call_once()`] or
    /// [`call_once_force()`] will be no-ops.
    ///
    /// The closure `f` is yielded a [`OnceState`] structure which can be used
    /// to query the poison status of the [`Once`].
    ///
    /// [`call_once()`]: Once::call_once
    /// [`call_once_force()`]: Once::call_once_force
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(once_poison)]
    ///
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// // poison the once
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// // poisoning propagates
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| {});
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// // call_once_force will still run and reset the poisoned state
    /// INIT.call_once_force(|state| {
    ///     assert!(state.poisoned());
    /// });
    ///
    /// // once any success happens, we stop propagating the poison
    /// INIT.call_once(|| {});
    /// ```
    #[unstable(feature = "once_poison", issue = "33577")]
    pub fn call_once_force<F>(&self, f: F)
    where
        F: FnOnce(&OnceState),
    {
        // Fast path check
        if self.is_completed() {
            return;
        }

        let mut f = Some(f);
        self.call_inner(true, &mut |p| f.take().unwrap()(p));
    }
    /// Returns `true` if some [`call_once()`] call has completed
    /// successfully. Specifically, `is_completed` will return false in
    /// the following situations:
    ///   * [`call_once()`] was not called at all,
    ///   * [`call_once()`] was called, but has not yet completed,
    ///   * the [`Once`] instance is poisoned.
    ///
    /// This function returning `false` does not mean that [`Once`] has not been
    /// executed. For example, it may have been executed in the time between
    /// when `is_completed` starts executing and when it returns, in which case
    /// the `false` return value would be stale (but still permissible).
    ///
    /// [`call_once()`]: Once::call_once
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// assert_eq!(INIT.is_completed(), false);
    /// INIT.call_once(|| {
    ///     assert_eq!(INIT.is_completed(), false);
    /// });
    /// assert_eq!(INIT.is_completed(), true);
    /// ```
    ///
    /// ```
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// assert_eq!(INIT.is_completed(), false);
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    /// assert_eq!(INIT.is_completed(), false);
    /// ```
    #[stable(feature = "once_is_completed", since = "1.43.0")]
    #[inline]
    pub fn is_completed(&self) -> bool {
        // An `Acquire` load is enough because that makes all the initialization
        // operations visible to us, and, this being a fast path, weaker
        // ordering helps with performance. This `Acquire` synchronizes with
        // `Release` operations on the slow path.
        self.state_and_queue.load(Ordering::Acquire) == COMPLETE
    }
    // This is a non-generic function to reduce the monomorphization cost of
    // using `call_once` (this isn't exactly a trivial or small implementation).
    //
    // Additionally, this is tagged with `#[cold]` as it should indeed be cold
    // and it helps let LLVM know that calls to this function should be off the
    // fast path. Essentially, this should help generate more straight line code
    // as well!
    //
    // Finally, this takes an `FnMut` instead of a `FnOnce` because there's
    // currently no way to take an `FnOnce` and call it via virtual dispatch
    // without some allocation overhead.
    #[cold]
    fn call_inner(&self, ignore_poisoning: bool, init: &mut dyn FnMut(&OnceState)) {
        let mut state_and_queue = self.state_and_queue.load(Ordering::Acquire);
        loop {
            match state_and_queue {
                COMPLETE => break,
                POISONED if !ignore_poisoning => {
                    // Panic to propagate the poison.
                    panic!("Once instance has previously been poisoned");
                }
                POISONED | INCOMPLETE => {
                    // Try to register this thread as the one RUNNING.
                    let old = self.state_and_queue.compare_and_swap(
                        state_and_queue,
                        RUNNING,
                        Ordering::Acquire,
                    );
                    if old != state_and_queue {
                        state_and_queue = old;
                        continue;
                    }

                    // `waiter_queue` will manage other waiting threads, and
                    // wake them up on drop.
                    let mut waiter_queue = WaiterQueue {
                        state_and_queue: &self.state_and_queue,
                        set_state_on_drop_to: POISONED,
                    };
                    // Run the initialization function, letting it know if we're
                    // poisoned or not.
                    let init_state = OnceState {
                        poisoned: state_and_queue == POISONED,
                        set_state_on_drop_to: Cell::new(COMPLETE),
                    };
                    init(&init_state);
                    waiter_queue.set_state_on_drop_to = init_state.set_state_on_drop_to.get();
                    break;
                }
                _ => {
                    // All other values must be RUNNING with possibly a
                    // pointer to the waiter queue in the more significant bits.
                    assert!(state_and_queue & STATE_MASK == RUNNING);
                    wait(&self.state_and_queue, state_and_queue);
                    state_and_queue = self.state_and_queue.load(Ordering::Acquire);
                }
            }
        }
    }
}
fn wait(state_and_queue: &AtomicUsize, mut current_state: usize) {
    // Note: the following code was carefully written to avoid creating a
    // mutable reference to `node` that gets aliased.
    loop {
        // Don't queue this thread if the status is no longer running,
        // otherwise we will not be woken up.
        if current_state & STATE_MASK != RUNNING {
            return;
        }

        // Create the node for our current thread.
        let node = Waiter {
            thread: Cell::new(Some(thread::current())),
            signaled: AtomicBool::new(false),
            next: (current_state & !STATE_MASK) as *const Waiter,
        };
        let me = &node as *const Waiter as usize;

        // Try to slide in the node at the head of the linked list, making sure
        // that another thread didn't just replace the head of the linked list.
        let old = state_and_queue.compare_and_swap(current_state, me | RUNNING, Ordering::Release);
        if old != current_state {
            current_state = old;
            continue;
        }

        // We have enqueued ourselves, now lets wait.
        // It is important not to return before being signaled, otherwise we
        // would drop our `Waiter` node and leave a hole in the linked list
        // (and a dangling reference). Guard against spurious wakeups by
        // reparking ourselves until we are signaled.
        while !node.signaled.load(Ordering::Acquire) {
            // If the managing thread happens to signal and unpark us before we
            // can park ourselves, the result could be this thread never gets
            // unparked. Luckily `park` comes with the guarantee that if it got
            // an `unpark` just before on an unparked thread it does not park.
            thread::park();
        }
        break;
    }
}
#[stable(feature = "std_debug", since = "1.16.0")]
impl fmt::Debug for Once {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.pad("Once { .. }")
    }
}
impl Drop for WaiterQueue<'_> {
    fn drop(&mut self) {
        // Swap out our state with however we finished.
        let state_and_queue =
            self.state_and_queue.swap(self.set_state_on_drop_to, Ordering::AcqRel);

        // We should only ever see an old state which was RUNNING.
        assert_eq!(state_and_queue & STATE_MASK, RUNNING);

        // Walk the entire linked list of waiters and wake them up (in lifo
        // order, last to register is first to wake up).
        unsafe {
            // Right after setting `node.signaled = true` the other thread may
            // free `node` if there happens to be a spurious wakeup.
            // So we have to take out the `thread` field and copy the pointer to
            // `next` first.
            let mut queue = (state_and_queue & !STATE_MASK) as *const Waiter;
            while !queue.is_null() {
                let next = (*queue).next;
                let thread = (*queue).thread.take().unwrap();
                (*queue).signaled.store(true, Ordering::Release);
                // ^- FIXME (maybe): This is another case of issue #55005
                // `store()` has a potentially dangling ref to `signaled`.
                thread.unpark();

                queue = next;
            }
        }
    }
}
impl OnceState {
    /// Returns `true` if the associated [`Once`] was poisoned prior to the
    /// invocation of the closure passed to [`Once::call_once_force()`].
    ///
    /// # Examples
    ///
    /// A poisoned [`Once`]:
    ///
    /// ```
    /// #![feature(once_poison)]
    ///
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// // poison the once
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// INIT.call_once_force(|state| {
    ///     assert!(state.poisoned());
    /// });
    /// ```
    ///
    /// An unpoisoned [`Once`]:
    ///
    /// ```
    /// #![feature(once_poison)]
    ///
    /// use std::sync::Once;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// INIT.call_once_force(|state| {
    ///     assert!(!state.poisoned());
    /// });
    /// ```
    #[unstable(feature = "once_poison", issue = "33577")]
    pub fn poisoned(&self) -> bool {
        self.poisoned
    }

    /// Poison the associated [`Once`] without explicitly panicking.
    // NOTE: This is currently only exposed for the `lazy` module
    pub(crate) fn poison(&self) {
        self.set_state_on_drop_to.set(POISONED);
    }
}