//! ## The threading model
//!
//! An executing Rust program consists of a collection of native OS threads,
//! each with its own stack and local state. Threads can be named, and
//! provide some built-in support for low-level synchronization.
//!
//! Communication between threads can be done through
//! [channels], Rust's message-passing types, along with [other forms of thread
//! synchronization](../../std/sync/index.html) and shared-memory data
//! structures. In particular, types that are guaranteed to be
//! thread-safe are easily shared between threads using the
//! atomically-reference-counted container, [`Arc`].
//!
//! Fatal logic errors in Rust cause *thread panic*, during which
//! a thread will unwind the stack, running destructors and freeing
//! owned resources. While not meant as a 'try/catch' mechanism, panics
//! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered
//! from, or alternatively be resumed with
//! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic
//! is not caught, the thread will exit, but the panic may optionally be
//! detected from a different thread with [`join`]. If the main thread panics
//! without the panic being caught, the application will exit with a
//! non-zero exit code.
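//!
//! For example, a minimal sketch of the mechanism described above, using only
//! the stable `std::panic` APIs:
//!
//! ```
//! use std::panic;
//!
//! let result = panic::catch_unwind(|| {
//!     panic!("oops");
//! });
//! // The panic was caught instead of aborting the program.
//! assert!(result.is_err());
//! ```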
//!
//! When the main thread of a Rust program terminates, the entire program shuts
//! down, even if other threads are still running. However, this module provides
//! convenient facilities for automatically waiting for the termination of a
//! thread (i.e., join).
//!
//! ## Spawning a thread
//!
//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function:
//!
//! ```rust
//! use std::thread;
//!
//! thread::spawn(move || {
//!     // some work here
//! });
//! ```
//!
//! In this example, the spawned thread is "detached," which means that there is
//! no way for the program to learn when the spawned thread completes or otherwise
//! terminates.
//!
//! To learn when a thread completes, it is necessary to capture the [`JoinHandle`]
//! object that is returned by the call to [`spawn`], which provides
//! a `join` method that allows the caller to wait for the completion of the
//! spawned thread:
//!
//! ```rust
//! use std::thread;
//!
//! let thread_join_handle = thread::spawn(move || {
//!     // some work here
//! });
//! // some work here
//! let res = thread_join_handle.join();
//! ```
//!
//! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final
//! value produced by the spawned thread, or [`Err`] of the value given to
//! a call to [`panic!`] if the thread panicked.
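//!
//! For example, a sketch of the [`Err`] case: the value passed to [`panic!`]
//! comes back as the boxed panic payload.
//!
//! ```
//! use std::thread;
//!
//! let handle = thread::spawn(|| panic!("oops"));
//! let payload = handle.join().unwrap_err();
//! // `panic!("oops")` panics with a `&'static str` payload.
//! assert_eq!(*payload.downcast::<&str>().unwrap(), "oops");
//! ```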
//!
//! Note that there is no parent/child relationship between a thread that spawns a
//! new thread and the thread being spawned. In particular, the spawned thread may or
//! may not outlive the spawning thread, unless the spawning thread is the main thread.
//!
//! ## Configuring threads
//!
//! A new thread can be configured before it is spawned via the [`Builder`] type,
//! which currently allows you to set the name and stack size for the thread:
//!
//! ```rust
//! # #![allow(unused_must_use)]
//! use std::thread;
//!
//! thread::Builder::new().name("thread1".to_string()).spawn(move || {
//!     println!("Hello, world!");
//! });
//! ```
//!
//! ## The `Thread` type
//!
//! Threads are represented via the [`Thread`] type, which you can get in one of
//! two ways:
//!
//! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
//!   function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`].
//! * By requesting the current thread, using the [`thread::current`] function.
//!
//! The [`thread::current`] function is available even for threads not spawned
//! by the APIs of this module.
//!
//! ## Thread-local storage
//!
//! This module also provides an implementation of thread-local storage for Rust
//! programs. Thread-local storage is a method of storing data into a global
//! variable that each thread in the program will have its own copy of.
//! Threads do not share this data, so accesses do not need to be synchronized.
//!
//! A thread-local key owns the value it contains and will destroy the value when the
//! thread exits. It is created with the [`thread_local!`] macro and can contain any
//! value that is `'static` (no borrowed pointers). It provides an accessor function,
//! [`with`], that yields a shared reference to the value to the specified
//! closure. Thread-local keys allow only shared access to values, as there would be no
//! way to guarantee uniqueness if mutable borrows were allowed. Most values
//! will want to make use of some form of **interior mutability** through the
//! [`Cell`] or [`RefCell`] types.
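//!
//! For example, a sketch combining [`thread_local!`] with a [`Cell`] for
//! per-thread interior mutability:
//!
//! ```
//! use std::cell::Cell;
//! use std::thread;
//!
//! thread_local!(static COUNTER: Cell<u32> = Cell::new(0));
//!
//! COUNTER.with(|c| c.set(c.get() + 1));
//!
//! thread::spawn(|| {
//!     // The spawned thread gets its own copy, still at the initial value.
//!     COUNTER.with(|c| assert_eq!(c.get(), 0));
//! }).join().unwrap();
//!
//! // The main thread's copy was incremented independently.
//! COUNTER.with(|c| assert_eq!(c.get(), 1));
//! ```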
//!
//! ## Naming threads
//!
//! Threads are able to have associated names for identification purposes. By default, spawned
//! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass
//! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the
//! thread, use [`Thread::name`]. A couple of examples of where the name of a thread gets used:
//!
//! * If a panic occurs in a named thread, the thread name will be printed in the panic message.
//! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` on
//!   unix-like platforms).
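//!
//! For example, a sketch of both APIs mentioned above (the name `"worker"` is
//! an arbitrary choice for illustration):
//!
//! ```
//! use std::thread;
//!
//! thread::Builder::new()
//!     .name("worker".to_string())
//!     .spawn(|| {
//!         // The name set on the builder is visible from inside the thread.
//!         assert_eq!(thread::current().name(), Some("worker"));
//!     })
//!     .unwrap()
//!     .join()
//!     .unwrap();
//! ```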
//!
//! ## Stack size
//!
//! The default stack size for spawned threads is 2 MiB, though this particular stack size is
//! subject to change in the future. There are two ways to manually specify the stack size for
//! spawned threads:
//!
//! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`].
//! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack
//!   size (in bytes). Note that setting [`Builder::stack_size`] will override this.
//!
//! Note that the stack size of the main thread is *not* determined by Rust.
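//!
//! For instance, a sketch of the [`Builder::stack_size`] route (the 4 MiB
//! figure is an arbitrary example):
//!
//! ```
//! use std::thread;
//!
//! thread::Builder::new()
//!     .stack_size(4 * 1024 * 1024) // request at least 4 MiB of stack
//!     .spawn(|| {
//!         // thread code that needs a deep stack
//!     })
//!     .unwrap()
//!     .join()
//!     .unwrap();
//! ```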
//!
//! [channels]: crate::sync::mpsc
//! [`join`]: JoinHandle::join
//! [`Result`]: crate::result::Result
//! [`Ok`]: crate::result::Result::Ok
//! [`Err`]: crate::result::Result::Err
//! [`thread::current`]: current
//! [`thread::Result`]: Result
//! [`unpark`]: Thread::unpark
//! [`thread::park_timeout`]: park_timeout
//! [`Cell`]: crate::cell::Cell
//! [`RefCell`]: crate::cell::RefCell
//! [`with`]: LocalKey::with
//! [`thread_local!`]: crate::thread_local

#![stable(feature = "rust1", since = "1.0.0")]
#![deny(unsafe_op_in_unsafe_fn)]

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

use crate::cell::UnsafeCell;
use crate::ffi::{CStr, CString};
use crate::io;
use crate::marker::PhantomData;
use crate::mem;
use crate::num::NonZeroU64;
use crate::num::NonZeroUsize;
use crate::panic;
use crate::panicking;
use crate::ptr::addr_of_mut;
use crate::sync::Arc;
use crate::sys::thread as imp;
use crate::sys_common::mutex;
use crate::sys_common::thread;
use crate::sys_common::thread_info;
use crate::sys_common::thread_parker::Parker;
use crate::sys_common::{AsInner, IntoInner};
use crate::time::Duration;

////////////////////////////////////////////////////////////////////////////////
// Thread-local storage
////////////////////////////////////////////////////////////////////////////////

#[macro_use]
mod local;

#[stable(feature = "scoped_threads", since = "1.63.0")]
mod scoped;

#[stable(feature = "scoped_threads", since = "1.63.0")]
pub use scoped::{scope, Scope, ScopedJoinHandle};

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::local::{AccessError, LocalKey};

// The types used by the thread_local! macro to access TLS keys. Note that there
// are two types, the "OS" type and the "fast" type. The OS thread local key
// type is accessed via platform-specific API calls and is slow, while the fast
// key type is accessed via code generated via LLVM, where TLS keys are set up
// by the elf linker. Note that the OS TLS type is always available: on macOS
// the standard library is compiled with support for older platform versions
// where fast TLS was not available; end-user code is compiled with fast TLS
// where available, but both are needed.

#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[cfg(target_thread_local)]
#[doc(hidden)]
pub use self::local::fast::Key as __FastLocalKeyInner;
#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[doc(hidden)]
pub use self::local::os::Key as __OsLocalKeyInner;
#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[cfg(all(target_family = "wasm", not(target_feature = "atomics")))]
#[doc(hidden)]
pub use self::local::statik::Key as __StaticLocalKeyInner;

////////////////////////////////////////////////////////////////////////////////
// Builder
////////////////////////////////////////////////////////////////////////////////

/// Thread factory, which can be used in order to configure the properties of
/// a new thread.
///
/// Methods can be chained on it in order to configure it.
///
/// The two configurations available are:
///
/// - [`name`]: specifies an [associated name for the thread][naming-threads]
/// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size]
///
/// The [`spawn`] method will take ownership of the builder and return an
/// [`io::Result`] containing the thread handle created with the given configuration.
///
/// The [`thread::spawn`] free function uses a `Builder` with default
/// configuration and [`unwrap`]s its return value.
///
/// You may want to use [`spawn`] instead of [`thread::spawn`] when you want
/// to recover from a failure to launch a thread: the free function will
/// panic where the `Builder` method will return an [`io::Result`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
///
/// [`stack_size`]: Builder::stack_size
/// [`name`]: Builder::name
/// [`spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
/// [`io::Result`]: crate::io::Result
/// [`unwrap`]: crate::result::Result::unwrap
/// [naming-threads]: ./index.html#naming-threads
/// [stack-size]: ./index.html#stack-size
#[must_use = "must eventually spawn the thread"]
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Debug)]
pub struct Builder {
    // A name for the thread-to-be, for identification in panic messages
    name: Option<String>,
    // The size of the stack for the spawned thread in bytes
    stack_size: Option<usize>,
}

impl Builder {
    /// Generates the base configuration for spawning a thread, from which
    /// configuration methods can be chained.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into())
    ///     .stack_size(32 * 1024);
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn new() -> Builder {
        Builder { name: None, stack_size: None }
    }

    /// Names the thread-to-be. Currently the name is used for identification
    /// only in panic messages.
    ///
    /// The name must not contain null bytes (`\0`).
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(mut self, name: String) -> Builder {
        self.name = Some(name);
        self
    }

    /// Sets the size of the stack (in bytes) for the new thread.
    ///
    /// The actual stack size may be greater than this value if
    /// the platform specifies a minimal stack size.
    ///
    /// For more information about the stack size for threads, see
    /// [this module-level documentation][stack-size].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new().stack_size(32 * 1024);
    /// ```
    ///
    /// [stack-size]: ./index.html#stack-size
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn stack_size(mut self, size: usize) -> Builder {
        self.stack_size = Some(size);
        self
    }

    /// Spawns a new thread by taking ownership of the `Builder`, and returns an
    /// [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// For a more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// [`io::Result`]: crate::io::Result
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'static,
        T: Send + 'static,
    {
        unsafe { self.spawn_unchecked(f) }
    }

    /// Spawns a new thread without any lifetime restrictions by taking ownership
    /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`],
    /// except for the relaxed lifetime bounds, which render it unsafe.
    /// For a more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Safety
    ///
    /// The caller has to ensure that the spawned thread does not outlive any
    /// references in the supplied thread closure and its return type.
    /// This can be guaranteed in two ways:
    ///
    /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced
    ///   data is dropped
    /// - use only types with `'static` lifetime bounds, i.e., those with no or only
    ///   `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`]
    ///   and [`thread::spawn`][`spawn`] enforce this property statically)
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(thread_spawn_unchecked)]
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let x = 1;
    /// let thread_x = &x;
    ///
    /// let handler = unsafe {
    ///     builder.spawn_unchecked(move || {
    ///         println!("x = {}", *thread_x);
    ///     }).unwrap()
    /// };
    ///
    /// // caller has to ensure `join()` is called, otherwise
    /// // it is possible to access freed memory if `x` gets
    /// // dropped before the thread closure is executed!
    /// handler.join().unwrap();
    /// ```
    ///
    /// [`io::Result`]: crate::io::Result
    #[unstable(feature = "thread_spawn_unchecked", issue = "55132")]
    pub unsafe fn spawn_unchecked<'a, F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'a,
        T: Send + 'a,
    {
        Ok(JoinHandle(unsafe { self.spawn_unchecked_(f, None) }?))
    }

    unsafe fn spawn_unchecked_<'a, 'scope, F, T>(
        self,
        f: F,
        scope_data: Option<Arc<scoped::ScopeData>>,
    ) -> io::Result<JoinInner<'scope, T>>
    where
        F: FnOnce() -> T,
        F: Send + 'a,
        T: Send + 'a,
        'scope: 'a,
    {
        let Builder { name, stack_size } = self;

        let stack_size = stack_size.unwrap_or_else(thread::min_stack);

        let my_thread = Thread::new(name.map(|name| {
            CString::new(name).expect("thread name may not contain interior null bytes")
        }));
        let their_thread = my_thread.clone();

        let my_packet: Arc<Packet<'scope, T>> = Arc::new(Packet {
            scope: scope_data,
            result: UnsafeCell::new(None),
            _marker: PhantomData,
        });
        let their_packet = my_packet.clone();

        let output_capture = crate::io::set_output_capture(None);
        crate::io::set_output_capture(output_capture.clone());

        let main = move || {
            if let Some(name) = their_thread.cname() {
                imp::Thread::set_name(name);
            }

            crate::io::set_output_capture(output_capture);

            // SAFETY: the stack guard passed is the one for the current thread.
            // This means the current thread's stack and the new thread's stack
            // are properly set and protected from each other.
            thread_info::set(unsafe { imp::guard::current() }, their_thread);
            let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
                crate::sys_common::backtrace::__rust_begin_short_backtrace(f)
            }));
            // SAFETY: `their_packet` has been built just above and moved by the
            // closure (it is an Arc<...>) and `my_packet` will be stored in the
            // same `JoinInner` as this closure meaning the mutation will be
            // safe (not modify it and affect a value far away).
            unsafe { *their_packet.result.get() = Some(try_result) };
        };

        if let Some(scope_data) = &my_packet.scope {
            scope_data.increment_num_running_threads();
        }

        Ok(JoinInner {
            // SAFETY:
            //
            // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed
            // through FFI or otherwise used with low-level threading primitives that have no
            // notion of or way to enforce lifetimes.
            //
            // As mentioned in the `Safety` section of this function's documentation, the caller of
            // this function needs to guarantee that the passed-in lifetime is sufficiently long
            // for the lifetime of the thread.
            //
            // Similarly, the `sys` implementation must guarantee that no references to the closure
            // exist after the thread has terminated, which is signaled by `Thread::join`
            // returning.
            native: unsafe {
                imp::Thread::new(
                    stack_size,
                    mem::transmute::<Box<dyn FnOnce() + 'a>, Box<dyn FnOnce() + 'static>>(
                        Box::new(main),
                    ),
                )?
            },
            thread: my_thread,
            packet: my_packet,
        })
    }
}

////////////////////////////////////////////////////////////////////////////////
// Free functions
////////////////////////////////////////////////////////////////////////////////

/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle provides a [`join`] method that can be used to join the spawned
/// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing
/// the argument given to [`panic!`].
///
/// If the join handle is dropped, the spawned thread will implicitly be *detached*.
/// In this case, the spawned thread may no longer be joined.
/// (It is the responsibility of the program to either eventually join threads it
/// creates or detach them; otherwise, a resource leak will result.)
///
/// This call will create a thread using default parameters of [`Builder`]; if you
/// want to specify the stack size or the name of the thread, use this API
/// instead.
///
/// As you can see in the signature of `spawn` there are two constraints on
/// both the closure given to `spawn` and its return value. Let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can outlive the lifetime they have been created in.
///
///   Indeed if the thread, and by extension its return value, can outlive their
///   caller, we need to make sure that they will be valid afterwards, and since
///   we *can't* know when it will return we need to have them valid as long as
///   possible, that is until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that it is safe to be
///   passed from thread to thread. [`Sync`] expresses that it is safe to have a
///   reference be passed from thread to thread.
///
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
///
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`], here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{value}");
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
///
/// A thread can also return a value through its [`JoinHandle`]; you can use
/// this to make asynchronous computations (futures might be more appropriate
/// in some cases).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{result}");
/// ```
///
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}

/// Gets a handle to the thread that invokes it.
///
/// # Examples
///
/// Getting a handle to the current thread with `thread::current()`:
///
/// ```
/// use std::thread;
///
/// let handler = thread::Builder::new()
///     .name("named thread".into())
///     .spawn(|| {
///         let handle = thread::current();
///         assert_eq!(handle.name(), Some("named thread"));
///     })
///     .unwrap();
///
/// handler.join().unwrap();
/// ```
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn current() -> Thread {
    thread_info::current_thread().expect(
        "use of std::thread::current() is not possible \
         after the thread's local data has been destroyed",
    )
}

/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This calls the underlying OS scheduler's yield primitive, signaling
/// that the calling thread is willing to give up its remaining timeslice
/// so that the OS may schedule other threads on the CPU.
///
/// A drawback of yielding in a loop is that if the OS does not have any
/// other ready threads to run on the current CPU, the thread will effectively
/// busy-wait, which wastes CPU time and energy.
///
/// Therefore, when waiting for events of interest, a programmer's first
/// choice should be to use synchronization devices such as [`channel`]s,
/// [`Condvar`]s, [`Mutex`]es or [`join`] since these primitives are
/// implemented in a blocking manner, giving up the CPU until the event
/// of interest has occurred, which avoids repeated yielding.
///
/// `yield_now` should thus be used only rarely, mostly in situations where
/// repeated polling is required because there is no other suitable way to
/// learn when an event of interest has occurred.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
///
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Condvar`]: crate::sync::Condvar
/// [`Mutex`]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::Thread::yield_now()
}

/// Determines whether the current thread is unwinding because of panic.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when the `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
///
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     print!("a: ");
///     let a = SomeStruct;
/// }
///
/// {
///     print!("b: ");
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
///
/// [Mutex]: crate::sync::Mutex
#[inline]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}

/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
///
/// // Let's sleep for 2 seconds:
/// thread::sleep_ms(2000);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::sleep`")]
pub fn sleep_ms(ms: u32) {
    sleep(Duration::from_millis(ms as u64))
}

/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
/// Platforms which do not support nanosecond precision for sleeping will
/// have `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// Currently, specifying a zero duration on Unix platforms returns immediately
/// without invoking the underlying [`nanosleep`] syscall, whereas on Windows
/// platforms the underlying [`Sleep`] syscall is always invoked.
/// If the intention is to yield the current time-slice you may want to use
/// [`yield_now`] instead.
///
/// [`nanosleep`]: https://linux.die.net/man/2/nanosleep
/// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
///
/// # Examples
///
/// ```no_run
/// use std::{thread, time};
///
/// let ten_millis = time::Duration::from_millis(10);
/// let now = time::Instant::now();
///
/// thread::sleep(ten_millis);
///
/// assert!(now.elapsed() >= ten_millis);
/// ```
#[stable(feature = "thread_sleep", since = "1.4.0")]
pub fn sleep(dur: Duration) {
    imp::Thread::sleep(dur)
}

/// Blocks unless or until the current thread's token is made available.
///
/// A call to `park` does not guarantee that the thread will remain parked
/// forever, and callers should be prepared for this possibility.
///
/// # park and unpark
///
/// Every thread is equipped with some basic low-level blocking support, via the
/// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
/// method. [`park`] blocks the current thread, which can then be resumed from
/// another thread by calling the [`unpark`] method on the blocked thread's
/// handle.
///
/// Conceptually, each [`Thread`] handle has an associated token, which is
/// initially not present:
///
/// * The [`thread::park`][`park`] function blocks the current thread unless or
///   until the token is available for its thread handle, at which point it
///   atomically consumes the token. It may also return *spuriously*, without
///   consuming the token. [`thread::park_timeout`] does the same, but allows
///   specifying a maximum time to block the thread for.
///
/// * The [`unpark`] method on a [`Thread`] atomically makes the token available
///   if it wasn't already. Because the token is initially absent, [`unpark`]
///   followed by [`park`] will result in the second call returning immediately.
///
/// In other words, each [`Thread`] acts a bit like a spinlock that can be
/// locked and unlocked using `park` and `unpark`.
///
/// Notice that being unblocked does not imply any synchronization with someone
/// that unparked this thread, it could also be spurious.
/// For example, it would be a valid, but inefficient, implementation to make both [`park`] and
/// [`unpark`] return immediately without doing anything.
///
/// The API is typically used by acquiring a handle to the current thread,
/// placing that handle in a shared data structure so that other threads can
/// find it, and then `park`ing in a loop. When some desired condition is met, another
/// thread calls [`unpark`] on the handle.
///
/// The motivation for this design is twofold:
///
/// * It avoids the need to allocate mutexes and condvars when building new
///   synchronization primitives; the threads already provide basic
///   blocking/signaling.
///
/// * It can be implemented very efficiently on many platforms.
///
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::{Arc, atomic::{Ordering, AtomicBool}};
/// use std::time::Duration;
///
/// let flag = Arc::new(AtomicBool::new(false));
/// let flag2 = Arc::clone(&flag);
///
/// let parked_thread = thread::spawn(move || {
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !flag2.load(Ordering::Acquire) {
///         println!("Parking thread");
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///         println!("Thread unparked");
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here, if `unpark`
/// // happens first, `park` will return immediately.
/// // Hence there is no risk of a deadlock.
/// flag.store(true, Ordering::Release);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
///
/// [`unpark`]: Thread::unpark
/// [`thread::park_timeout`]: park_timeout
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    // SAFETY: park_timeout is called on the parker owned by this thread.
    unsafe {
        current().inner.as_ref().parker().park();
    }
}

/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}
955 /// Blocks unless or until the current thread's token is made available or
956 /// the specified duration has been reached (may wake spuriously).
958 /// The semantics of this function are equivalent to [`park`][park] except
959 /// that the thread will be blocked for roughly no longer than `dur`. This
960 /// method should not be used for precise timing due to anomalies such as
961 /// preemption or platform differences that might not cause the maximum
962 /// amount of time waited to be precisely `dur` long.
964 /// See the [park documentation][park] for more details.
966 /// # Platform-specific behavior
968 /// Platforms which do not support nanosecond precision for sleeping will have
969 /// `dur` rounded up to the nearest granularity of time they can sleep for.
/// # Example
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {elapsed:?}");
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
pub fn park_timeout(dur: Duration) {
    // SAFETY: park_timeout is called on the parker owned by this thread.
    unsafe {
        current().inner.as_ref().parker().park_timeout(dur);
    }
}
////////////////////////////////////////////////////////////////////////////////
// ThreadId
////////////////////////////////////////////////////////////////////////////////
/// A unique identifier for a running thread.
///
/// A `ThreadId` is an opaque object that uniquely identifies each thread
/// created during the lifetime of a process. `ThreadId`s are guaranteed not to
/// be reused, even when a thread terminates. `ThreadId`s are under the control
/// of Rust's standard library and there may not be any relationship between
/// `ThreadId` and the underlying platform's notion of a thread identifier --
/// the two concepts cannot, therefore, be used interchangeably. A `ThreadId`
/// can be retrieved from the [`id`] method on a [`Thread`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let other_thread = thread::spawn(|| {
///     thread::current().id()
/// });
///
/// let other_thread_id = other_thread.join().unwrap();
/// assert!(thread::current().id() != other_thread_id);
/// ```
///
/// [`id`]: Thread::id
#[stable(feature = "thread_id", since = "1.19.0")]
#[derive(Eq, PartialEq, Clone, Copy, Hash, Debug)]
pub struct ThreadId(NonZeroU64);
impl ThreadId {
    // Generate a new unique thread ID.
    fn new() -> ThreadId {
        // It is UB to attempt to acquire this mutex reentrantly!
        static GUARD: mutex::StaticMutex = mutex::StaticMutex::new();
        static mut COUNTER: u64 = 1;

        unsafe {
            let guard = GUARD.lock();

            // If we somehow use up all our bits, panic so that we're not
            // covering up subtle bugs of IDs being reused.
            if COUNTER == u64::MAX {
                // In case the panic handler ends up calling `ThreadId::new()`,
                // avoid a reentrant lock acquire.
                drop(guard);
                panic!("failed to generate unique thread ID: bitspace exhausted");
            }

            let id = COUNTER;
            COUNTER += 1;

            ThreadId(NonZeroU64::new(id).unwrap())
        }
    }

    /// This returns a numeric identifier for the thread identified by this
    /// `ThreadId`.
    ///
    /// As noted in the documentation for the type itself, it is essentially an
    /// opaque ID, but is guaranteed to be unique for each thread. The returned
    /// value is entirely opaque -- only equality testing is stable. Note that
    /// it is not guaranteed which values new threads will return, and this may
    /// change across Rust versions.
    #[unstable(feature = "thread_id_value", issue = "67939")]
    pub fn as_u64(&self) -> NonZeroU64 {
        self.0
    }
}
////////////////////////////////////////////////////////////////////////////////
// Thread
////////////////////////////////////////////////////////////////////////////////
/// The internal representation of a `Thread` handle
struct Inner {
    name: Option<CString>, // Guaranteed to be UTF-8
    id: ThreadId,
    parker: Parker,
}

impl Inner {
    fn parker(self: Pin<&Self>) -> Pin<&Parker> {
        unsafe { Pin::map_unchecked(self, |inner| &inner.parker) }
    }
}
#[stable(feature = "rust1", since = "1.0.0")]
/// A handle to a thread.
///
/// Threads are represented via the `Thread` type, which you can get in one of
/// two ways:
///
/// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
///   function, and calling [`thread`][`JoinHandle::thread`] on the
///   [`JoinHandle`].
/// * By requesting the current thread, using the [`thread::current`] function.
///
/// The [`thread::current`] function is available even for threads not spawned
/// by the APIs of this module.
///
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads, see the
/// docs of [`Builder`] and [`spawn`] for more details.
///
/// [`thread::current`]: current
pub struct Thread {
    inner: Pin<Arc<Inner>>,
}
impl Thread {
    // Used only internally to construct a thread object without spawning.
    // Panics if the name contains nuls.
    pub(crate) fn new(name: Option<CString>) -> Thread {
        // We have to use `unsafe` here to construct the `Parker` in-place,
        // which is required for the UNIX implementation.
        //
        // SAFETY: We pin the Arc immediately after creation, so its address never
        // changes.
        let inner = unsafe {
            let mut arc = Arc::<Inner>::new_uninit();
            let ptr = Arc::get_mut_unchecked(&mut arc).as_mut_ptr();
            addr_of_mut!((*ptr).name).write(name);
            addr_of_mut!((*ptr).id).write(ThreadId::new());
            Parker::new(addr_of_mut!((*ptr).parker));
            Pin::new_unchecked(arc.assume_init())
        };

        Thread { inner }
    }
    /// Atomically makes the handle's token available if it is not already.
    ///
    /// Every thread is equipped with some basic low-level blocking support, via
    /// the [`park`][park] function and the `unpark()` method. These can be
    /// used as a more CPU-efficient implementation of a spinlock.
    ///
    /// See the [park documentation][park] for more details.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let parked_thread = thread::Builder::new()
    ///     .spawn(|| {
    ///         println!("Parking thread");
    ///         thread::park();
    ///         println!("Thread unparked");
    ///     })
    ///     .unwrap();
    ///
    /// // Let some time pass for the thread to be spawned.
    /// thread::sleep(Duration::from_millis(10));
    ///
    /// println!("Unpark the thread");
    /// parked_thread.thread().unpark();
    ///
    /// parked_thread.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[inline]
    pub fn unpark(&self) {
        self.inner.as_ref().parker().unpark();
    }
    /// Gets the thread's unique identifier.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let other_thread = thread::spawn(|| {
    ///     thread::current().id()
    /// });
    ///
    /// let other_thread_id = other_thread.join().unwrap();
    /// assert!(thread::current().id() != other_thread_id);
    /// ```
    #[stable(feature = "thread_id", since = "1.19.0")]
    #[must_use]
    pub fn id(&self) -> ThreadId {
        self.inner.id
    }
    /// Gets the thread's name.
    ///
    /// For more information about named threads, see
    /// [the module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// Threads by default have no name specified:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     assert!(thread::current().name().is_none());
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// Thread with a specified name:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn name(&self) -> Option<&str> {
        self.cname().map(|s| unsafe { str::from_utf8_unchecked(s.to_bytes()) })
    }

    fn cname(&self) -> Option<&CStr> {
        self.inner.name.as_deref()
    }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Debug for Thread {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Thread")
            .field("id", &self.id())
            .field("name", &self.name())
            .finish_non_exhaustive()
    }
}
////////////////////////////////////////////////////////////////////////////////
// JoinHandle
////////////////////////////////////////////////////////////////////////////////
/// A specialized [`Result`] type for threads.
///
/// Indicates the manner in which a thread exited.
///
/// The value contained in the `Result::Err` variant
/// is the value the thread panicked with;
/// that is, the argument the `panic!` macro was called with.
/// Unlike with normal errors, this value doesn't implement
/// the [`Error`](crate::error::Error) trait.
///
/// Thus, a sensible way to handle a thread panic is to either:
///
/// 1. propagate the panic with [`std::panic::resume_unwind`]
/// 2. or in case the thread is intended to be a subsystem boundary
///    that is supposed to isolate system-level failures,
///    match on the `Err` variant and handle the panic in an appropriate way
///
/// A thread that completes without panicking is considered to exit successfully.
///
/// # Examples
///
/// Matching on the result of a joined thread:
///
/// ```no_run
/// use std::{fs, thread, panic};
///
/// fn copy_in_thread() -> thread::Result<()> {
///     thread::spawn(|| {
///         fs::copy("foo.txt", "bar.txt").unwrap();
///     }).join()
/// }
///
/// fn main() {
///     match copy_in_thread() {
///         Ok(_) => println!("copy succeeded"),
///         Err(e) => panic::resume_unwind(e),
///     }
/// }
/// ```
///
/// [`Result`]: crate::result::Result
/// [`std::panic::resume_unwind`]: crate::panic::resume_unwind
#[stable(feature = "rust1", since = "1.0.0")]
pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>;
// This packet is used to communicate the return value between the spawned
// thread and the rest of the program. It is shared through an `Arc` and
// there's no need for a mutex here because synchronization happens with `join()`
// (the caller will never read this packet until the thread has exited).
//
// An Arc to the packet is stored into a `JoinInner` which in turn is placed
// in `JoinHandle`.
struct Packet<'scope, T> {
    scope: Option<Arc<scoped::ScopeData>>,
    result: UnsafeCell<Option<Result<T>>>,
    _marker: PhantomData<Option<&'scope scoped::ScopeData>>,
}
// Due to the usage of `UnsafeCell` we need to manually implement Sync.
// The type `T` should already always be Send (otherwise the thread could not
// have been created) and the Packet is Sync because all access to the
// `UnsafeCell` is synchronized (by the `join()` boundary), and `ScopeData` is Sync.
unsafe impl<'scope, T: Sync> Sync for Packet<'scope, T> {}
impl<'scope, T> Drop for Packet<'scope, T> {
    fn drop(&mut self) {
        // If this packet was for a thread that ran in a scope, the thread
        // panicked, and nobody consumed the panic payload, we make sure
        // the scope function will panic.
        let unhandled_panic = matches!(self.result.get_mut(), Some(Err(_)));
        // Drop the result without causing unwinding.
        // This is only relevant for threads that aren't join()ed, as
        // join() will take the `result` and set it to None, such that
        // there is nothing left to drop here.
        // If this panics, we should handle that, because we're outside the
        // outermost `catch_unwind` of our thread.
        // We just abort in that case, since there's nothing else we can do.
        // (And even if we tried to handle it somehow, we'd also need to handle
        // the case where the panic payload we get out of it also panics on
        // drop, and so on. See issue #86027.)
        if let Err(_) = panic::catch_unwind(panic::AssertUnwindSafe(|| {
            *self.result.get_mut() = None;
        })) {
            rtabort!("thread result panicked on drop");
        }
        // Book-keeping so the scope knows when it's done.
        if let Some(scope) = &self.scope {
            // Now that there will be no more user code running on this thread
            // that can use 'scope, mark the thread as 'finished'.
            // It's important we only do this after the `result` has been dropped,
            // since dropping it might still use things it borrowed from 'scope.
            scope.decrement_num_running_threads(unhandled_panic);
        }
    }
}
/// Inner representation for JoinHandle
struct JoinInner<'scope, T> {
    native: imp::Thread,
    thread: Thread,
    packet: Arc<Packet<'scope, T>>,
}

impl<'scope, T> JoinInner<'scope, T> {
    fn join(mut self) -> Result<T> {
        self.native.join();
        Arc::get_mut(&mut self.packet).unwrap().result.get_mut().take().unwrap()
    }
}
/// An owned permission to join on a thread (block on its termination).
///
/// A `JoinHandle` *detaches* the associated thread when it is dropped, which
/// means that there is no longer any handle to the thread and no way to `join`
/// on it.
///
/// Due to platform restrictions, it is not possible to [`Clone`] this
/// handle: the ability to join a thread is a uniquely-owned permission.
///
/// This `struct` is created by the [`thread::spawn`] function and the
/// [`thread::Builder::spawn`] method.
///
/// # Examples
///
/// Creation from [`thread::spawn`]:
///
/// ```
/// use std::thread;
///
/// let join_handle: thread::JoinHandle<_> = thread::spawn(|| {
///     // some work here
/// });
/// ```
///
/// Creation from [`thread::Builder::spawn`]:
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
///     // some work here
/// }).unwrap();
/// ```
///
/// A thread being detached and outliving the thread that spawned it:
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let original_thread = thread::spawn(|| {
///     let _detached_thread = thread::spawn(|| {
///         // Here we sleep to make sure that the first thread returns before.
///         thread::sleep(Duration::from_millis(10));
///         // This will be called, even though the JoinHandle is dropped.
///         println!("♫ Still alive ♫");
///     });
/// });
///
/// original_thread.join().expect("The thread being joined has panicked");
/// println!("Original thread is joined.");
///
/// // We make sure that the new thread has time to run, before the main
/// // thread returns.
/// thread::sleep(Duration::from_millis(1000));
/// ```
///
/// [`thread::Builder::spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
#[stable(feature = "rust1", since = "1.0.0")]
pub struct JoinHandle<T>(JoinInner<'static, T>);
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Send for JoinHandle<T> {}
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Sync for JoinHandle<T> {}
impl<T> JoinHandle<T> {
    /// Extracts a handle to the underlying thread.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    ///
    /// let thread = join_handle.thread();
    /// println!("thread id: {:?}", thread.id());
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn thread(&self) -> &Thread {
        &self.0.thread
    }
    /// Waits for the associated thread to finish.
    ///
    /// This function will return immediately if the associated thread has already finished.
    ///
    /// In terms of [atomic memory orderings], the completion of the associated
    /// thread synchronizes with this function returning. In other words, all
    /// operations performed by that thread [happen
    /// before](https://doc.rust-lang.org/nomicon/atomics.html#data-accesses) all
    /// operations that happen after `join` returns.
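    ///
    /// For example, this guarantee makes it sound to hand data back from the
    /// spawned thread through its return value (a minimal sketch):
    ///
    /// ```
    /// use std::thread;
    ///
    /// let mut data = vec![1, 2, 3];
    /// let handle = thread::spawn(move || {
    ///     data.push(4);
    ///     data
    /// });
    ///
    /// // Every write performed by the spawned thread happens-before
    /// // `join` returns, so this read is fully synchronized.
    /// assert_eq!(handle.join().unwrap(), vec![1, 2, 3, 4]);
    /// ```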
    ///
    /// If the associated thread panics, [`Err`] is returned with the parameter given
    /// to [`panic!`].
    ///
    /// [`Err`]: crate::result::Result::Err
    /// [atomic memory orderings]: crate::sync::atomic
    ///
    /// # Panics
    ///
    /// This function may panic on some platforms if a thread attempts to join
    /// itself or otherwise may create a deadlock with joining threads.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    /// join_handle.join().expect("Couldn't join on the associated thread");
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn join(self) -> Result<T> {
        self.0.join()
    }
    /// Checks if the associated thread has finished running its main function.
    ///
    /// `is_finished` supports implementing a non-blocking join operation, by checking
    /// `is_finished` and calling `join` if it returns `true`. This function does not block. To
    /// block while waiting on the thread to finish, use [`join`][Self::join].
    ///
    /// This might return `true` for a brief moment after the thread's main
    /// function has returned, but before the thread itself has stopped running.
    /// However, once this returns `true`, [`join`][Self::join] can be expected
    /// to return quickly, without blocking for any significant amount of time.
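    ///
    /// # Examples
    ///
    /// A polling loop sketching a non-blocking join (the sleep intervals here
    /// are arbitrary choices, not recommendations):
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let handle = thread::spawn(|| {
    ///     thread::sleep(Duration::from_millis(50));
    ///     1 + 1
    /// });
    ///
    /// while !handle.is_finished() {
    ///     // Do other useful work between polls instead of blocking.
    ///     thread::sleep(Duration::from_millis(5));
    /// }
    ///
    /// // `join` now returns promptly.
    /// assert_eq!(handle.join().unwrap(), 2);
    /// ```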
    #[stable(feature = "thread_is_running", since = "1.61.0")]
    pub fn is_finished(&self) -> bool {
        Arc::strong_count(&self.0.packet) == 1
    }
}
impl<T> AsInner<imp::Thread> for JoinHandle<T> {
    fn as_inner(&self) -> &imp::Thread {
        &self.0.native
    }
}

impl<T> IntoInner<imp::Thread> for JoinHandle<T> {
    fn into_inner(self) -> imp::Thread {
        self.0.native
    }
}
#[stable(feature = "std_debug", since = "1.16.0")]
impl<T> fmt::Debug for JoinHandle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("JoinHandle").finish_non_exhaustive()
    }
}

fn _assert_sync_and_send() {
    fn _assert_both<T: Send + Sync>() {}
    _assert_both::<JoinHandle<()>>();
    _assert_both::<Thread>();
}
/// Returns an estimate of the default amount of parallelism a program should use.
///
/// Parallelism is a resource. A given machine provides a certain capacity for
/// parallelism, i.e., a bound on the number of computations it can perform
/// simultaneously. This number often corresponds to the number of CPUs a
/// computer has, but it may diverge in various cases.
///
/// Host environments such as VMs or container orchestrators may want to
/// restrict the amount of parallelism made available to programs in them. This
/// is often done to limit the potential impact of (unintentionally)
/// resource-intensive programs on other programs running on the same machine.
///
/// # Limitations
///
/// The purpose of this API is to provide an easy and portable way to query
/// the default amount of parallelism the program should use. Among other things it
/// does not expose information on NUMA regions, does not account for
/// differences in (co)processor capabilities or current system load,
/// and will not modify the program's global state in order to more accurately
/// query the amount of available parallelism.
///
/// Where both fixed steady-state and burst limits are available, the steady-state
/// capacity will be used to ensure more predictable latencies.
///
/// Resource limits can be changed during the runtime of a program, therefore the value is
/// not cached and instead recomputed every time this function is called. It should not be
/// called from hot code.
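///
/// If the query cost matters, callers can cache the result themselves; a
/// minimal sketch using `std::sync::OnceLock` (assumes a toolchain where
/// `OnceLock` is stable; the helper name is hypothetical, not part of std):
///
/// ```
/// use std::sync::OnceLock;
/// use std::thread;
///
/// // Query once at first use, then reuse. Note that a cached value will
/// // not observe resource limits changed later at runtime.
/// fn cached_parallelism() -> usize {
///     static CACHE: OnceLock<usize> = OnceLock::new();
///     *CACHE.get_or_init(|| {
///         thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
///     })
/// }
///
/// assert!(cached_parallelism() >= 1);
/// ```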
/// The value returned by this function should be considered a simplified
/// approximation of the actual amount of parallelism available at any given
/// time. To get a more detailed or precise overview of the amount of
/// parallelism available to the program, you may wish to use
/// platform-specific APIs as well. The following platform limitations currently
/// apply to `available_parallelism`:
///
/// On Windows:
///
/// - It may undercount the amount of parallelism available on systems with more
///   than 64 logical CPUs. However, programs typically need specific support to
///   take advantage of more than 64 logical CPUs, and in the absence of such
///   support, the number returned by this function accurately reflects the
///   number of logical CPUs the program can use by default.
/// - It may overcount the amount of parallelism available on systems limited by
///   process-wide affinity masks, or job object limitations.
///
/// On Linux:
///
/// - It may overcount the amount of parallelism available when limited by a
///   process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be
///   queried, e.g. due to sandboxing.
/// - It may undercount the amount of parallelism if the current thread's affinity mask
///   does not reflect the process' cpuset, e.g. due to pinned threads.
/// - If the process is in a cgroup v1 cpu controller, this may need to
///   scan mountpoints to find the corresponding cgroup v1 controller,
///   which may take time on systems with large numbers of mountpoints.
///   (This does not apply to cgroup v2, or to processes not in a
///   cgroup.)
///
/// On all targets:
///
/// - It may overcount the amount of parallelism available when running in a VM
///   with CPU usage limits (e.g. an overcommitted host).
///
/// # Errors
///
/// This function will return errors in the following cases, among others:
///
/// - If the amount of parallelism is not known for the target platform.
/// - If the program lacks permission to query the amount of parallelism made
///   available to it.
///
/// # Examples
///
/// ```
/// # #![allow(dead_code)]
/// use std::{io, thread};
///
/// fn main() -> io::Result<()> {
///     let count = thread::available_parallelism()?.get();
///     assert!(count >= 1_usize);
///     Ok(())
/// }
/// ```
#[doc(alias = "available_concurrency")] // Alias for a previous name we gave this API on unstable.
#[doc(alias = "hardware_concurrency")] // Alias for C++ `std::thread::hardware_concurrency`.
#[doc(alias = "num_cpus")] // Alias for a popular ecosystem crate which provides similar functionality.
#[stable(feature = "available_parallelism", since = "1.59.0")]
pub fn available_parallelism() -> io::Result<NonZeroUsize> {
    imp::available_parallelism()
}