//! ## The threading model
//!
//! An executing Rust program consists of a collection of native OS threads,
//! each with their own stack and local state. Threads can be named, and
//! provide some built-in support for low-level synchronization.
//!
//! Communication between threads can be done through
//! [channels], Rust's message-passing types, along with [other forms of thread
//! synchronization](../../std/sync/index.html) and shared-memory data
//! structures. In particular, types that are guaranteed to be
//! thread-safe are easily shared between threads using the
//! atomically-reference-counted container, [`Arc`].
//!
//! Fatal logic errors in Rust cause *thread panic*, during which
//! a thread will unwind the stack, running destructors and freeing
//! owned resources. While not meant as a 'try/catch' mechanism, panics
//! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered
//! from, or alternatively be resumed with
//! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic
//! is not caught the thread will exit, but the panic may optionally be
//! detected from a different thread with [`join`]. If the main thread panics
//! without the panic being caught, the application will exit with a
//! non-zero exit code.
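//!
//! For example, an unwinding panic can be caught and inspected with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) (a minimal sketch;
//! the panic message and payload here are arbitrary):
//!
//! ```
//! use std::panic;
//!
//! // The closure panics; `catch_unwind` converts the unwind into an `Err`
//! // carrying the panic payload instead of aborting the thread.
//! let result = panic::catch_unwind(|| panic!("oops"));
//! assert!(result.is_err());
//! ```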
//!
//! When the main thread of a Rust program terminates, the entire program shuts
//! down, even if other threads are still running. However, this module provides
//! convenient facilities for automatically waiting for the termination of a
//! child thread (i.e., join).
//!
//! ## Spawning a thread
//!
//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function:
//!
//! ```
//! use std::thread;
//!
//! thread::spawn(move || {
//!     // some work here
//! });
//! ```
//!
//! In this example, the spawned thread is "detached" from the current
//! thread. This means that it can outlive its parent (the thread that spawned
//! it), unless this parent is the main thread.
//!
//! The parent thread can also wait on the completion of the child
//! thread; a call to [`spawn`] produces a [`JoinHandle`], which provides
//! a `join` method for waiting:
//!
//! ```
//! use std::thread;
//!
//! let child = thread::spawn(move || {
//!     // some work here
//! });
//! // some work here
//! let res = child.join();
//! ```
//!
//! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final
//! value produced by the child thread, or [`Err`] of the value given to
//! a call to [`panic!`] if the child panicked.
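//!
//! For example, both outcomes of [`join`] can be observed (a minimal sketch;
//! the computed value and panic message are arbitrary):
//!
//! ```
//! use std::thread;
//!
//! // A thread that finishes normally yields `Ok` with its final value.
//! let ok = thread::spawn(|| 1 + 1).join();
//! assert_eq!(ok.unwrap(), 2);
//!
//! // A thread that panics yields `Err` with the panic payload.
//! let err = thread::spawn(|| panic!("oops")).join();
//! assert!(err.is_err());
//! ```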
//!
//! ## Configuring threads
//!
//! A new thread can be configured before it is spawned via the [`Builder`] type,
//! which currently allows you to set the name and stack size for the child thread:
//!
//! ```
//! # #![allow(unused_must_use)]
//! use std::thread;
//!
//! thread::Builder::new().name("child1".to_string()).spawn(move || {
//!     println!("Hello, world!");
//! });
//! ```
//!
//! ## The `Thread` type
//!
//! Threads are represented via the [`Thread`] type, which you can get in one of
//! two ways:
//!
//! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
//!   function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`].
//! * By requesting the current thread, using the [`thread::current`] function.
//!
//! The [`thread::current`] function is available even for threads not spawned
//! by the APIs of this module.
//!
//! ## Thread-local storage
//!
//! This module also provides an implementation of thread-local storage for Rust
//! programs. Thread-local storage is a method of storing data into a global
//! variable that each thread in the program will have its own copy of.
//! Threads do not share this data, so accesses do not need to be synchronized.
//!
//! A thread-local key owns the value it contains and will destroy the value when the
//! thread exits. It is created with the [`thread_local!`] macro and can contain any
//! value that is `'static` (no borrowed pointers). It provides an accessor function,
//! [`with`], that yields a shared reference to the value to the specified
//! closure. Thread-local keys allow only shared access to values, as there would be no
//! way to guarantee uniqueness if mutable borrows were allowed. Most values
//! will want to make use of some form of **interior mutability** through the
//! [`Cell`] or [`RefCell`] types.
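//!
//! For example, a per-thread counter can be kept in a [`Cell`] inside a
//! thread-local key (a minimal sketch; the key name `COUNTER` is arbitrary):
//!
//! ```
//! use std::cell::Cell;
//!
//! thread_local! {
//!     static COUNTER: Cell<u32> = Cell::new(0);
//! }
//!
//! // Each thread sees its own copy; `with` yields a shared reference,
//! // and `Cell` provides the interior mutability.
//! COUNTER.with(|c| c.set(c.get() + 1));
//! COUNTER.with(|c| assert_eq!(c.get(), 1));
//! ```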
//!
//! ## Naming threads
//!
//! Threads are able to have associated names for identification purposes. By default, spawned
//! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass
//! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the
//! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used:
//!
//! * If a panic occurs in a named thread, the thread name will be printed in the panic message.
//! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` on
//!   unix-like platforms).
//!
//! ## Stack size
//!
//! The default stack size for spawned threads is 2 MiB, though this particular stack size is
//! subject to change in the future. There are two ways to manually specify the stack size for
//! spawned threads:
//!
//! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`].
//! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack
//!   size (in bytes). Note that setting [`Builder::stack_size`] will override this.
//!
//! Note that the stack size of the main thread is *not* determined by Rust.
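//!
//! For example, a larger stack can be requested through [`Builder::stack_size`]
//! (a minimal sketch; the 4 MiB figure is an arbitrary illustration):
//!
//! ```
//! use std::thread;
//!
//! let handler = thread::Builder::new()
//!     .stack_size(4 * 1024 * 1024) // request 4 MiB instead of the default
//!     .spawn(|| {
//!         // code that needs a deep stack goes here
//!     })
//!     .unwrap();
//!
//! handler.join().unwrap();
//! ```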
//!
//! [channels]: crate::sync::mpsc
//! [`join`]: JoinHandle::join
//! [`Result`]: crate::result::Result
//! [`Ok`]: crate::result::Result::Ok
//! [`Err`]: crate::result::Result::Err
//! [`thread::current`]: current
//! [`thread::Result`]: Result
//! [`unpark`]: Thread::unpark
//! [`Thread::name`]: Thread::name
//! [`thread::park_timeout`]: park_timeout
//! [`Cell`]: crate::cell::Cell
//! [`RefCell`]: crate::cell::RefCell
//! [`with`]: LocalKey::with
#![stable(feature = "rust1", since = "1.0.0")]

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

use crate::cell::UnsafeCell;
use crate::ffi::{CStr, CString};
use crate::num::NonZeroU64;
use crate::panicking;
use crate::sync::atomic::AtomicUsize;
use crate::sync::atomic::Ordering::SeqCst;
use crate::sync::{Arc, Condvar, Mutex};
use crate::sys::thread as imp;
use crate::sys_common::mutex;
use crate::sys_common::thread;
use crate::sys_common::thread_info;
use crate::sys_common::{AsInner, IntoInner};
use crate::time::Duration;
////////////////////////////////////////////////////////////////////////////////
// Thread-local storage
////////////////////////////////////////////////////////////////////////////////

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::local::{AccessError, LocalKey};

// The types used by the thread_local! macro to access TLS keys. Note that there
// are two types, the "OS" type and the "fast" type. The OS thread local key
// type is accessed via platform-specific API calls and is slow, while the fast
// key type is accessed via code generated via LLVM, where TLS keys are set up
// by the elf linker. Note that the OS TLS type is always available: on macOS
// the standard library is compiled with support for older platform versions
// where fast TLS was not available; end-user code is compiled with fast TLS
// where available, but both are needed.

#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[cfg(target_thread_local)]
pub use self::local::fast::Key as __FastLocalKeyInner;
#[unstable(feature = "libstd_thread_internals", issue = "none")]
pub use self::local::os::Key as __OsLocalKeyInner;
#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[cfg(all(target_arch = "wasm32", not(target_feature = "atomics")))]
pub use self::local::statik::Key as __StaticLocalKeyInner;
////////////////////////////////////////////////////////////////////////////////
// Builder
////////////////////////////////////////////////////////////////////////////////
/// Thread factory, which can be used in order to configure the properties of
/// a new thread.
///
/// Methods can be chained on it in order to configure it.
///
/// The two configurations available are:
///
/// - [`name`]: specifies an [associated name for the thread][naming-threads]
/// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size]
///
/// The [`spawn`] method will take ownership of the builder and create an
/// [`io::Result`] to the thread handle with the given configuration.
///
/// The [`thread::spawn`] free function uses a `Builder` with default
/// configuration and [`unwrap`]s its return value.
///
/// You may want to use [`spawn`] instead of [`thread::spawn`] when you want
/// to recover from a failure to launch a thread; the free function will
/// panic where the `Builder` method will return an [`io::Result`].
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
/// [`stack_size`]: Builder::stack_size
/// [`name`]: Builder::name
/// [`spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
/// [`io::Result`]: crate::io::Result
/// [`unwrap`]: crate::result::Result::unwrap
/// [naming-threads]: ./index.html#naming-threads
/// [stack-size]: ./index.html#stack-size
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Debug)]
pub struct Builder {
    // A name for the thread-to-be, for identification in panic messages
    name: Option<String>,
    // The size of the stack for the spawned thread in bytes
    stack_size: Option<usize>,
}
    /// Generates the base configuration for spawning a thread, from which
    /// configuration methods can be chained.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into())
    ///     .stack_size(32 * 1024);
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn new() -> Builder {
        Builder { name: None, stack_size: None }
    }
    /// Names the thread-to-be. Currently the name is used for identification
    /// only in panic messages.
    ///
    /// The name must not contain null bytes (`\0`).
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(mut self, name: String) -> Builder {
        self.name = Some(name);
        self
    }
    /// Sets the size of the stack (in bytes) for the new thread.
    ///
    /// The actual stack size may be greater than this value if
    /// the platform specifies a minimal stack size.
    ///
    /// For more information about the stack size for threads, see
    /// [this module-level documentation][stack-size].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new().stack_size(32 * 1024);
    /// ```
    ///
    /// [stack-size]: ./index.html#stack-size
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn stack_size(mut self, size: usize) -> Builder {
        self.stack_size = Some(size);
        self
    }
    /// Spawns a new thread by taking ownership of the `Builder`, and returns an
    /// [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the child thread, including recovering its panics.
    ///
    /// For more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// [`io::Result`]: crate::io::Result
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'static,
        T: Send + 'static,
    {
        unsafe { self.spawn_unchecked(f) }
    }
    /// Spawns a new thread without any lifetime restrictions by taking ownership
    /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the child thread, including recovering its panics.
    ///
    /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`],
    /// except for the relaxed lifetime bounds, which render it unsafe.
    /// For more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Safety
    ///
    /// The caller has to ensure that no references in the supplied thread closure
    /// or its return type can outlive the spawned thread's lifetime. This can be
    /// guaranteed in two ways:
    ///
    /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced
    ///   objects are dropped
    /// - use only types with `'static` lifetime bounds, i.e., those with no or only
    ///   `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`]
    ///   and [`thread::spawn`][`spawn`] enforce this property statically)
    /// # Examples
    ///
    /// ```
    /// #![feature(thread_spawn_unchecked)]
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let x = 1;
    /// let thread_x = &x;
    ///
    /// let handler = unsafe {
    ///     builder.spawn_unchecked(move || {
    ///         println!("x = {}", *thread_x);
    ///     }).unwrap()
    /// };
    ///
    /// // caller has to ensure `join()` is called, otherwise
    /// // it is possible to access freed memory if `x` gets
    /// // dropped before the thread closure is executed!
    /// handler.join().unwrap();
    /// ```
    /// [`io::Result`]: crate::io::Result
    #[unstable(feature = "thread_spawn_unchecked", issue = "55132")]
    pub unsafe fn spawn_unchecked<'a, F, T>(self, f: F) -> io::Result<JoinHandle<T>>
        let Builder { name, stack_size } = self;

        let stack_size = stack_size.unwrap_or_else(thread::min_stack);

        let my_thread = Thread::new(name);
        let their_thread = my_thread.clone();

        let my_packet: Arc<UnsafeCell<Option<Result<T>>>> = Arc::new(UnsafeCell::new(None));
        let their_packet = my_packet.clone();

        let main = move || {
            if let Some(name) = their_thread.cname() {
                imp::Thread::set_name(name);
            }

            thread_info::set(imp::guard::current(), their_thread);
            let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
                crate::sys_common::backtrace::__rust_begin_short_backtrace(f)
            }));
            *their_packet.get() = Some(try_result);
        };
        Ok(JoinHandle(JoinInner {
            // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed
            // through FFI or otherwise used with low-level threading primitives that have no
            // notion of or way to enforce lifetimes.
            //
            // As mentioned in the `Safety` section of this function's documentation, the caller of
            // this function needs to guarantee that the passed-in lifetime is sufficiently long
            // for the lifetime of the thread.
            //
            // Similarly, the `sys` implementation must guarantee that no references to the closure
            // exist after the thread has terminated, which is signaled by `Thread::join`
            // returning.
            native: Some(imp::Thread::new(
                stack_size,
                mem::transmute::<Box<dyn FnOnce() + 'a>, Box<dyn FnOnce() + 'static>>(Box::new(
                    main,
                )),
            )?),
            thread: my_thread,
            packet: Packet(my_packet),
        }))
////////////////////////////////////////////////////////////////////////////////
// Free functions
////////////////////////////////////////////////////////////////////////////////
/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle will implicitly *detach* the child thread upon being
/// dropped. In this case, the child thread may outlive the parent (unless
/// the parent thread is the main thread; the whole process is terminated when
/// the main thread finishes). Additionally, the join handle provides a [`join`]
/// method that can be used to join the child thread. If the child thread
/// panics, [`join`] will return an [`Err`] containing the argument given to
/// [`panic!`].
///
/// This will create a thread using default parameters of [`Builder`]; if you
/// want to specify the stack size or the name of the thread, use this API
/// instead.
///
/// As you can see in the signature of `spawn` there are two constraints on
/// both the closure given to `spawn` and its return value; let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can `detach` and outlive the lifetime they have been
///   created in.
///   Indeed if the thread, and by extension its return value, can outlive their
///   caller, we need to make sure that they will be valid afterwards, and since
///   we *can't* know when it will return we need to have them valid as long as
///   possible, that is until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that it is safe to be
///   passed from thread to thread. [`Sync`] expresses that it is safe to have a
///   reference be passed from thread to thread.
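///
/// For example, both bounds are satisfied when a reference-counted value is
/// moved into the closure (a minimal sketch; the `Arc<Vec<i32>>` here is an
/// arbitrary illustration):
///
/// ```
/// use std::sync::Arc;
/// use std::thread;
///
/// let data = Arc::new(vec![1, 2, 3]);
/// let data2 = Arc::clone(&data);
///
/// // `data2` is moved by value into the closure, and
/// // `Arc<Vec<i32>>` is both `Send` and `'static`.
/// let handle = thread::spawn(move || data2.iter().sum::<i32>());
/// assert_eq!(handle.join().unwrap(), 6);
///
/// // The original handle remains usable in the parent thread.
/// assert_eq!(data.len(), 3);
/// ```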
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`]; here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{}", value);
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
/// A thread can also return a value through its [`JoinHandle`]; you can use
/// this to make asynchronous computations (futures might be more appropriate
/// though).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{}", result);
/// ```
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}
/// Gets a handle to the thread that invokes it.
///
/// # Examples
///
/// Getting a handle to the current thread with `thread::current()`:
///
/// ```
/// use std::thread;
///
/// let handler = thread::Builder::new()
///     .name("named thread".into())
///     .spawn(|| {
///         let handle = thread::current();
///         assert_eq!(handle.name(), Some("named thread"));
///     })
///     .unwrap();
///
/// handler.join().unwrap();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn current() -> Thread {
    thread_info::current_thread().expect(
        "use of std::thread::current() is not possible \
         after the thread's local data has been destroyed",
    )
}
/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This is used when the programmer knows that the thread will have nothing
/// to do for some time, and thus avoids wasting computing time.
///
/// For example when polling on a resource, it is common to check that it is
/// available, and if not to yield in order to avoid busy waiting.
///
/// Thus the pattern of `yield`ing after a failed poll is rather common when
/// implementing low-level shared resources or synchronization primitives.
///
/// However programmers will usually prefer to use [`channel`]s, [`Condvar`]s,
/// [`Mutex`]es or [`join`] for their synchronization routines, as they avoid
/// thinking about thread scheduling.
///
/// Note that [`channel`]s for example are implemented using this primitive.
/// Indeed when you call `send` or `recv`, which are blocking, they will yield
/// if the channel is not available.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::Thread::yield_now()
}
/// Determines whether the current thread is unwinding because of a panic.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     let a = SomeStruct;
/// }
///
/// {
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
///
/// // Let's sleep for 2 seconds:
/// thread::sleep_ms(2000);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(since = "1.6.0", reason = "replaced by `std::thread::sleep`")]
pub fn sleep_ms(ms: u32) {
    sleep(Duration::from_millis(ms as u64))
}
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
/// Platforms which do not support nanosecond precision for sleeping will
/// have `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// ```no_run
/// use std::{thread, time};
///
/// let ten_millis = time::Duration::from_millis(10);
/// let now = time::Instant::now();
///
/// thread::sleep(ten_millis);
///
/// assert!(now.elapsed() >= ten_millis);
/// ```
#[stable(feature = "thread_sleep", since = "1.4.0")]
pub fn sleep(dur: Duration) {
    imp::Thread::sleep(dur)
}
// constants for park/unpark
const EMPTY: usize = 0;
const PARKED: usize = 1;
const NOTIFIED: usize = 2;
/// Blocks unless or until the current thread's token is made available.
///
/// A call to `park` does not guarantee that the thread will remain parked
/// forever, and callers should be prepared for this possibility.
///
/// # park and unpark
///
/// Every thread is equipped with some basic low-level blocking support, via the
/// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
/// method. [`park`] blocks the current thread, which can then be resumed from
/// another thread by calling the [`unpark`] method on the blocked thread's
/// handle.
///
/// Conceptually, each [`Thread`] handle has an associated token, which is
/// initially not present:
///
/// * The [`thread::park`][`park`] function blocks the current thread unless or
///   until the token is available for its thread handle, at which point it
///   atomically consumes the token. It may also return *spuriously*, without
///   consuming the token. [`thread::park_timeout`] does the same, but allows
///   specifying a maximum time to block the thread for.
///
/// * The [`unpark`] method on a [`Thread`] atomically makes the token available
///   if it wasn't already. Because the token is initially absent, [`unpark`]
///   followed by [`park`] will result in the second call returning immediately.
///
/// In other words, each [`Thread`] acts a bit like a spinlock that can be
/// locked and unlocked using `park` and `unpark`.
///
/// Notice that being unblocked does not imply any synchronization with someone
/// that unparked this thread; it could also be spurious.
/// For example, it would be a valid, but inefficient, implementation to make both [`park`] and
/// [`unpark`] return immediately without doing anything.
///
/// The API is typically used by acquiring a handle to the current thread,
/// placing that handle in a shared data structure so that other threads can
/// find it, and then `park`ing in a loop. When some desired condition is met, another
/// thread calls [`unpark`] on the handle.
///
/// The motivation for this design is twofold:
///
/// * It avoids the need to allocate mutexes and condvars when building new
///   synchronization primitives; the threads already provide basic
///   blocking/signaling.
///
/// * It can be implemented very efficiently on many platforms.
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::{Arc, atomic::{Ordering, AtomicBool}};
/// use std::time::Duration;
///
/// let flag = Arc::new(AtomicBool::new(false));
/// let flag2 = Arc::clone(&flag);
///
/// let parked_thread = thread::spawn(move || {
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !flag2.load(Ordering::Acquire) {
///         println!("Parking thread");
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///         println!("Thread unparked");
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here, if `unpark`
/// // happens first, `park` will return immediately.
/// // Hence there is no risk of a deadlock.
/// flag.store(true, Ordering::Release);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
/// [`unpark`]: Thread::unpark
/// [`thread::park_timeout`]: park_timeout
//
// The implementation currently uses the trivial strategy of a Mutex+Condvar
// with wakeup flag, which does not actually allow spurious wakeups. In the
// future, this will be implemented in a more efficient way, perhaps along the lines of
// http://cr.openjdk.java.net/~stefank/6989984.1/raw_files/new/src/os/linux/vm/os_linux.cpp
// or futexes, and in either case may allow spurious wakeups.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    let thread = current();

    // If we were previously notified then we consume this notification and
    // return quickly.
    if thread.inner.state.compare_exchange(NOTIFIED, EMPTY, SeqCst, SeqCst).is_ok() {
        return;
    }

    // Otherwise we need to coordinate going to sleep
    let mut m = thread.inner.lock.lock().unwrap();
    match thread.inner.state.compare_exchange(EMPTY, PARKED, SeqCst, SeqCst) {
        Ok(_) => {}
        Err(NOTIFIED) => {
            // We must read here, even though we know it will be `NOTIFIED`.
            // This is because `unpark` may have been called again since we read
            // `NOTIFIED` in the `compare_exchange` above. We must perform an
            // acquire operation that synchronizes with that `unpark` to observe
            // any writes it made before the call to unpark. To do that we must
            // read from the write it made to `state`.
            let old = thread.inner.state.swap(EMPTY, SeqCst);
            assert_eq!(old, NOTIFIED, "park state changed unexpectedly");
            return;
        } // should consume this notification, so prohibit spurious wakeups in next park.
        Err(_) => panic!("inconsistent park state"),
    }
    loop {
        m = thread.inner.cvar.wait(m).unwrap();
        match thread.inner.state.compare_exchange(NOTIFIED, EMPTY, SeqCst, SeqCst) {
            Ok(_) => return, // got a notification
            Err(_) => {}     // spurious wakeup, go back to sleep
        }
    }
}
/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that may not cause the maximum
/// amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(since = "1.6.0", reason = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`][park] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that may not cause the maximum
/// amount of time waited to be precisely `dur` long.
///
/// See the [park documentation][park] for more details.
///
/// # Platform-specific behavior
///
/// Platforms which do not support nanosecond precision for sleeping will have
/// `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {:?}", elapsed);
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
960 pub fn park_timeout(dur: Duration) {
961 let thread = current();
963 // Like `park` above we have a fast path for an already-notified thread, and
964 // afterwards we start coordinating for a sleep.
966 if thread.inner.state.compare_exchange(NOTIFIED, EMPTY, SeqCst, SeqCst).is_ok() {
969 let m = thread.inner.lock.lock().unwrap();
970 match thread.inner.state.compare_exchange(EMPTY, PARKED, SeqCst, SeqCst) {
973 // We must read again here, see `park`.
974 let old = thread.inner.state.swap(EMPTY, SeqCst);
975 assert_eq!(old, NOTIFIED, "park state changed unexpectedly");
977 } // should consume this notification, so prohibit spurious wakeups in next park.
978 Err(_) => panic!("inconsistent park_timeout state"),
981 // Wait with a timeout, and if we spuriously wake up or otherwise wake up
982 // from a notification we just want to unconditionally set the state back to
983 // empty, either consuming a notification or un-flagging ourselves as
985 let (_m, _result) = thread.inner.cvar.wait_timeout(m, dur).unwrap();
986 match thread.inner.state.swap(EMPTY, SeqCst) {
987 NOTIFIED => {} // got a notification, hurray!
988 PARKED => {} // no notification, alas
989 n => panic!("inconsistent park_timeout state: {}", n),
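Because `park_timeout` can wake spuriously and may also consume an unrelated, earlier `unpark` token, callers typically re-check their own condition in a loop against a deadline. A minimal sketch of that pattern (the helper name `wait_for_flag` is illustrative, not part of this module):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

// Wait until `flag` is set or `timeout` expires, tolerating spurious wakeups
// and stray unpark tokens; returns whether the flag was observed set.
fn wait_for_flag(flag: &AtomicBool, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while !flag.load(Ordering::Acquire) {
        let now = Instant::now();
        if now >= deadline {
            return flag.load(Ordering::Acquire);
        }
        // Park only for the time remaining until the deadline.
        thread::park_timeout(deadline - now);
    }
    true
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let setter_flag = Arc::clone(&flag);
    let waiter = thread::current();
    let setter = thread::spawn(move || {
        setter_flag.store(true, Ordering::Release);
        waiter.unpark(); // wake the parked main thread early
    });
    assert!(wait_for_flag(&flag, Duration::from_secs(5)));
    setter.join().unwrap();
    println!("flag observed");
}
```

The condition is checked before every park, so a wakeup that arrives for any reason simply causes another check rather than a missed or phantom notification.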
////////////////////////////////////////////////////////////////////////////////
// ThreadId
////////////////////////////////////////////////////////////////////////////////

/// A unique identifier for a running thread.
///
/// A `ThreadId` is an opaque object that has a unique value for each thread
/// that creates one. `ThreadId`s are not guaranteed to correspond to a thread's
/// system-designated identifier. A `ThreadId` can be retrieved from the [`id`]
/// method on a [`Thread`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let other_thread = thread::spawn(|| {
///     thread::current().id()
/// });
///
/// let other_thread_id = other_thread.join().unwrap();
/// assert!(thread::current().id() != other_thread_id);
/// ```
///
/// [`id`]: Thread::id
#[stable(feature = "thread_id", since = "1.19.0")]
#[derive(Eq, PartialEq, Clone, Copy, Hash, Debug)]
pub struct ThreadId(NonZeroU64);
impl ThreadId {
    // Generate a new unique thread ID.
    fn new() -> ThreadId {
        // We never call `GUARD.init()`, so it is UB to attempt to
        // acquire this mutex reentrantly!
        static GUARD: mutex::Mutex = mutex::Mutex::new();
        static mut COUNTER: u64 = 1;

        unsafe {
            let _guard = GUARD.lock();

            // If we somehow use up all our bits, panic so that we're not
            // covering up subtle bugs of IDs being reused.
            if COUNTER == u64::MAX {
                panic!("failed to generate unique thread ID: bitspace exhausted");
            }

            let id = COUNTER;
            COUNTER += 1;

            ThreadId(NonZeroU64::new(id).unwrap())
        }
    }
    /// This returns a numeric identifier for the thread identified by this
    /// `ThreadId`.
    ///
    /// As noted in the documentation for the type itself, it is essentially an
    /// opaque ID, but is guaranteed to be unique for each thread. The returned
    /// value is entirely opaque -- only equality testing is stable. Note that
    /// it is not guaranteed which values new threads will return, and this may
    /// change across Rust versions.
    #[unstable(feature = "thread_id_value", issue = "67939")]
    pub fn as_u64(&self) -> NonZeroU64 {
        self.0
    }
}
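Because `ThreadId` derives `Copy`, `Eq`, and `Hash`, it can be used directly as a map key for per-thread bookkeeping without touching the unstable `as_u64` accessor. A small sketch of that use:

```rust
use std::collections::HashMap;
use std::thread;
use std::thread::ThreadId;

fn main() {
    // Track a label per thread, keyed by the opaque ThreadId.
    let mut labels: HashMap<ThreadId, String> = HashMap::new();

    let main_id = thread::current().id();
    labels.insert(main_id, "main".to_string());

    // A spawned thread reports its own id back through `join`.
    let spawned_id = thread::spawn(|| thread::current().id()).join().unwrap();

    // Each thread gets a distinct ThreadId; only equality is meaningful.
    assert_ne!(main_id, spawned_id);
    assert_eq!(labels.get(&main_id).map(String::as_str), Some("main"));
    println!("tracked {} thread(s)", labels.len());
}
```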
////////////////////////////////////////////////////////////////////////////////
// Thread
////////////////////////////////////////////////////////////////////////////////

/// The internal representation of a `Thread` handle
struct Inner {
    name: Option<CString>, // Guaranteed to be UTF-8
    id: ThreadId,

    // state for thread park/unpark
    state: AtomicUsize,
    lock: Mutex<()>,
    cvar: Condvar,
}
#[stable(feature = "rust1", since = "1.0.0")]
/// A handle to a thread.
///
/// Threads are represented via the `Thread` type, which you can get in one of
/// two ways:
///
/// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
///   function, and calling [`thread`][`JoinHandle::thread`] on the
///   [`JoinHandle`].
/// * By requesting the current thread, using the [`thread::current`] function.
///
/// The [`thread::current`] function is available even for threads not spawned
/// by the APIs of this module.
///
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads, see the
/// docs of [`Builder`] and [`spawn`] for more details.
///
/// [`thread::current`]: current
#[derive(Clone)]
pub struct Thread {
    inner: Arc<Inner>,
}
impl Thread {
    // Used only internally to construct a thread object without spawning
    // Panics if the name contains nuls.
    pub(crate) fn new(name: Option<String>) -> Thread {
        let cname =
            name.map(|n| CString::new(n).expect("thread name may not contain interior null bytes"));
        Thread {
            inner: Arc::new(Inner {
                name: cname,
                id: ThreadId::new(),
                state: AtomicUsize::new(EMPTY),
                lock: Mutex::new(()),
                cvar: Condvar::new(),
            }),
        }
    }
    /// Atomically makes the handle's token available if it is not already.
    ///
    /// Every thread is equipped with some basic low-level blocking support, via
    /// the [`park`][park] function and the `unpark()` method. These can be
    /// used as a more CPU-efficient implementation of a spinlock.
    ///
    /// See the [park documentation][park] for more details.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let parked_thread = thread::Builder::new()
    ///     .spawn(|| {
    ///         println!("Parking thread");
    ///         thread::park();
    ///         println!("Thread unparked");
    ///     })
    ///     .unwrap();
    ///
    /// // Let some time pass for the thread to be spawned.
    /// thread::sleep(Duration::from_millis(10));
    ///
    /// println!("Unpark the thread");
    /// parked_thread.thread().unpark();
    ///
    /// parked_thread.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn unpark(&self) {
        // To ensure the unparked thread will observe any writes we made
        // before this call, we must perform a release operation that `park`
        // can synchronize with. To do that we must write `NOTIFIED` even if
        // `state` is already `NOTIFIED`. That is why this must be a swap
        // rather than a compare-and-swap that returns if it reads `NOTIFIED`
        // to be strictly correct.
        match self.inner.state.swap(NOTIFIED, SeqCst) {
            EMPTY => return,    // no one was waiting
            NOTIFIED => return, // already unparked
            PARKED => {}        // gotta go wake someone up
            _ => panic!("inconsistent state in unpark"),
        }

        // There is a period between when the parked thread sets `state` to
        // `PARKED` (or last checked `state` in the case of a spurious wake
        // up) and when it actually waits on `cvar`. If we were to notify
        // during this period it would be ignored and then when the parked
        // thread went to sleep it would never wake up. Fortunately, it has
        // `lock` locked at this stage so we can acquire `lock` to wait until
        // it is ready to receive the notification.
        //
        // Releasing `lock` before the call to `notify_one` means that when the
        // parked thread wakes it doesn't get woken only to have to wait for us
        // to release `lock`.
        drop(self.inner.lock.lock().unwrap());
        self.inner.cvar.notify_one()
    }
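One consequence of the token model above is that `unpark` delivered before the matching `park` is not lost: the token is stored, and the next `park` returns immediately instead of blocking. Tokens do not accumulate, however. A small sketch demonstrating both properties on the current thread:

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // An unpark delivered before `park` stores the token, so the
    // following `park` consumes it and returns without blocking.
    thread::current().unpark();
    let start = Instant::now();
    thread::park();
    assert!(start.elapsed() < Duration::from_secs(5));

    // Tokens do not accumulate: two unparks still yield a single token,
    // consumed by a single `park`.
    thread::current().unpark();
    thread::current().unpark();
    thread::park(); // consumes the one stored token
    println!("done");
}
```

A second `park` after the last line would block until another `unpark`, which is why production code pairs `park` with a condition check rather than counting tokens.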
    /// Gets the thread's unique identifier.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let other_thread = thread::spawn(|| {
    ///     thread::current().id()
    /// });
    ///
    /// let other_thread_id = other_thread.join().unwrap();
    /// assert!(thread::current().id() != other_thread_id);
    /// ```
    #[stable(feature = "thread_id", since = "1.19.0")]
    pub fn id(&self) -> ThreadId {
        self.inner.id
    }
    /// Gets the thread's name.
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// Threads by default have no name specified:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     assert!(thread::current().name().is_none());
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// Thread with a specified name:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(&self) -> Option<&str> {
        self.cname().map(|s| unsafe { str::from_utf8_unchecked(s.to_bytes()) })
    }

    fn cname(&self) -> Option<&CStr> {
        self.inner.name.as_deref()
    }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Debug for Thread {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Thread").field("id", &self.id()).field("name", &self.name()).finish()
    }
}
////////////////////////////////////////////////////////////////////////////////
// JoinHandle
////////////////////////////////////////////////////////////////////////////////

/// A specialized [`Result`] type for threads.
///
/// Indicates the manner in which a thread exited.
///
/// The value contained in the `Result::Err` variant
/// is the value the thread panicked with;
/// that is, the argument the `panic!` macro was called with.
/// Unlike with normal errors, this value doesn't implement
/// the [`Error`](crate::error::Error) trait.
///
/// Thus, a sensible way to handle a thread panic is to either:
///
/// 1. `unwrap` the `Result<T>`, propagating the panic
/// 2. or in case the thread is intended to be a subsystem boundary
///    that is supposed to isolate system-level failures,
///    match on the `Err` variant and handle the panic in an appropriate way.
///
/// A thread that completes without panicking is considered to exit successfully.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
/// use std::fs;
///
/// fn copy_in_thread() -> thread::Result<()> {
///     thread::spawn(move || { fs::copy("foo.txt", "bar.txt").unwrap(); }).join()
/// }
///
/// match copy_in_thread() {
///     Ok(_) => println!("this is fine"),
///     Err(_) => println!("thread panicked"),
/// }
/// ```
///
/// [`Result`]: crate::result::Result
#[stable(feature = "rust1", since = "1.0.0")]
pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>;
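When the `Err` payload does need inspection, it can be downcast, since string panics carry a `&str` or `String` payload. A sketch of that third option (the helper name `panic_message` is illustrative, not part of this module):

```rust
use std::any::Any;
use std::thread;

// Recover a readable message from a panic payload where possible.
// `panic!("literal")` yields a `&str` payload; formatted panics yield `String`.
fn panic_message(payload: Box<dyn Any + Send>) -> String {
    if let Some(s) = payload.downcast_ref::<&str>() {
        (*s).to_string()
    } else if let Some(s) = payload.downcast_ref::<String>() {
        s.clone()
    } else {
        "unknown panic payload".to_string()
    }
}

fn main() {
    let result = thread::spawn(|| panic!("boom")).join();
    match result {
        Ok(()) => println!("thread finished normally"),
        Err(payload) => println!("thread panicked: {}", panic_message(payload)),
    }
}
```

Payloads from `panic_any` can be any `Send + 'static` type, which is why the fallback arm is needed.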
// This packet is used to communicate the return value between the child thread
// and the parent thread. Memory is shared through the `Arc` within and there's
// no need for a mutex here because synchronization happens with `join()` (the
// parent thread never reads this packet until the child has exited).
//
// This packet itself is then stored into a `JoinInner` which in turn is placed
// in `JoinHandle` and `JoinGuard`. Due to the usage of `UnsafeCell` we need to
// manually worry about impls like Send and Sync. The type `T` should
// already always be Send (otherwise the thread could not have been created) and
// this type is inherently Sync because no methods take &self. Regardless,
// however, we add inheriting impls for Send/Sync to this type to ensure it's
// Send/Sync and that future modifications will still appropriately classify it.
struct Packet<T>(Arc<UnsafeCell<Option<Result<T>>>>);

unsafe impl<T: Send> Send for Packet<T> {}
unsafe impl<T: Sync> Sync for Packet<T> {}
/// Inner representation for JoinHandle
struct JoinInner<T> {
    native: Option<imp::Thread>,
    thread: Thread,
    packet: Packet<T>,
}

impl<T> JoinInner<T> {
    fn join(&mut self) -> Result<T> {
        self.native.take().unwrap().join();
        unsafe { (*self.packet.0.get()).take().unwrap() }
    }
}
/// An owned permission to join on a thread (block on its termination).
///
/// A `JoinHandle` *detaches* the associated thread when it is dropped, which
/// means that there is no longer any handle to the thread and no way to `join`
/// on it.
///
/// Due to platform restrictions, it is not possible to [`Clone`] this
/// handle: the ability to join a thread is a uniquely-owned permission.
///
/// This `struct` is created by the [`thread::spawn`] function and the
/// [`thread::Builder::spawn`] method.
///
/// # Examples
///
/// Creation from [`thread::spawn`]:
///
/// ```
/// use std::thread;
///
/// let join_handle: thread::JoinHandle<_> = thread::spawn(|| {
///     // some work here
/// });
/// ```
///
/// Creation from [`thread::Builder::spawn`]:
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
///     // some work here
/// }).unwrap();
/// ```
///
/// Child being detached and outliving its parent:
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let original_thread = thread::spawn(|| {
///     let _detached_thread = thread::spawn(|| {
///         // Here we sleep to make sure that the first thread returns before.
///         thread::sleep(Duration::from_millis(10));
///         // This will be called, even though the JoinHandle is dropped.
///         println!("♫ Still alive ♫");
///     });
/// });
///
/// original_thread.join().expect("The thread being joined has panicked");
/// println!("Original thread is joined.");
///
/// // We make sure that the new thread has time to run, before the main
/// // thread returns.
///
/// thread::sleep(Duration::from_millis(1000));
/// ```
///
/// [`thread::Builder::spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
#[stable(feature = "rust1", since = "1.0.0")]
pub struct JoinHandle<T>(JoinInner<T>);
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Send for JoinHandle<T> {}
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Sync for JoinHandle<T> {}
impl<T> JoinHandle<T> {
    /// Extracts a handle to the underlying thread.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    ///
    /// let thread = join_handle.thread();
    /// println!("thread id: {:?}", thread.id());
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn thread(&self) -> &Thread {
        &self.0.thread
    }
    /// Waits for the associated thread to finish.
    ///
    /// In terms of [atomic memory orderings], the completion of the associated
    /// thread synchronizes with this function returning. In other words, all
    /// operations performed by that thread are ordered before all
    /// operations that happen after `join` returns.
    ///
    /// If the child thread panics, [`Err`] is returned with the parameter given
    /// to [`panic!`].
    ///
    /// [`Err`]: crate::result::Result::Err
    /// [atomic memory orderings]: crate::sync::atomic
    ///
    /// # Panics
    ///
    /// This function may panic on some platforms if a thread attempts to join
    /// itself or otherwise may create a deadlock with joining threads.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    /// join_handle.join().expect("Couldn't join on the associated thread");
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn join(mut self) -> Result<T> {
        self.0.join()
    }
}
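The happens-before edge described above comes from `join` itself, not from the memory orderings used inside the thread. A minimal sketch: even a `Relaxed` store performed by the child is guaranteed visible to the parent once `join` returns, because the child's completion synchronizes-with the return of `join`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let value = Arc::new(AtomicUsize::new(0));
    let child_value = Arc::clone(&value);

    let handle = thread::spawn(move || {
        // Relaxed is enough here: `join` provides the synchronization.
        child_value.store(42, Ordering::Relaxed);
    });

    handle.join().expect("child thread panicked");
    // Guaranteed to observe the child's store after `join` returns.
    assert_eq!(value.load(Ordering::Relaxed), 42);
    println!("observed {}", value.load(Ordering::Relaxed));
}
```

Without the `join`, a `Relaxed` load in the parent could legally observe the old value; the ordering guarantee belongs to the join, not the atomics.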
impl<T> AsInner<imp::Thread> for JoinHandle<T> {
    fn as_inner(&self) -> &imp::Thread {
        self.0.native.as_ref().unwrap()
    }
}

impl<T> IntoInner<imp::Thread> for JoinHandle<T> {
    fn into_inner(self) -> imp::Thread {
        self.0.native.unwrap()
    }
}

#[stable(feature = "std_debug", since = "1.16.0")]
impl<T> fmt::Debug for JoinHandle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.pad("JoinHandle { .. }")
    }
}

fn _assert_sync_and_send() {
    fn _assert_both<T: Send + Sync>() {}
    _assert_both::<JoinHandle<()>>();
    _assert_both::<Thread>();
}