1 //! Single-threaded reference-counting pointers. 'Rc' stands for 'Reference Counted'.
4 //! The type [`Rc<T>`][`Rc`] provides shared ownership of a value of type `T`,
5 //! allocated in the heap. Invoking [`clone`][clone] on [`Rc`] produces a new
6 //! pointer to the same allocation in the heap. When the last [`Rc`] pointer to a
7 //! given allocation is destroyed, the value stored in that allocation (often
8 //! referred to as "inner value") is also dropped.
10 //! Shared references in Rust disallow mutation by default, and [`Rc`]
11 //! is no exception: you cannot generally obtain a mutable reference to
12 //! something inside an [`Rc`]. If you need mutability, put a [`Cell`]
13 //! or [`RefCell`] inside the [`Rc`]; see [an example of mutability
14 //! inside an `Rc`][mutability].
16 //! [`Rc`] uses non-atomic reference counting. This means that overhead is very
17 //! low, but an [`Rc`] cannot be sent between threads, and consequently [`Rc`]
18 //! does not implement [`Send`][send]. As a result, the Rust compiler
19 //! will check *at compile time* that you are not sending [`Rc`]s between
20 //! threads. If you need multi-threaded, atomic reference counting, use
21 //! [`sync::Arc`][arc].
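//!
//! For example, the following sketch fails to compile, because the closure
//! handed to `thread::spawn` must be `Send` and an [`Rc`] is not:
//!
//! ```compile_fail
//! use std::rc::Rc;
//! use std::thread;
//!
//! let rc = Rc::new(5);
//! // ERROR: `Rc<i32>` cannot be sent between threads safely.
//! thread::spawn(move || println!("{}", rc));
//! ```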
23 //! The [`downgrade`][downgrade] method can be used to create a non-owning
24 //! [`Weak`] pointer. A [`Weak`] pointer can be [`upgrade`][upgrade]d
25 //! to an [`Rc`], but this will return [`None`] if the value stored in the allocation has
26 //! already been dropped. In other words, `Weak` pointers do not keep the value
27 //! inside the allocation alive; however, they *do* keep the allocation
28 //! (the backing store for the inner value) alive.
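//!
//! A small sketch of this behavior:
//!
//! ```
//! use std::rc::Rc;
//!
//! let strong = Rc::new("hello".to_owned());
//! let weak = Rc::downgrade(&strong);
//! assert!(weak.upgrade().is_some());
//!
//! // Dropping the last `Rc` drops the inner value...
//! drop(strong);
//! // ...so `upgrade` now returns `None`.
//! assert!(weak.upgrade().is_none());
//! ```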
30 //! If [`Rc`] pointers form a reference cycle, the allocations in that cycle will never be deallocated. For this reason,
31 //! [`Weak`] is used to break cycles. For example, a tree could have strong
32 //! [`Rc`] pointers from parent nodes to children, and [`Weak`] pointers from
33 //! children back to their parents.
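//!
//! A minimal sketch of that tree shape (the `Node` type here is illustrative,
//! not part of this module):
//!
//! ```
//! use std::cell::RefCell;
//! use std::rc::{Rc, Weak};
//!
//! struct Node {
//!     parent: RefCell<Weak<Node>>,
//!     children: RefCell<Vec<Rc<Node>>>,
//! }
//!
//! let leaf = Rc::new(Node {
//!     parent: RefCell::new(Weak::new()),
//!     children: RefCell::new(vec![]),
//! });
//! let branch = Rc::new(Node {
//!     parent: RefCell::new(Weak::new()),
//!     children: RefCell::new(vec![Rc::clone(&leaf)]),
//! });
//! // The child points back with a `Weak`, so no cycle of strong
//! // references is formed and both nodes can be freed.
//! *leaf.parent.borrow_mut() = Rc::downgrade(&branch);
//! assert!(leaf.parent.borrow().upgrade().is_some());
//! ```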
35 //! `Rc<T>` automatically dereferences to `T` (via the [`Deref`] trait),
36 //! so you can call `T`'s methods on a value of type [`Rc<T>`][`Rc`]. To avoid name
37 //! clashes with `T`'s methods, the methods of [`Rc<T>`][`Rc`] itself are associated
38 //! functions, called using [fully qualified syntax]:
43 //! let my_rc = Rc::new(());
44 //! Rc::downgrade(&my_rc);
47 //! `Rc<T>`'s implementations of traits like `Clone` may also be called using
48 //! fully qualified syntax. Some people prefer to use fully qualified syntax,
49 //! while others prefer using method-call syntax.
54 //! let rc = Rc::new(());
55 //! // Method-call syntax
56 //! let rc2 = rc.clone();
57 //! // Fully qualified syntax
58 //! let rc3 = Rc::clone(&rc);
61 //! [`Weak<T>`][`Weak`] does not auto-dereference to `T`, because the inner value may have
62 //! already been dropped.
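//!
//! To call `T`'s methods through a [`Weak`], first [`upgrade`][upgrade] it
//! (a minimal sketch):
//!
//! ```
//! use std::rc::Rc;
//!
//! let strong = Rc::new("hello".to_owned());
//! let weak = Rc::downgrade(&strong);
//! // `weak.len()` would not compile; upgrade to an `Rc` first.
//! assert_eq!(weak.upgrade().unwrap().len(), 5);
//! ```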
64 //! # Cloning references
66 //! Creating a new reference to the same allocation as an existing reference-counted pointer
67 //! is done using the `Clone` trait implemented for [`Rc<T>`][`Rc`] and [`Weak<T>`][`Weak`].
72 //! let foo = Rc::new(vec![1.0, 2.0, 3.0]);
73 //! // The two syntaxes below are equivalent.
74 //! let a = foo.clone();
75 //! let b = Rc::clone(&foo);
76 //! // a and b both point to the same memory location as foo.
79 //! The `Rc::clone(&from)` syntax is the most idiomatic because it conveys more explicitly
80 //! the meaning of the code. In the example above, this syntax makes it easier to see that
81 //! this code is creating a new reference rather than copying the whole content of `foo`.
85 //! Consider a scenario where a set of `Gadget`s are owned by a given `Owner`.
86 //! We want to have our `Gadget`s point to their `Owner`. We can't do this with
87 //! unique ownership, because more than one gadget may belong to the same
88 //! `Owner`. [`Rc`] allows us to share an `Owner` between multiple `Gadget`s,
89 //! and have the `Owner` remain allocated as long as any `Gadget` points at it.
96 //! // ...other fields
101 //! owner: Rc<Owner>,
102 //! // ...other fields
106 //! // Create a reference-counted `Owner`.
107 //! let gadget_owner: Rc<Owner> = Rc::new(
109 //! name: "Gadget Man".to_string(),
113 //! // Create `Gadget`s belonging to `gadget_owner`. Cloning the `Rc<Owner>`
114 //! // gives us a new pointer to the same `Owner` allocation, incrementing
115 //! // the reference count in the process.
116 //! let gadget1 = Gadget {
118 //! owner: Rc::clone(&gadget_owner),
120 //! let gadget2 = Gadget {
122 //! owner: Rc::clone(&gadget_owner),
125 //! // Dispose of our local variable `gadget_owner`.
126 //! drop(gadget_owner);
128 //! // Despite dropping `gadget_owner`, we're still able to print out the name
129 //! // of the `Owner` of the `Gadget`s. This is because we've only dropped a
130 //! // single `Rc<Owner>`, not the `Owner` it points to. As long as there are
131 //! // other `Rc<Owner>` pointing at the same `Owner` allocation, it will remain
132 //! // live. The field projection `gadget1.owner.name` works because
133 //! // `Rc<Owner>` automatically dereferences to `Owner`.
134 //! println!("Gadget {} owned by {}", gadget1.id, gadget1.owner.name);
135 //! println!("Gadget {} owned by {}", gadget2.id, gadget2.owner.name);
137 //! // At the end of the function, `gadget1` and `gadget2` are destroyed, and
138 //! // with them the last counted references to our `Owner`. Gadget Man now
139 //! // gets destroyed as well.
143 //! If our requirements change, and we also need to be able to traverse from
144 //! `Owner` to `Gadget`, we will run into problems. An [`Rc`] pointer from `Owner`
145 //! to `Gadget` introduces a cycle. This means that their
146 //! reference counts can never reach 0, and the allocation will never be destroyed:
147 //! a memory leak. In order to get around this, we can use [`Weak`]
150 //! Rust actually makes it somewhat difficult to produce this loop in the first
151 //! place. In order to end up with two values that point at each other, one of
152 //! them needs to be mutable. This is difficult because [`Rc`] enforces
153 //! memory safety by only giving out shared references to the value it wraps,
154 //! and these don't allow direct mutation. We need to wrap the part of the
155 //! value we wish to mutate in a [`RefCell`], which provides *interior
156 //! mutability*: a method to achieve mutability through a shared reference.
157 //! [`RefCell`] enforces Rust's borrowing rules at runtime.
161 //! use std::rc::Weak;
162 //! use std::cell::RefCell;
166 //! gadgets: RefCell<Vec<Weak<Gadget>>>,
167 //! // ...other fields
172 //! owner: Rc<Owner>,
173 //! // ...other fields
177 //! // Create a reference-counted `Owner`. Note that we've put the `Owner`'s
178 //! // vector of `Gadget`s inside a `RefCell` so that we can mutate it through
179 //! // a shared reference.
180 //! let gadget_owner: Rc<Owner> = Rc::new(
182 //! name: "Gadget Man".to_string(),
183 //! gadgets: RefCell::new(vec![]),
187 //! // Create `Gadget`s belonging to `gadget_owner`, as before.
188 //! let gadget1 = Rc::new(
191 //! owner: Rc::clone(&gadget_owner),
194 //! let gadget2 = Rc::new(
197 //! owner: Rc::clone(&gadget_owner),
201 //! // Add the `Gadget`s to their `Owner`.
203 //! let mut gadgets = gadget_owner.gadgets.borrow_mut();
204 //! gadgets.push(Rc::downgrade(&gadget1));
205 //! gadgets.push(Rc::downgrade(&gadget2));
207 //! // `RefCell` dynamic borrow ends here.
210 //! // Iterate over our `Gadget`s, printing their details out.
211 //! for gadget_weak in gadget_owner.gadgets.borrow().iter() {
213 //! // `gadget_weak` is a `Weak<Gadget>`. Since `Weak` pointers can't
214 //! // guarantee the allocation still exists, we need to call
215 //! // `upgrade`, which returns an `Option<Rc<Gadget>>`.
217 //! // In this case we know the allocation still exists, so we simply
218 //! // `unwrap` the `Option`. In a more complicated program, you might
219 //! // need graceful error handling for a `None` result.
221 //! let gadget = gadget_weak.upgrade().unwrap();
222 //! println!("Gadget {} owned by {}", gadget.id, gadget.owner.name);
225 //! // At the end of the function, `gadget_owner`, `gadget1`, and `gadget2`
226 //! // are destroyed. There are now no strong (`Rc`) pointers to the
227 //! // gadgets, so they are destroyed. This zeroes the reference count on
228 //! // Gadget Man, so he gets destroyed as well.
232 //! [clone]: Clone::clone
233 //! [`Cell`]: core::cell::Cell
234 //! [`RefCell`]: core::cell::RefCell
235 //! [send]: core::marker::Send
236 //! [arc]: crate::sync::Arc
237 //! [`Deref`]: core::ops::Deref
238 //! [downgrade]: Rc::downgrade
239 //! [upgrade]: Weak::upgrade
240 //! [mutability]: core::cell#introducing-mutability-inside-of-something-immutable
241 //! [fully qualified syntax]: https://doc.rust-lang.org/book/ch19-03-advanced-traits.html#fully-qualified-syntax-for-disambiguation-calling-methods-with-the-same-name
243 #![stable(feature = "rust1", since = "1.0.0")]
246 use crate::boxed::Box;
252 use core::cell::Cell;
253 use core::cmp::Ordering;
254 use core::convert::{From, TryFrom};
256 use core::hash::{Hash, Hasher};
257 use core::intrinsics::abort;
259 use core::marker::{self, PhantomData, Unpin, Unsize};
260 use core::mem::{self, align_of_val_raw, forget, size_of_val};
261 use core::ops::{CoerceUnsized, Deref, DispatchFromDyn, Receiver};
263 use core::ptr::{self, NonNull};
264 use core::slice::from_raw_parts_mut;
267 box_free, handle_alloc_error, AllocError, Allocator, Global, Layout, WriteCloneIntoRaw,
269 use crate::borrow::{Cow, ToOwned};
270 use crate::string::String;
276 // This is repr(C) to future-proof against possible field-reordering, which
277 // would interfere with otherwise safe [into|from]_raw() of transmutable
280 struct RcBox<T: ?Sized> {
286 /// A single-threaded reference-counting pointer. 'Rc' stands for 'Reference Counted'.
289 /// See the [module-level documentation](./index.html) for more details.
291 /// The inherent methods of `Rc` are all associated functions, which means
292 /// that you have to call them as e.g., [`Rc::get_mut(&mut value)`][get_mut] instead of
293 /// `value.get_mut()`. This avoids conflicts with methods of the inner type `T`.
295 /// [get_mut]: Rc::get_mut
296 #[cfg_attr(not(test), rustc_diagnostic_item = "Rc")]
297 #[stable(feature = "rust1", since = "1.0.0")]
298 pub struct Rc<T: ?Sized> {
299 ptr: NonNull<RcBox<T>>,
300 phantom: PhantomData<RcBox<T>>,
303 #[stable(feature = "rust1", since = "1.0.0")]
304 impl<T: ?Sized> !marker::Send for Rc<T> {}
305 #[stable(feature = "rust1", since = "1.0.0")]
306 impl<T: ?Sized> !marker::Sync for Rc<T> {}
308 #[unstable(feature = "coerce_unsized", issue = "27732")]
309 impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<Rc<U>> for Rc<T> {}
311 #[unstable(feature = "dispatch_from_dyn", issue = "none")]
312 impl<T: ?Sized + Unsize<U>, U: ?Sized> DispatchFromDyn<Rc<U>> for Rc<T> {}
314 impl<T: ?Sized> Rc<T> {
316 fn inner(&self) -> &RcBox<T> {
317 // This unsafety is ok because while this Rc is alive we're guaranteed
318 // that the inner pointer is valid.
319 unsafe { self.ptr.as_ref() }
322 fn from_inner(ptr: NonNull<RcBox<T>>) -> Self {
323 Self { ptr, phantom: PhantomData }
326 unsafe fn from_ptr(ptr: *mut RcBox<T>) -> Self {
327 Self::from_inner(unsafe { NonNull::new_unchecked(ptr) })
332 /// Constructs a new `Rc<T>`.
339 /// let five = Rc::new(5);
341 #[stable(feature = "rust1", since = "1.0.0")]
342 pub fn new(value: T) -> Rc<T> {
343 // There is an implicit weak pointer owned by all the strong
344 // pointers, which ensures that the weak destructor never frees
345 // the allocation while the strong destructor is running, even
346 // if the weak pointer is stored inside the strong one.
348 Box::leak(box RcBox { strong: Cell::new(1), weak: Cell::new(1), value }).into(),
352 /// Constructs a new `Rc<T>` using a weak reference to itself. Attempting
353 /// to upgrade the weak reference before this function returns will result
354 /// in a `None` value. However, the weak reference may be cloned freely and
355 /// stored for use at a later time.
360 /// #![feature(arc_new_cyclic)]
361 /// #![allow(dead_code)]
362 /// use std::rc::{Rc, Weak};
365 /// self_weak: Weak<Self>,
366 /// // ... more fields
369 /// pub fn new() -> Rc<Self> {
370 /// Rc::new_cyclic(|self_weak| {
371 /// Gadget { self_weak: self_weak.clone(), /* ... */ }
376 #[unstable(feature = "arc_new_cyclic", issue = "75861")]
377 pub fn new_cyclic(data_fn: impl FnOnce(&Weak<T>) -> T) -> Rc<T> {
378 // Construct the inner in the "uninitialized" state with a single
380 let uninit_ptr: NonNull<_> = Box::leak(box RcBox {
381 strong: Cell::new(0),
383 value: mem::MaybeUninit::<T>::uninit(),
387 let init_ptr: NonNull<RcBox<T>> = uninit_ptr.cast();
389 let weak = Weak { ptr: init_ptr };
391 // It's important we don't give up ownership of the weak pointer, or
392 // else the memory might be freed by the time `data_fn` returns. If
393 // we really wanted to pass ownership, we could create an additional
394 // weak pointer for ourselves, but this would result in additional
395 // updates to the weak reference count which might not be necessary
397 let data = data_fn(&weak);
400 let inner = init_ptr.as_ptr();
401 ptr::write(ptr::addr_of_mut!((*inner).value), data);
403 let prev_value = (*inner).strong.get();
404 debug_assert_eq!(prev_value, 0, "No prior strong references should exist");
405 (*inner).strong.set(1);
408 let strong = Rc::from_inner(init_ptr);
410 // Strong references should collectively own a shared weak reference,
411 // so don't run the destructor for our old weak reference.
416 /// Constructs a new `Rc` with uninitialized contents.
421 /// #![feature(new_uninit)]
422 /// #![feature(get_mut_unchecked)]
426 /// let mut five = Rc::<u32>::new_uninit();
428 /// let five = unsafe {
429 /// // Deferred initialization:
430 /// Rc::get_mut_unchecked(&mut five).as_mut_ptr().write(5);
432 /// five.assume_init()
435 /// assert_eq!(*five, 5)
437 #[unstable(feature = "new_uninit", issue = "63291")]
438 pub fn new_uninit() -> Rc<mem::MaybeUninit<T>> {
440 Rc::from_ptr(Rc::allocate_for_layout(
442 |layout| Global.allocate(layout),
443 |mem| mem as *mut RcBox<mem::MaybeUninit<T>>,
448 /// Constructs a new `Rc` with uninitialized contents, with the memory
449 /// being filled with `0` bytes.
451 /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and
452 /// incorrect usage of this method.
457 /// #![feature(new_uninit)]
461 /// let zero = Rc::<u32>::new_zeroed();
462 /// let zero = unsafe { zero.assume_init() };
464 /// assert_eq!(*zero, 0)
467 /// [zeroed]: mem::MaybeUninit::zeroed
468 #[unstable(feature = "new_uninit", issue = "63291")]
469 pub fn new_zeroed() -> Rc<mem::MaybeUninit<T>> {
471 Rc::from_ptr(Rc::allocate_for_layout(
473 |layout| Global.allocate_zeroed(layout),
474 |mem| mem as *mut RcBox<mem::MaybeUninit<T>>,
479 /// Constructs a new `Rc<T>`, returning an error if the allocation fails
484 /// #![feature(allocator_api)]
487 /// let five = Rc::try_new(5);
488 /// # Ok::<(), std::alloc::AllocError>(())
490 #[unstable(feature = "allocator_api", issue = "32838")]
491 pub fn try_new(value: T) -> Result<Rc<T>, AllocError> {
492 // There is an implicit weak pointer owned by all the strong
493 // pointers, which ensures that the weak destructor never frees
494 // the allocation while the strong destructor is running, even
495 // if the weak pointer is stored inside the strong one.
497 Box::leak(Box::try_new(RcBox { strong: Cell::new(1), weak: Cell::new(1), value })?)
502 /// Constructs a new `Rc` with uninitialized contents, returning an error if the allocation fails
507 /// #![feature(allocator_api, new_uninit)]
508 /// #![feature(get_mut_unchecked)]
512 /// let mut five = Rc::<u32>::try_new_uninit()?;
514 /// let five = unsafe {
515 /// // Deferred initialization:
516 /// Rc::get_mut_unchecked(&mut five).as_mut_ptr().write(5);
518 /// five.assume_init()
521 /// assert_eq!(*five, 5);
522 /// # Ok::<(), std::alloc::AllocError>(())
524 #[unstable(feature = "allocator_api", issue = "32838")]
525 // #[unstable(feature = "new_uninit", issue = "63291")]
526 pub fn try_new_uninit() -> Result<Rc<mem::MaybeUninit<T>>, AllocError> {
528 Ok(Rc::from_ptr(Rc::try_allocate_for_layout(
530 |layout| Global.allocate(layout),
531 |mem| mem as *mut RcBox<mem::MaybeUninit<T>>,
536 /// Constructs a new `Rc` with uninitialized contents, with the memory
537 /// being filled with `0` bytes, returning an error if the allocation fails
539 /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and
540 /// incorrect usage of this method.
545 /// #![feature(allocator_api, new_uninit)]
549 /// let zero = Rc::<u32>::try_new_zeroed()?;
550 /// let zero = unsafe { zero.assume_init() };
552 /// assert_eq!(*zero, 0);
553 /// # Ok::<(), std::alloc::AllocError>(())
556 /// [zeroed]: mem::MaybeUninit::zeroed
557 #[unstable(feature = "allocator_api", issue = "32838")]
558 //#[unstable(feature = "new_uninit", issue = "63291")]
559 pub fn try_new_zeroed() -> Result<Rc<mem::MaybeUninit<T>>, AllocError> {
561 Ok(Rc::from_ptr(Rc::try_allocate_for_layout(
563 |layout| Global.allocate_zeroed(layout),
564 |mem| mem as *mut RcBox<mem::MaybeUninit<T>>,
568 /// Constructs a new `Pin<Rc<T>>`. If `T` does not implement `Unpin`, then
569 /// `value` will be pinned in memory and unable to be moved.
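///
/// # Examples
///
/// A minimal sketch:
///
/// ```
/// use std::pin::Pin;
/// use std::rc::Rc;
///
/// let pinned: Pin<Rc<u32>> = Rc::pin(5);
/// // `Pin` still dereferences to the pinned value.
/// assert_eq!(*pinned, 5);
/// ```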
570 #[stable(feature = "pin", since = "1.33.0")]
571 pub fn pin(value: T) -> Pin<Rc<T>> {
572 unsafe { Pin::new_unchecked(Rc::new(value)) }
575 /// Returns the inner value, if the `Rc` has exactly one strong reference.
577 /// Otherwise, an [`Err`] is returned with the same `Rc` that was passed in.
580 /// This will succeed even if there are outstanding weak references.
587 /// let x = Rc::new(3);
588 /// assert_eq!(Rc::try_unwrap(x), Ok(3));
590 /// let x = Rc::new(4);
591 /// let _y = Rc::clone(&x);
592 /// assert_eq!(*Rc::try_unwrap(x).unwrap_err(), 4);
595 #[stable(feature = "rc_unique", since = "1.4.0")]
596 pub fn try_unwrap(this: Self) -> Result<T, Self> {
597 if Rc::strong_count(&this) == 1 {
599 let val = ptr::read(&*this); // copy the contained object
601 // Indicate to Weaks that they can't be promoted by decrementing
602 // the strong count, and then remove the implicit "strong weak"
603 // pointer while also handling drop logic by just crafting a
605 this.inner().dec_strong();
606 let _weak = Weak { ptr: this.ptr };
617 /// Constructs a new reference-counted slice with uninitialized contents.
622 /// #![feature(new_uninit)]
623 /// #![feature(get_mut_unchecked)]
627 /// let mut values = Rc::<[u32]>::new_uninit_slice(3);
629 /// let values = unsafe {
630 /// // Deferred initialization:
631 /// Rc::get_mut_unchecked(&mut values)[0].as_mut_ptr().write(1);
632 /// Rc::get_mut_unchecked(&mut values)[1].as_mut_ptr().write(2);
633 /// Rc::get_mut_unchecked(&mut values)[2].as_mut_ptr().write(3);
635 /// values.assume_init()
638 /// assert_eq!(*values, [1, 2, 3])
640 #[unstable(feature = "new_uninit", issue = "63291")]
641 pub fn new_uninit_slice(len: usize) -> Rc<[mem::MaybeUninit<T>]> {
642 unsafe { Rc::from_ptr(Rc::allocate_for_slice(len)) }
645 /// Constructs a new reference-counted slice with uninitialized contents, with the memory being
646 /// filled with `0` bytes.
648 /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and
649 /// incorrect usage of this method.
654 /// #![feature(new_uninit)]
658 /// let values = Rc::<[u32]>::new_zeroed_slice(3);
659 /// let values = unsafe { values.assume_init() };
661 /// assert_eq!(*values, [0, 0, 0])
664 /// [zeroed]: mem::MaybeUninit::zeroed
665 #[unstable(feature = "new_uninit", issue = "63291")]
666 pub fn new_zeroed_slice(len: usize) -> Rc<[mem::MaybeUninit<T>]> {
668 Rc::from_ptr(Rc::allocate_for_layout(
669 Layout::array::<T>(len).unwrap(),
670 |layout| Global.allocate_zeroed(layout),
672 ptr::slice_from_raw_parts_mut(mem as *mut T, len)
673 as *mut RcBox<[mem::MaybeUninit<T>]>
680 impl<T> Rc<mem::MaybeUninit<T>> {
681 /// Converts to `Rc<T>`.
685 /// As with [`MaybeUninit::assume_init`],
686 /// it is up to the caller to guarantee that the inner value
687 /// really is in an initialized state.
688 /// Calling this when the content is not yet fully initialized
689 /// causes immediate undefined behavior.
691 /// [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
696 /// #![feature(new_uninit)]
697 /// #![feature(get_mut_unchecked)]
701 /// let mut five = Rc::<u32>::new_uninit();
703 /// let five = unsafe {
704 /// // Deferred initialization:
705 /// Rc::get_mut_unchecked(&mut five).as_mut_ptr().write(5);
707 /// five.assume_init()
710 /// assert_eq!(*five, 5)
712 #[unstable(feature = "new_uninit", issue = "63291")]
714 pub unsafe fn assume_init(self) -> Rc<T> {
715 Rc::from_inner(mem::ManuallyDrop::new(self).ptr.cast())
719 impl<T> Rc<[mem::MaybeUninit<T>]> {
720 /// Converts to `Rc<[T]>`.
724 /// As with [`MaybeUninit::assume_init`],
725 /// it is up to the caller to guarantee that the inner value
726 /// really is in an initialized state.
727 /// Calling this when the content is not yet fully initialized
728 /// causes immediate undefined behavior.
730 /// [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
735 /// #![feature(new_uninit)]
736 /// #![feature(get_mut_unchecked)]
740 /// let mut values = Rc::<[u32]>::new_uninit_slice(3);
742 /// let values = unsafe {
743 /// // Deferred initialization:
744 /// Rc::get_mut_unchecked(&mut values)[0].as_mut_ptr().write(1);
745 /// Rc::get_mut_unchecked(&mut values)[1].as_mut_ptr().write(2);
746 /// Rc::get_mut_unchecked(&mut values)[2].as_mut_ptr().write(3);
748 /// values.assume_init()
751 /// assert_eq!(*values, [1, 2, 3])
753 #[unstable(feature = "new_uninit", issue = "63291")]
755 pub unsafe fn assume_init(self) -> Rc<[T]> {
756 unsafe { Rc::from_ptr(mem::ManuallyDrop::new(self).ptr.as_ptr() as _) }
760 impl<T: ?Sized> Rc<T> {
761 /// Consumes the `Rc`, returning the wrapped pointer.
763 /// To avoid a memory leak the pointer must be converted back to an `Rc` using
764 /// [`Rc::from_raw`][from_raw].
766 /// [from_raw]: Rc::from_raw
773 /// let x = Rc::new("hello".to_owned());
774 /// let x_ptr = Rc::into_raw(x);
775 /// assert_eq!(unsafe { &*x_ptr }, "hello");
777 #[stable(feature = "rc_raw", since = "1.17.0")]
778 pub fn into_raw(this: Self) -> *const T {
779 let ptr = Self::as_ptr(&this);
784 /// Provides a raw pointer to the data.
786 /// The counts are not affected in any way and the `Rc` is not consumed. The pointer is valid
787 /// for as long as there are strong counts in the `Rc`.
794 /// let x = Rc::new("hello".to_owned());
795 /// let y = Rc::clone(&x);
796 /// let x_ptr = Rc::as_ptr(&x);
797 /// assert_eq!(x_ptr, Rc::as_ptr(&y));
798 /// assert_eq!(unsafe { &*x_ptr }, "hello");
800 #[stable(feature = "weak_into_raw", since = "1.45.0")]
801 pub fn as_ptr(this: &Self) -> *const T {
802 let ptr: *mut RcBox<T> = NonNull::as_ptr(this.ptr);
804 // SAFETY: This cannot go through Deref::deref or Rc::inner because
805 // this is required to retain raw/mut provenance such that e.g. `get_mut` can
806 // write through the pointer after the Rc is recovered through `from_raw`.
807 unsafe { ptr::addr_of_mut!((*ptr).value) }
810 /// Constructs an `Rc<T>` from a raw pointer.
812 /// The raw pointer must have been previously returned by a call to
813 /// [`Rc<U>::into_raw`][into_raw] where `U` must have the same size
814 /// and alignment as `T`. This is trivially true if `U` is `T`.
815 /// Note that if `U` is not `T` but has the same size and alignment, this is
816 /// basically like transmuting references of different types. See
817 /// [`mem::transmute`][transmute] for more information on what
818 /// restrictions apply in this case.
820 /// The user of `from_raw` has to make sure a specific value of `T` is only dropped once.
823 /// This function is unsafe because improper use may lead to memory unsafety,
824 /// even if the returned `Rc<T>` is never accessed.
826 /// [into_raw]: Rc::into_raw
827 /// [transmute]: core::mem::transmute
834 /// let x = Rc::new("hello".to_owned());
835 /// let x_ptr = Rc::into_raw(x);
838 /// // Convert back to an `Rc` to prevent leak.
839 /// let x = Rc::from_raw(x_ptr);
840 /// assert_eq!(&*x, "hello");
842 /// // Further calls to `Rc::from_raw(x_ptr)` would be memory-unsafe.
845 /// // The memory was freed when `x` went out of scope above, so `x_ptr` is now dangling!
847 #[stable(feature = "rc_raw", since = "1.17.0")]
848 pub unsafe fn from_raw(ptr: *const T) -> Self {
849 let offset = unsafe { data_offset(ptr) };
851 // Reverse the offset to find the original RcBox.
853 let rc_ptr = unsafe { (ptr as *mut RcBox<T>).set_ptr_value((ptr as *mut u8).offset(-offset)) };
855 unsafe { Self::from_ptr(rc_ptr) }
858 /// Creates a new [`Weak`] pointer to this allocation.
865 /// let five = Rc::new(5);
867 /// let weak_five = Rc::downgrade(&five);
869 #[stable(feature = "rc_weak", since = "1.4.0")]
870 pub fn downgrade(this: &Self) -> Weak<T> {
871 this.inner().inc_weak();
872 // Make sure we do not create a dangling Weak
873 debug_assert!(!is_dangling(this.ptr.as_ptr()));
874 Weak { ptr: this.ptr }
877 /// Gets the number of [`Weak`] pointers to this allocation.
884 /// let five = Rc::new(5);
885 /// let _weak_five = Rc::downgrade(&five);
887 /// assert_eq!(1, Rc::weak_count(&five));
890 #[stable(feature = "rc_counts", since = "1.15.0")]
891 pub fn weak_count(this: &Self) -> usize {
892 this.inner().weak() - 1
895 /// Gets the number of strong (`Rc`) pointers to this allocation.
902 /// let five = Rc::new(5);
903 /// let _also_five = Rc::clone(&five);
905 /// assert_eq!(2, Rc::strong_count(&five));
908 #[stable(feature = "rc_counts", since = "1.15.0")]
909 pub fn strong_count(this: &Self) -> usize {
910 this.inner().strong()
913 /// Returns `true` if there are no other `Rc` or [`Weak`] pointers to
916 fn is_unique(this: &Self) -> bool {
917 Rc::weak_count(this) == 0 && Rc::strong_count(this) == 1
920 /// Returns a mutable reference into the given `Rc`, if there are
921 /// no other `Rc` or [`Weak`] pointers to the same allocation.
923 /// Returns [`None`] otherwise, because it is not safe to
924 /// mutate a shared value.
926 /// See also [`make_mut`][make_mut], which will [`clone`][clone]
927 /// the inner value when there are other pointers.
929 /// [make_mut]: Rc::make_mut
930 /// [clone]: Clone::clone
937 /// let mut x = Rc::new(3);
938 /// *Rc::get_mut(&mut x).unwrap() = 4;
939 /// assert_eq!(*x, 4);
941 /// let _y = Rc::clone(&x);
942 /// assert!(Rc::get_mut(&mut x).is_none());
945 #[stable(feature = "rc_unique", since = "1.4.0")]
946 pub fn get_mut(this: &mut Self) -> Option<&mut T> {
947 if Rc::is_unique(this) { unsafe { Some(Rc::get_mut_unchecked(this)) } } else { None }
950 /// Returns a mutable reference into the given `Rc`,
951 /// without any check.
953 /// See also [`get_mut`], which is safe and does appropriate checks.
955 /// [`get_mut`]: Rc::get_mut
959 /// Any other `Rc` or [`Weak`] pointers to the same allocation must not be dereferenced
960 /// for the duration of the returned borrow.
961 /// This is trivially the case if no such pointers exist,
962 /// for example immediately after `Rc::new`.
967 /// #![feature(get_mut_unchecked)]
971 /// let mut x = Rc::new(String::new());
973 /// Rc::get_mut_unchecked(&mut x).push_str("foo")
975 /// assert_eq!(*x, "foo");
978 #[unstable(feature = "get_mut_unchecked", issue = "63292")]
979 pub unsafe fn get_mut_unchecked(this: &mut Self) -> &mut T {
980 // We are careful to *not* create a reference covering the "count" fields, as
981 // this would conflict with accesses to the reference counts (e.g. by `Weak`).
982 unsafe { &mut (*this.ptr.as_ptr()).value }
986 #[stable(feature = "ptr_eq", since = "1.17.0")]
987 /// Returns `true` if the two `Rc`s point to the same allocation
988 /// (in a vein similar to [`ptr::eq`]).
995 /// let five = Rc::new(5);
996 /// let same_five = Rc::clone(&five);
997 /// let other_five = Rc::new(5);
999 /// assert!(Rc::ptr_eq(&five, &same_five));
1000 /// assert!(!Rc::ptr_eq(&five, &other_five));
1003 /// [`ptr::eq`]: core::ptr::eq
1004 pub fn ptr_eq(this: &Self, other: &Self) -> bool {
1005 this.ptr.as_ptr() == other.ptr.as_ptr()
1009 impl<T: Clone> Rc<T> {
1010 /// Makes a mutable reference into the given `Rc`.
1012 /// If there are other `Rc` pointers to the same allocation, then `make_mut` will
1013 /// [`clone`] the inner value to a new allocation to ensure unique ownership. This is also
1014 /// referred to as clone-on-write.
1016 /// If there are no other `Rc` pointers to this allocation, then [`Weak`]
1017 /// pointers to this allocation will be disassociated.
1019 /// See also [`get_mut`], which will fail rather than cloning.
1021 /// [`clone`]: Clone::clone
1022 /// [`get_mut`]: Rc::get_mut
1027 /// use std::rc::Rc;
1029 /// let mut data = Rc::new(5);
1031 /// *Rc::make_mut(&mut data) += 1; // Won't clone anything
1032 /// let mut other_data = Rc::clone(&data); // Won't clone inner data
1033 /// *Rc::make_mut(&mut data) += 1; // Clones inner data
1034 /// *Rc::make_mut(&mut data) += 1; // Won't clone anything
1035 /// *Rc::make_mut(&mut other_data) *= 2; // Won't clone anything
1037 /// // Now `data` and `other_data` point to different allocations.
1038 /// assert_eq!(*data, 8);
1039 /// assert_eq!(*other_data, 12);
1042 /// [`Weak`] pointers will be disassociated:
1045 /// use std::rc::Rc;
1047 /// let mut data = Rc::new(75);
1048 /// let weak = Rc::downgrade(&data);
1050 /// assert!(75 == *data);
1051 /// assert!(75 == *weak.upgrade().unwrap());
1053 /// *Rc::make_mut(&mut data) += 1;
1055 /// assert!(76 == *data);
1056 /// assert!(weak.upgrade().is_none());
1059 #[stable(feature = "rc_unique", since = "1.4.0")]
1060 pub fn make_mut(this: &mut Self) -> &mut T {
1061 if Rc::strong_count(this) != 1 {
1062 // Gotta clone the data, there are other Rcs.
1063 // Pre-allocate memory to allow writing the cloned value directly.
1064 let mut rc = Self::new_uninit();
1066 let data = Rc::get_mut_unchecked(&mut rc);
1067 (**this).write_clone_into_raw(data.as_mut_ptr());
1068 *this = rc.assume_init();
1070 } else if Rc::weak_count(this) != 0 {
1071 // Can just steal the data, all that's left is Weaks
1072 let mut rc = Self::new_uninit();
1074 let data = Rc::get_mut_unchecked(&mut rc);
1075 data.as_mut_ptr().copy_from_nonoverlapping(&**this, 1);
1077 this.inner().dec_strong();
1078 // Remove implicit strong-weak ref (no need to craft a fake
1079 // Weak here -- we know other Weaks can clean up for us)
1080 this.inner().dec_weak();
1081 ptr::write(this, rc.assume_init());
1084 // This unsafety is ok because we're guaranteed that the pointer
1085 // returned is the *only* pointer that will ever be returned to T. Our
1086 // reference count is guaranteed to be 1 at this point, and we required
1087 // the `Rc<T>` itself to be `mut`, so we're returning the only possible
1088 // reference to the allocation.
1089 unsafe { &mut this.ptr.as_mut().value }
1095 #[stable(feature = "rc_downcast", since = "1.29.0")]
1096 /// Attempt to downcast the `Rc<dyn Any>` to a concrete type.
1101 /// use std::any::Any;
1102 /// use std::rc::Rc;
1104 /// fn print_if_string(value: Rc<dyn Any>) {
1105 /// if let Ok(string) = value.downcast::<String>() {
1106 /// println!("String ({}): {}", string.len(), string);
1110 /// let my_string = "Hello World".to_string();
1111 /// print_if_string(Rc::new(my_string));
1112 /// print_if_string(Rc::new(0i8));
1114 pub fn downcast<T: Any>(self) -> Result<Rc<T>, Rc<dyn Any>> {
1115 if (*self).is::<T>() {
1116 let ptr = self.ptr.cast::<RcBox<T>>();
1118 Ok(Rc::from_inner(ptr))
1125 impl<T: ?Sized> Rc<T> {
1126 /// Allocates an `RcBox<T>` with sufficient space for
1127 /// a possibly-unsized inner value where the value has the layout provided.
1129 /// The function `mem_to_rcbox` is called with the data pointer
1130 /// and must return a (potentially fat) pointer for the `RcBox<T>`.
1131 unsafe fn allocate_for_layout(
1132 value_layout: Layout,
1133 allocate: impl FnOnce(Layout) -> Result<NonNull<[u8]>, AllocError>,
1134 mem_to_rcbox: impl FnOnce(*mut u8) -> *mut RcBox<T>,
1135 ) -> *mut RcBox<T> {
1136 // Calculate layout using the given value layout.
1137 // Previously, layout was calculated on the expression
1138 // `&*(ptr as *const RcBox<T>)`, but this created a misaligned
1139 // reference (see #54908).
1140 let layout = Layout::new::<RcBox<()>>().extend(value_layout).unwrap().0.pad_to_align();
1142 Rc::try_allocate_for_layout(value_layout, allocate, mem_to_rcbox)
1143 .unwrap_or_else(|_| handle_alloc_error(layout))
1147 /// Allocates an `RcBox<T>` with sufficient space for
1148 /// a possibly-unsized inner value where the value has the layout provided,
1149 /// returning an error if allocation fails.
1151 /// The function `mem_to_rcbox` is called with the data pointer
1152 /// and must return a (potentially fat) pointer for the `RcBox<T>`.
1154 unsafe fn try_allocate_for_layout(
1155 value_layout: Layout,
1156 allocate: impl FnOnce(Layout) -> Result<NonNull<[u8]>, AllocError>,
1157 mem_to_rcbox: impl FnOnce(*mut u8) -> *mut RcBox<T>,
1158 ) -> Result<*mut RcBox<T>, AllocError> {
1159 // Calculate layout using the given value layout.
1160 // Previously, layout was calculated on the expression
1161 // `&*(ptr as *const RcBox<T>)`, but this created a misaligned
1162 // reference (see #54908).
1163 let layout = Layout::new::<RcBox<()>>().extend(value_layout).unwrap().0.pad_to_align();
1165 // Allocate for the layout.
1166 let ptr = allocate(layout)?;
1168 // Initialize the RcBox
1169 let inner = mem_to_rcbox(ptr.as_non_null_ptr().as_ptr());
1171 debug_assert_eq!(Layout::for_value(&*inner), layout);
1173 ptr::write(&mut (*inner).strong, Cell::new(1));
1174 ptr::write(&mut (*inner).weak, Cell::new(1));
1180 /// Allocates an `RcBox<T>` with sufficient space for an unsized inner value
1181 unsafe fn allocate_for_ptr(ptr: *const T) -> *mut RcBox<T> {
1182 // Allocate for the `RcBox<T>` using the given value.
1184 Self::allocate_for_layout(
1185 Layout::for_value(&*ptr),
1186 |layout| Global.allocate(layout),
1187 |mem| (ptr as *mut RcBox<T>).set_ptr_value(mem),
1192 fn from_box(v: Box<T>) -> Rc<T> {
1194 let (box_unique, alloc) = Box::into_unique(v);
1195 let bptr = box_unique.as_ptr();
1197 let value_size = size_of_val(&*bptr);
1198 let ptr = Self::allocate_for_ptr(bptr);
1200 // Copy value as bytes
1201 ptr::copy_nonoverlapping(
1202 bptr as *const T as *const u8,
1203 &mut (*ptr).value as *mut _ as *mut u8,
1207 // Free the allocation without dropping its contents
1208 box_free(box_unique, alloc);
1216 /// Allocates an `RcBox<[T]>` with the given length.
1217 unsafe fn allocate_for_slice(len: usize) -> *mut RcBox<[T]> {
1219 Self::allocate_for_layout(
1220 Layout::array::<T>(len).unwrap(),
1221 |layout| Global.allocate(layout),
1222 |mem| ptr::slice_from_raw_parts_mut(mem as *mut T, len) as *mut RcBox<[T]>,
1227 /// Copy elements from slice into newly allocated Rc<\[T\]>
1229 /// Unsafe because the caller must either take ownership or bind `T: Copy`
1230 unsafe fn copy_from_slice(v: &[T]) -> Rc<[T]> {
1232 let ptr = Self::allocate_for_slice(v.len());
1233 ptr::copy_nonoverlapping(v.as_ptr(), &mut (*ptr).value as *mut [T] as *mut T, v.len());
1238 /// Constructs an `Rc<[T]>` from an iterator known to be of a certain size.
1240 /// Behavior is undefined should the size be wrong.
1241 unsafe fn from_iter_exact(iter: impl iter::Iterator<Item = T>, len: usize) -> Rc<[T]> {
1242 // Panic guard while cloning T elements.
1243 // In the event of a panic, elements that have been written
1244 // into the new RcBox will be dropped, then the memory freed.
1252 impl<T> Drop for Guard<T> {
1253 fn drop(&mut self) {
1255 let slice = from_raw_parts_mut(self.elems, self.n_elems);
1256 ptr::drop_in_place(slice);
1258 Global.deallocate(self.mem, self.layout);
1264 let ptr = Self::allocate_for_slice(len);
1266 let mem = ptr as *mut _ as *mut u8;
1267 let layout = Layout::for_value(&*ptr);
1269 // Pointer to first element
1270 let elems = &mut (*ptr).value as *mut [T] as *mut T;
1272 let mut guard = Guard { mem: NonNull::new_unchecked(mem), elems, layout, n_elems: 0 };
1274 for (i, item) in iter.enumerate() {
1275 ptr::write(elems.add(i), item);
1279 // All clear. Forget the guard so it doesn't free the new RcBox.
1287 /// Specialization trait used for `From<&[T]>`.
1288 trait RcFromSlice<T> {
1289 fn from_slice(slice: &[T]) -> Self;
1292 impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
1294 default fn from_slice(v: &[T]) -> Self {
1295 unsafe { Self::from_iter_exact(v.iter().cloned(), v.len()) }
1299 impl<T: Copy> RcFromSlice<T> for Rc<[T]> {
1301 fn from_slice(v: &[T]) -> Self {
1302 unsafe { Rc::copy_from_slice(v) }
1306 #[stable(feature = "rust1", since = "1.0.0")]
1307 impl<T: ?Sized> Deref for Rc<T> {
1311 fn deref(&self) -> &T {
1316 #[unstable(feature = "receiver_trait", issue = "none")]
1317 impl<T: ?Sized> Receiver for Rc<T> {}
1319 #[stable(feature = "rust1", since = "1.0.0")]
1320 unsafe impl<#[may_dangle] T: ?Sized> Drop for Rc<T> {
1323 /// This will decrement the strong reference count. If the strong reference
1324 /// count reaches zero then the only other references (if any) are
1325 /// [`Weak`], so we `drop` the inner value.
1330 /// use std::rc::Rc;
1334 /// impl Drop for Foo {
1335 /// fn drop(&mut self) {
1336 /// println!("dropped!");
1340 /// let foo = Rc::new(Foo);
1341 /// let foo2 = Rc::clone(&foo);
1343 /// drop(foo); // Doesn't print anything
1344 /// drop(foo2); // Prints "dropped!"
1346 fn drop(&mut self) {
1348 self.inner().dec_strong();
1349 if self.inner().strong() == 0 {
1350 // destroy the contained object
1351 ptr::drop_in_place(Self::get_mut_unchecked(self));
1353 // remove the implicit "strong weak" pointer now that we've
1354 // destroyed the contents.
1355 self.inner().dec_weak();
1357 if self.inner().weak() == 0 {
1358 Global.deallocate(self.ptr.cast(), Layout::for_value(self.ptr.as_ref()));
1365 #[stable(feature = "rust1", since = "1.0.0")]
1366 impl<T: ?Sized> Clone for Rc<T> {
1367 /// Makes a clone of the `Rc` pointer.
1369 /// This creates another pointer to the same allocation, increasing the
1370 /// strong reference count.
1375 /// use std::rc::Rc;
1377 /// let five = Rc::new(5);
1379 /// let _ = Rc::clone(&five);
1382 fn clone(&self) -> Rc<T> {
1383 self.inner().inc_strong();
1384 Self::from_inner(self.ptr)
1388 #[stable(feature = "rust1", since = "1.0.0")]
1389 impl<T: Default> Default for Rc<T> {
1390 /// Creates a new `Rc<T>`, with the `Default` value for `T`.
1395 /// use std::rc::Rc;
1397 /// let x: Rc<i32> = Default::default();
1398 /// assert_eq!(*x, 0);
1401 fn default() -> Rc<T> {
1402 Rc::new(Default::default())
1406 #[stable(feature = "rust1", since = "1.0.0")]
1407 trait RcEqIdent<T: ?Sized + PartialEq> {
1408 fn eq(&self, other: &Rc<T>) -> bool;
1409 fn ne(&self, other: &Rc<T>) -> bool;
1412 #[stable(feature = "rust1", since = "1.0.0")]
1413 impl<T: ?Sized + PartialEq> RcEqIdent<T> for Rc<T> {
1415 default fn eq(&self, other: &Rc<T>) -> bool {
1420 default fn ne(&self, other: &Rc<T>) -> bool {
1425 // Hack to allow specializing on `Eq` even though `Eq` has a method.
1426 #[rustc_unsafe_specialization_marker]
1427 pub(crate) trait MarkerEq: PartialEq<Self> {}
1429 impl<T: Eq> MarkerEq for T {}
1431 /// We're doing this specialization here, and not as a more general optimization on `&T`, because it
1432 /// would otherwise add a cost to all equality checks on refs. We assume that `Rc`s are used to
1433 /// store large values that are slow to clone but also expensive to check for equality, so this
1434 /// cost is more likely to pay off. It's also more likely for two `Rc` clones to point to
1435 /// the same value than for two `&T`s to do so.
1437 /// We can only do this when `T: Eq` as a `PartialEq` might be deliberately irreflexive.
1438 #[stable(feature = "rust1", since = "1.0.0")]
1439 impl<T: ?Sized + MarkerEq> RcEqIdent<T> for Rc<T> {
1441 fn eq(&self, other: &Rc<T>) -> bool {
1442 Rc::ptr_eq(self, other) || **self == **other
1446 fn ne(&self, other: &Rc<T>) -> bool {
1447 !Rc::ptr_eq(self, other) && **self != **other
1451 #[stable(feature = "rust1", since = "1.0.0")]
1452 impl<T: ?Sized + PartialEq> PartialEq for Rc<T> {
1453 /// Equality for two `Rc`s.
1455 /// Two `Rc`s are equal if their inner values are equal, even if they are
1456 /// stored in different allocations.
1458 /// If `T` also implements `Eq` (implying reflexivity of equality),
1459 /// two `Rc`s that point to the same allocation are always equal.
1465 /// use std::rc::Rc;
1467 /// let five = Rc::new(5);
1469 /// assert!(five == Rc::new(5));
1472 fn eq(&self, other: &Rc<T>) -> bool {
1473 RcEqIdent::eq(self, other)
1476 /// Inequality for two `Rc`s.
1478 /// Two `Rc`s are unequal if their inner values are unequal.
1480 /// If `T` also implements `Eq` (implying reflexivity of equality),
1481 /// two `Rc`s that point to the same allocation are never unequal.
1487 /// use std::rc::Rc;
1489 /// let five = Rc::new(5);
1491 /// assert!(five != Rc::new(6));
1494 fn ne(&self, other: &Rc<T>) -> bool {
1495 RcEqIdent::ne(self, other)
1499 #[stable(feature = "rust1", since = "1.0.0")]
1500 impl<T: ?Sized + Eq> Eq for Rc<T> {}
1502 #[stable(feature = "rust1", since = "1.0.0")]
1503 impl<T: ?Sized + PartialOrd> PartialOrd for Rc<T> {
1504 /// Partial comparison for two `Rc`s.
1506 /// The two are compared by calling `partial_cmp()` on their inner values.
1511 /// use std::rc::Rc;
1512 /// use std::cmp::Ordering;
1514 /// let five = Rc::new(5);
1516 /// assert_eq!(Some(Ordering::Less), five.partial_cmp(&Rc::new(6)));
1519 fn partial_cmp(&self, other: &Rc<T>) -> Option<Ordering> {
1520 (**self).partial_cmp(&**other)
1523 /// Less-than comparison for two `Rc`s.
1525 /// The two are compared by calling `<` on their inner values.
1530 /// use std::rc::Rc;
1532 /// let five = Rc::new(5);
1534 /// assert!(five < Rc::new(6));
1537 fn lt(&self, other: &Rc<T>) -> bool {
1541 /// 'Less than or equal to' comparison for two `Rc`s.
1543 /// The two are compared by calling `<=` on their inner values.
1548 /// use std::rc::Rc;
1550 /// let five = Rc::new(5);
1552 /// assert!(five <= Rc::new(5));
1555 fn le(&self, other: &Rc<T>) -> bool {
1559 /// Greater-than comparison for two `Rc`s.
1561 /// The two are compared by calling `>` on their inner values.
1566 /// use std::rc::Rc;
1568 /// let five = Rc::new(5);
1570 /// assert!(five > Rc::new(4));
1573 fn gt(&self, other: &Rc<T>) -> bool {
1577 /// 'Greater than or equal to' comparison for two `Rc`s.
1579 /// The two are compared by calling `>=` on their inner values.
1584 /// use std::rc::Rc;
1586 /// let five = Rc::new(5);
1588 /// assert!(five >= Rc::new(5));
1591 fn ge(&self, other: &Rc<T>) -> bool {
1596 #[stable(feature = "rust1", since = "1.0.0")]
1597 impl<T: ?Sized + Ord> Ord for Rc<T> {
1598 /// Comparison for two `Rc`s.
1600 /// The two are compared by calling `cmp()` on their inner values.
1605 /// use std::rc::Rc;
1606 /// use std::cmp::Ordering;
1608 /// let five = Rc::new(5);
1610 /// assert_eq!(Ordering::Less, five.cmp(&Rc::new(6)));
1613 fn cmp(&self, other: &Rc<T>) -> Ordering {
1614 (**self).cmp(&**other)
1618 #[stable(feature = "rust1", since = "1.0.0")]
1619 impl<T: ?Sized + Hash> Hash for Rc<T> {
1620 fn hash<H: Hasher>(&self, state: &mut H) {
1621 (**self).hash(state);
1625 #[stable(feature = "rust1", since = "1.0.0")]
1626 impl<T: ?Sized + fmt::Display> fmt::Display for Rc<T> {
1627 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1628 fmt::Display::fmt(&**self, f)
1632 #[stable(feature = "rust1", since = "1.0.0")]
1633 impl<T: ?Sized + fmt::Debug> fmt::Debug for Rc<T> {
1634 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1635 fmt::Debug::fmt(&**self, f)
1639 #[stable(feature = "rust1", since = "1.0.0")]
1640 impl<T: ?Sized> fmt::Pointer for Rc<T> {
1641 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1642 fmt::Pointer::fmt(&(&**self as *const T), f)
1646 #[stable(feature = "from_for_ptrs", since = "1.6.0")]
1647 impl<T> From<T> for Rc<T> {
1648 fn from(t: T) -> Self {
1653 #[stable(feature = "shared_from_slice", since = "1.21.0")]
1654 impl<T: Clone> From<&[T]> for Rc<[T]> {
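    /// Example (a minimal sketch; the elements are cloned from the shared
    /// slice into a new reference-counted allocation):
    ///
    /// ```
    /// # use std::rc::Rc;
    /// let original: &[i32] = &[1, 2, 3];
    /// let shared: Rc<[i32]> = Rc::from(original);
    /// assert_eq!(&[1, 2, 3], &shared[..]);
    /// ```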
1656 fn from(v: &[T]) -> Rc<[T]> {
1657 <Self as RcFromSlice<T>>::from_slice(v)
1661 #[stable(feature = "shared_from_slice", since = "1.21.0")]
1662 impl From<&str> for Rc<str> {
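    /// Example (a minimal sketch; the string's bytes are copied into a new
    /// reference-counted allocation):
    ///
    /// ```
    /// # use std::rc::Rc;
    /// let shared: Rc<str> = Rc::from("statue");
    /// assert_eq!("statue", &shared[..]);
    /// ```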
1664 fn from(v: &str) -> Rc<str> {
1665 let rc = Rc::<[u8]>::from(v.as_bytes());
1666 unsafe { Rc::from_raw(Rc::into_raw(rc) as *const str) }
1670 #[stable(feature = "shared_from_slice", since = "1.21.0")]
1671 impl From<String> for Rc<str> {
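    /// Example (a minimal sketch):
    ///
    /// ```
    /// # use std::rc::Rc;
    /// let original: String = "statue".to_owned();
    /// let shared: Rc<str> = Rc::from(original);
    /// assert_eq!("statue", &shared[..]);
    /// ```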
1673 fn from(v: String) -> Rc<str> {
1678 #[stable(feature = "shared_from_slice", since = "1.21.0")]
1679 impl<T: ?Sized> From<Box<T>> for Rc<T> {
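    /// Example (a minimal sketch; the value is moved out of the `Box` into a
    /// new reference-counted allocation):
    ///
    /// ```
    /// # use std::rc::Rc;
    /// let unique: Box<str> = Box::from("eggplant");
    /// let shared: Rc<str> = Rc::from(unique);
    /// assert_eq!("eggplant", &shared[..]);
    /// ```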
1681 fn from(v: Box<T>) -> Rc<T> {
1686 #[stable(feature = "shared_from_slice", since = "1.21.0")]
1687 impl<T> From<Vec<T>> for Rc<[T]> {
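    /// Example (a minimal sketch; the elements are moved out of the `Vec`):
    ///
    /// ```
    /// # use std::rc::Rc;
    /// let unique: Vec<i32> = vec![1, 2, 3];
    /// let shared: Rc<[i32]> = Rc::from(unique);
    /// assert_eq!(&[1, 2, 3], &shared[..]);
    /// ```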
1689 fn from(mut v: Vec<T>) -> Rc<[T]> {
1691 let rc = Rc::copy_from_slice(&v);
1693 // Allow the Vec to free its memory, but not destroy its contents
1701 #[stable(feature = "shared_from_cow", since = "1.45.0")]
1702 impl<'a, B> From<Cow<'a, B>> for Rc<B>
1704 B: ToOwned + ?Sized,
1705 Rc<B>: From<&'a B> + From<B::Owned>,
1708 fn from(cow: Cow<'a, B>) -> Rc<B> {
1710 Cow::Borrowed(s) => Rc::from(s),
1711 Cow::Owned(s) => Rc::from(s),
1716 #[stable(feature = "boxed_slice_try_from", since = "1.43.0")]
1717 impl<T, const N: usize> TryFrom<Rc<[T]>> for Rc<[T; N]> {
1718 type Error = Rc<[T]>;
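    /// Example (a minimal sketch; the conversion succeeds only when the
    /// slice's length is exactly `N`):
    ///
    /// ```
    /// # use std::convert::TryFrom;
    /// # use std::rc::Rc;
    /// let slice: Rc<[i32]> = Rc::new([1, 2, 3]);
    /// let array: Rc<[i32; 3]> = Rc::try_from(slice).unwrap();
    /// assert_eq!([1, 2, 3], *array);
    /// ```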
1720 fn try_from(boxed_slice: Rc<[T]>) -> Result<Self, Self::Error> {
1721 if boxed_slice.len() == N {
1722 Ok(unsafe { Rc::from_raw(Rc::into_raw(boxed_slice) as *mut [T; N]) })
1729 #[stable(feature = "shared_from_iter", since = "1.37.0")]
1730 impl<T> iter::FromIterator<T> for Rc<[T]> {
1731 /// Takes each element in the `Iterator` and collects it into an `Rc<[T]>`.
1733 /// # Performance characteristics
1735 /// ## The general case
1737 /// In the general case, collecting into `Rc<[T]>` is done by first
1738 /// collecting into a `Vec<T>`. That is, when writing the following:
1741 /// # use std::rc::Rc;
1742 /// let evens: Rc<[u8]> = (0..10).filter(|&x| x % 2 == 0).collect();
1743 /// # assert_eq!(&*evens, &[0, 2, 4, 6, 8]);
1746 /// this behaves as if we wrote:
1749 /// # use std::rc::Rc;
1750 /// let evens: Rc<[u8]> = (0..10).filter(|&x| x % 2 == 0)
1751 /// .collect::<Vec<_>>() // The first set of allocations happens here.
1752 /// .into(); // A second allocation for `Rc<[T]>` happens here.
1753 /// # assert_eq!(&*evens, &[0, 2, 4, 6, 8]);
1756 /// This will allocate as many times as needed for constructing the `Vec<T>`
1757 /// and then it will allocate once for turning the `Vec<T>` into the `Rc<[T]>`.
1759 /// ## Iterators of known length
1761 /// When your `Iterator` implements `TrustedLen` and is of an exact size,
1762 /// a single allocation will be made for the `Rc<[T]>`. For example:
1765 /// # use std::rc::Rc;
1766 /// let evens: Rc<[u8]> = (0..10).collect(); // Just a single allocation happens here.
1767 /// # assert_eq!(&*evens, &*(0..10).collect::<Vec<_>>());
1769 fn from_iter<I: iter::IntoIterator<Item = T>>(iter: I) -> Self {
1770 ToRcSlice::to_rc_slice(iter.into_iter())
1774 /// Specialization trait used for collecting into `Rc<[T]>`.
1775 trait ToRcSlice<T>: Iterator<Item = T> + Sized {
1776 fn to_rc_slice(self) -> Rc<[T]>;
1779 impl<T, I: Iterator<Item = T>> ToRcSlice<T> for I {
1780 default fn to_rc_slice(self) -> Rc<[T]> {
1781 self.collect::<Vec<T>>().into()
1785 impl<T, I: iter::TrustedLen<Item = T>> ToRcSlice<T> for I {
1786 fn to_rc_slice(self) -> Rc<[T]> {
1787 // This is the case for a `TrustedLen` iterator.
1788 let (low, high) = self.size_hint();
1789 if let Some(high) = high {
1793 "TrustedLen iterator's size hint is not exact: {:?}",
1798 // SAFETY: We need to ensure that the iterator has an exact length and we have.
1799 Rc::from_iter_exact(self, low)
1802 // Fall back to normal implementation.
1803 self.collect::<Vec<T>>().into()
1808 /// `Weak` is a version of [`Rc`] that holds a non-owning reference to the
1809 /// managed allocation. The allocation is accessed by calling [`upgrade`] on the `Weak`
1810 /// pointer, which returns an [`Option`]`<`[`Rc`]`<T>>`.
1812 /// Since a `Weak` reference does not count towards ownership, it will not
1813 /// prevent the value stored in the allocation from being dropped, and `Weak` itself makes no
1814 /// guarantees about the value still being present. Thus it may return [`None`]
1815 /// when [`upgrade`]d. Note however that a `Weak` reference *does* prevent the allocation
1816 /// itself (the backing store) from being deallocated.
1818 /// A `Weak` pointer is useful for keeping a temporary reference to the allocation
1819 /// managed by [`Rc`] without preventing its inner value from being dropped. It is also used to
1820 /// prevent circular references between [`Rc`] pointers, since mutual owning references
1821 /// would never allow either [`Rc`] to be dropped. For example, a tree could
1822 /// have strong [`Rc`] pointers from parent nodes to children, and `Weak`
1823 /// pointers from children back to their parents.
1825 /// The typical way to obtain a `Weak` pointer is to call [`Rc::downgrade`].
1827 /// [`upgrade`]: Weak::upgrade
1828 #[stable(feature = "rc_weak", since = "1.4.0")]
1829 pub struct Weak<T: ?Sized> {
1830 // This is a `NonNull` to allow optimizing the size of this type in enums,
1831 // but it is not necessarily a valid pointer.
1832 // `Weak::new` sets this to `usize::MAX` so that it doesn’t need
1833 // to allocate space on the heap. That's not a value a real pointer
1834 // will ever have because RcBox has alignment at least 2.
1835 // This is only possible when `T: Sized`; unsized `T` never dangle.
1836 ptr: NonNull<RcBox<T>>,
1839 #[stable(feature = "rc_weak", since = "1.4.0")]
1840 impl<T: ?Sized> !marker::Send for Weak<T> {}
1841 #[stable(feature = "rc_weak", since = "1.4.0")]
1842 impl<T: ?Sized> !marker::Sync for Weak<T> {}
1844 #[unstable(feature = "coerce_unsized", issue = "27732")]
1845 impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<Weak<U>> for Weak<T> {}
1847 #[unstable(feature = "dispatch_from_dyn", issue = "none")]
1848 impl<T: ?Sized + Unsize<U>, U: ?Sized> DispatchFromDyn<Weak<U>> for Weak<T> {}
1851 /// Constructs a new `Weak<T>`, without allocating any memory.
1852 /// Calling [`upgrade`] on the return value always gives [`None`].
1854 /// [`upgrade`]: Weak::upgrade
1859 /// use std::rc::Weak;
1861 /// let empty: Weak<i64> = Weak::new();
1862 /// assert!(empty.upgrade().is_none());
1864 #[stable(feature = "downgraded_weak", since = "1.10.0")]
1865 pub fn new() -> Weak<T> {
1866 Weak { ptr: NonNull::new(usize::MAX as *mut RcBox<T>).expect("MAX is not 0") }
1870 pub(crate) fn is_dangling<T: ?Sized>(ptr: *mut T) -> bool {
1871 let address = ptr as *mut () as usize;
1872 address == usize::MAX
1875 /// Helper type to allow accessing the reference counts without
1876 /// making any assertions about the data field.
1877 struct WeakInner<'a> {
1878 weak: &'a Cell<usize>,
1879 strong: &'a Cell<usize>,
1882 impl<T: ?Sized> Weak<T> {
1883 /// Returns a raw pointer to the object `T` pointed to by this `Weak<T>`.
1885 /// The pointer is valid only if there are some strong references. The pointer may be dangling,
1886 /// unaligned or even [`null`] otherwise.
1891 /// use std::rc::Rc;
1894 /// let strong = Rc::new("hello".to_owned());
1895 /// let weak = Rc::downgrade(&strong);
1896 /// // Both point to the same object
1897 /// assert!(ptr::eq(&*strong, weak.as_ptr()));
1898 /// // The strong here keeps it alive, so we can still access the object.
1899 /// assert_eq!("hello", unsafe { &*weak.as_ptr() });
1902 /// // But not any more. We can do weak.as_ptr(), but accessing the pointer would lead to
1903 /// // undefined behaviour.
1904 /// // assert_eq!("hello", unsafe { &*weak.as_ptr() });
1907 /// [`null`]: core::ptr::null
1908 #[stable(feature = "rc_as_ptr", since = "1.45.0")]
1909 pub fn as_ptr(&self) -> *const T {
1910 let ptr: *mut RcBox<T> = NonNull::as_ptr(self.ptr);
1912 if is_dangling(ptr) {
1913 // If the pointer is dangling, we return the sentinel directly. This cannot be
1914 // a valid payload address, as the payload is at least as aligned as RcBox (usize).
1917 // SAFETY: if is_dangling returns false, then the pointer is dereferenceable.
1918 // The payload may be dropped at this point, and we have to maintain provenance,
1919 // so use raw pointer manipulation.
1920 unsafe { ptr::addr_of_mut!((*ptr).value) }
1924 /// Consumes the `Weak<T>` and turns it into a raw pointer.
1926 /// This converts the weak pointer into a raw pointer, while still preserving the ownership of
1927 /// one weak reference (the weak count is not modified by this operation). It can be turned
1928 /// back into the `Weak<T>` with [`from_raw`].
1930 /// The same restrictions of accessing the target of the pointer as with
1931 /// [`as_ptr`] apply.
1936 /// use std::rc::{Rc, Weak};
1938 /// let strong = Rc::new("hello".to_owned());
1939 /// let weak = Rc::downgrade(&strong);
1940 /// let raw = weak.into_raw();
1942 /// assert_eq!(1, Rc::weak_count(&strong));
1943 /// assert_eq!("hello", unsafe { &*raw });
1945 /// drop(unsafe { Weak::from_raw(raw) });
1946 /// assert_eq!(0, Rc::weak_count(&strong));
1949 /// [`from_raw`]: Weak::from_raw
1950 /// [`as_ptr`]: Weak::as_ptr
1951 #[stable(feature = "weak_into_raw", since = "1.45.0")]
1952 pub fn into_raw(self) -> *const T {
1953 let result = self.as_ptr();
1958 /// Converts a raw pointer previously created by [`into_raw`] back into `Weak<T>`.
1960 /// This can be used to safely get a strong reference (by calling [`upgrade`]
1961 /// later) or to deallocate the weak count by dropping the `Weak<T>`.
1963 /// It takes ownership of one weak reference (with the exception of pointers created by [`new`],
1964 /// as these don't own anything; the method still works on them).
1968 /// The pointer must have originated from the [`into_raw`] and must still own its potential weak reference.
1971 /// It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this
1972 /// takes ownership of one weak reference currently represented as a raw pointer (the weak
1973 /// count is not modified by this operation) and therefore it must be paired with a previous
1974 /// call to [`into_raw`].
1979 /// use std::rc::{Rc, Weak};
1981 /// let strong = Rc::new("hello".to_owned());
1983 /// let raw_1 = Rc::downgrade(&strong).into_raw();
1984 /// let raw_2 = Rc::downgrade(&strong).into_raw();
1986 /// assert_eq!(2, Rc::weak_count(&strong));
1988 /// assert_eq!("hello", &*unsafe { Weak::from_raw(raw_1) }.upgrade().unwrap());
1989 /// assert_eq!(1, Rc::weak_count(&strong));
1993 /// // Decrement the last weak count.
1994 /// assert!(unsafe { Weak::from_raw(raw_2) }.upgrade().is_none());
1997 /// [`into_raw`]: Weak::into_raw
1998 /// [`upgrade`]: Weak::upgrade
1999 /// [`new`]: Weak::new
2000 #[stable(feature = "weak_into_raw", since = "1.45.0")]
2001 pub unsafe fn from_raw(ptr: *const T) -> Self {
2002 // See Weak::as_ptr for context on how the input pointer is derived.
2004 let ptr = if is_dangling(ptr as *mut T) {
2005 // This is a dangling Weak.
2006 ptr as *mut RcBox<T>
2008 // Otherwise, we're guaranteed the pointer came from a nondangling Weak.
2009 // SAFETY: data_offset is safe to call, as ptr references a real (potentially dropped) T.
2010 let offset = unsafe { data_offset(ptr) };
2011 // Thus, we reverse the offset to get the whole RcBox.
2012 // SAFETY: the pointer originated from a Weak, so this offset is safe.
2013 unsafe { (ptr as *mut RcBox<T>).set_ptr_value((ptr as *mut u8).offset(-offset)) }
2016 // SAFETY: we now have recovered the original Weak pointer, so can create the Weak.
2017 Weak { ptr: unsafe { NonNull::new_unchecked(ptr) } }
2020 /// Attempts to upgrade the `Weak` pointer to an [`Rc`], delaying
2021 /// dropping of the inner value if successful.
2023 /// Returns [`None`] if the inner value has since been dropped.
2028 /// use std::rc::Rc;
2030 /// let five = Rc::new(5);
2032 /// let weak_five = Rc::downgrade(&five);
2034 /// let strong_five: Option<Rc<_>> = weak_five.upgrade();
2035 /// assert!(strong_five.is_some());
2037 /// // Destroy all strong pointers.
2038 /// drop(strong_five);
2041 /// assert!(weak_five.upgrade().is_none());
2043 #[stable(feature = "rc_weak", since = "1.4.0")]
2044 pub fn upgrade(&self) -> Option<Rc<T>> {
2045 let inner = self.inner()?;
2046 if inner.strong() == 0 {
2050 Some(Rc::from_inner(self.ptr))
2054 /// Gets the number of strong (`Rc`) pointers pointing to this allocation.
2056 /// If `self` was created using [`Weak::new`], this will return 0.
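    ///
    /// # Examples
    ///
    /// A minimal sketch:
    ///
    /// ```
    /// use std::rc::Rc;
    ///
    /// let five = Rc::new(5);
    /// let weak_five = Rc::downgrade(&five);
    /// assert_eq!(1, weak_five.strong_count());
    ///
    /// drop(five);
    /// assert_eq!(0, weak_five.strong_count());
    /// ```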
2057 #[stable(feature = "weak_counts", since = "1.41.0")]
2058 pub fn strong_count(&self) -> usize {
2059 if let Some(inner) = self.inner() { inner.strong() } else { 0 }

    /// Gets the number of `Weak` pointers pointing to this allocation.
    ///
    /// If no strong pointers remain, this will return zero.
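    ///
    /// # Examples
    ///
    /// A short sketch showing that the implicit weak pointer held collectively
    /// by the strong pointers is not included in the result:
    ///
    /// ```
    /// use std::rc::Rc;
    ///
    /// let five = Rc::new(5);
    /// let weak_five = Rc::downgrade(&five);
    /// assert_eq!(1, weak_five.weak_count());
    ///
    /// let also_weak = weak_five.clone();
    /// assert_eq!(2, weak_five.weak_count());
    ///
    /// // Once all strong pointers are gone, the count reports zero.
    /// drop(five);
    /// assert_eq!(0, weak_five.weak_count());
    /// ```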
    #[stable(feature = "weak_counts", since = "1.41.0")]
    pub fn weak_count(&self) -> usize {
        self.inner()
            .map(|inner| {
                if inner.strong() > 0 {
                    inner.weak() - 1 // subtract the implicit weak ptr
                } else {
                    0
                }
            })
            .unwrap_or(0)
    }

    /// Returns `None` when the pointer is dangling and there is no allocated `RcBox`
    /// (i.e., when this `Weak` was created by `Weak::new`).
    #[inline]
    fn inner(&self) -> Option<WeakInner<'_>> {
        if is_dangling(self.ptr.as_ptr()) {
            None
        } else {
            // We are careful to *not* create a reference covering the "data" field, as
            // the field may be mutated concurrently (for example, if the last `Rc`
            // is dropped, the data field will be dropped in-place).
            Some(unsafe {
                let ptr = self.ptr.as_ptr();
                WeakInner { strong: &(*ptr).strong, weak: &(*ptr).weak }
            })
        }
    }

    /// Returns `true` if the two `Weak`s point to the same allocation (similar to
    /// [`ptr::eq`]), or if both don't point to any allocation
    /// (because they were created with `Weak::new()`).
    ///
    /// # Notes
    ///
    /// Since this compares pointers, two `Weak`s created by `Weak::new()` will
    /// compare equal to each other, even though they don't point to any allocation.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::rc::Rc;
    ///
    /// let first_rc = Rc::new(5);
    /// let first = Rc::downgrade(&first_rc);
    /// let second = Rc::downgrade(&first_rc);
    ///
    /// assert!(first.ptr_eq(&second));
    ///
    /// let third_rc = Rc::new(5);
    /// let third = Rc::downgrade(&third_rc);
    ///
    /// assert!(!first.ptr_eq(&third));
    /// ```
    ///
    /// Comparing `Weak::new`:
    ///
    /// ```
    /// use std::rc::{Rc, Weak};
    ///
    /// let first = Weak::new();
    /// let second = Weak::new();
    /// assert!(first.ptr_eq(&second));
    ///
    /// let third_rc = Rc::new(());
    /// let third = Rc::downgrade(&third_rc);
    /// assert!(!first.ptr_eq(&third));
    /// ```
    ///
    /// [`ptr::eq`]: core::ptr::eq
    #[inline]
    #[stable(feature = "weak_ptr_eq", since = "1.39.0")]
    pub fn ptr_eq(&self, other: &Self) -> bool {
        self.ptr.as_ptr() == other.ptr.as_ptr()
    }
}

#[stable(feature = "rc_weak", since = "1.4.0")]
impl<T: ?Sized> Drop for Weak<T> {
    /// Drops the `Weak` pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::rc::{Rc, Weak};
    ///
    /// struct Foo;
    ///
    /// impl Drop for Foo {
    ///     fn drop(&mut self) {
    ///         println!("dropped!");
    ///     }
    /// }
    ///
    /// let foo = Rc::new(Foo);
    /// let weak_foo = Rc::downgrade(&foo);
    /// let other_weak_foo = Weak::clone(&weak_foo);
    ///
    /// drop(weak_foo);   // Doesn't print anything
    /// drop(foo);        // Prints "dropped!"
    ///
    /// assert!(other_weak_foo.upgrade().is_none());
    /// ```
    fn drop(&mut self) {
        let inner = if let Some(inner) = self.inner() { inner } else { return };

        inner.dec_weak();
        // The weak count starts at 1, and will only go to zero if all
        // the strong pointers have disappeared.
        if inner.weak() == 0 {
            unsafe {
                Global.deallocate(self.ptr.cast(), Layout::for_value_raw(self.ptr.as_ptr()));
            }
        }
    }
}

#[stable(feature = "rc_weak", since = "1.4.0")]
impl<T: ?Sized> Clone for Weak<T> {
    /// Makes a clone of the `Weak` pointer that points to the same allocation.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::rc::{Rc, Weak};
    ///
    /// let weak_five = Rc::downgrade(&Rc::new(5));
    ///
    /// let _ = Weak::clone(&weak_five);
    /// ```
    #[inline]
    fn clone(&self) -> Weak<T> {
        if let Some(inner) = self.inner() {
            inner.inc_weak()
        }
        Weak { ptr: self.ptr }
    }
}

#[stable(feature = "rc_weak", since = "1.4.0")]
impl<T: ?Sized + fmt::Debug> fmt::Debug for Weak<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "(Weak)")
    }
}
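
// For illustration: formatting a `Weak` yields a fixed placeholder rather than
// the inner value, which may already have been dropped.
//
//     use std::rc::Rc;
//
//     let weak = Rc::downgrade(&Rc::new(5));
//     assert_eq!("(Weak)", format!("{:?}", weak));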

#[stable(feature = "downgraded_weak", since = "1.10.0")]
impl<T> Default for Weak<T> {
    /// Constructs a new `Weak<T>`, without allocating any memory.
    /// Calling [`upgrade`] on the return value always gives [`None`].
    ///
    /// [`None`]: Option
    /// [`upgrade`]: Weak::upgrade
    ///
    /// # Examples
    ///
    /// ```
    /// use std::rc::Weak;
    ///
    /// let empty: Weak<i64> = Default::default();
    /// assert!(empty.upgrade().is_none());
    /// ```
    fn default() -> Weak<T> {
        Weak::new()
    }
}

// NOTE: We guard the increments below against overflow to deal with
// mem::forget safely. In particular, if you mem::forget Rcs (or Weaks), the
// ref-count can overflow, and then you can free the allocation while
// outstanding Rcs (or Weaks) exist, which would be unsound. We abort because
// this is such a degenerate scenario that we don't care what happens -- no
// real program should ever experience this.
//
// This should have negligible overhead since you don't actually need to
// clone these much in Rust thanks to ownership and move-semantics.
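//
// For illustration, the degenerate case being guarded against looks like the
// loop below: every iteration leaks one strong count, so the counter would
// eventually reach usize::MAX and wrap if inc_strong did not abort first.
//
//     use std::{mem, rc::Rc};
//
//     let rc = Rc::new(());
//     loop {
//         mem::forget(rc.clone()); // bumps the strong count; never decremented
//     }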

trait RcInnerPtr {
    fn weak_ref(&self) -> &Cell<usize>;
    fn strong_ref(&self) -> &Cell<usize>;

    #[inline]
    fn strong(&self) -> usize {
        self.strong_ref().get()
    }

    #[inline]
    fn inc_strong(&self) {
        let strong = self.strong();

        // We want to abort on overflow instead of dropping the value.
        // The reference count will never be zero when this is called;
        // nevertheless, we insert an abort here to hint LLVM at
        // an otherwise missed optimization.
        if strong == 0 || strong == usize::MAX {
            abort();
        }
        self.strong_ref().set(strong + 1);
    }

    #[inline]
    fn dec_strong(&self) {
        self.strong_ref().set(self.strong() - 1);
    }

    #[inline]
    fn weak(&self) -> usize {
        self.weak_ref().get()
    }

    #[inline]
    fn inc_weak(&self) {
        let weak = self.weak();

        // We want to abort on overflow instead of dropping the value.
        // The reference count will never be zero when this is called;
        // nevertheless, we insert an abort here to hint LLVM at
        // an otherwise missed optimization.
        if weak == 0 || weak == usize::MAX {
            abort();
        }
        self.weak_ref().set(weak + 1);
    }

    #[inline]
    fn dec_weak(&self) {
        self.weak_ref().set(self.weak() - 1);
    }
}

impl<T: ?Sized> RcInnerPtr for RcBox<T> {
    #[inline(always)]
    fn weak_ref(&self) -> &Cell<usize> {
        &self.weak
    }

    #[inline(always)]
    fn strong_ref(&self) -> &Cell<usize> {
        &self.strong
    }
}

impl<'a> RcInnerPtr for WeakInner<'a> {
    #[inline(always)]
    fn weak_ref(&self) -> &Cell<usize> {
        self.weak
    }

    #[inline(always)]
    fn strong_ref(&self) -> &Cell<usize> {
        self.strong
    }
}

#[stable(feature = "rust1", since = "1.0.0")]
impl<T: ?Sized> borrow::Borrow<T> for Rc<T> {
    fn borrow(&self) -> &T {
        &**self
    }
}

#[stable(since = "1.5.0", feature = "smart_ptr_as_ref")]
impl<T: ?Sized> AsRef<T> for Rc<T> {
    fn as_ref(&self) -> &T {
        &**self
    }
}
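
// Both impls above simply forward through `Deref`, so an `Rc<T>` can be passed
// to generic code that only asks for `AsRef<T>` or `Borrow<T>`. For example:
//
//     use std::rc::Rc;
//
//     fn double(n: impl AsRef<i32>) -> i32 {
//         *n.as_ref() * 2
//     }
//
//     assert_eq!(10, double(Rc::new(5)));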

#[stable(feature = "pin", since = "1.33.0")]
impl<T: ?Sized> Unpin for Rc<T> {}

/// Gets the offset within an `RcBox` for the payload behind a pointer.
///
/// # Safety
///
/// The pointer must point to (and have valid metadata for) a previously
/// valid instance of T, but the T is allowed to be dropped.
unsafe fn data_offset<T: ?Sized>(ptr: *const T) -> isize {
    // Align the unsized value to the end of the RcBox.
    // Because RcBox is repr(C), it will always be the last field in memory.
    // SAFETY: since the only unsized types possible are slices, trait objects,
    // and extern types, the input safety requirement is currently enough to
    // satisfy the requirements of align_of_val_raw; this is an implementation
    // detail of the language that may not be relied upon outside of std.
    unsafe { data_offset_align(align_of_val_raw(ptr)) }
}

#[inline]
fn data_offset_align(align: usize) -> isize {
    let layout = Layout::new::<RcBox<()>>();
    (layout.size() + layout.padding_needed_for(align)) as isize
}
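
// For illustration: `RcBox<()>` is just the two `Cell<usize>` counters, so on
// a 64-bit target `layout.size()` is 16 with alignment 8. A payload alignment
// of 8 or less needs no extra padding (offset 16), while a 32-byte-aligned
// payload needs 16 bytes of padding (offset 32):
//
//     assert_eq!(16, data_offset_align(8));
//     assert_eq!(32, data_offset_align(32));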