1 //! Manually manage memory through raw pointers.
3 //! *[See also the pointer primitive types](pointer).*
//! # Safety
//!
//! Many functions in this module take raw pointers as arguments and read from
8 //! or write to them. For this to be safe, these pointers must be *valid*.
9 //! Whether a pointer is valid depends on the operation it is used for
10 //! (read or write), and the extent of the memory that is accessed (i.e.,
11 //! how many bytes are read/written). Most functions use `*mut T` and `*const T`
12 //! to access only a single value, in which case the documentation omits the size
13 //! and implicitly assumes it to be `size_of::<T>()` bytes.
15 //! The precise rules for validity are not determined yet. The guarantees that are
16 //! provided at this point are very minimal:
18 //! * A [null] pointer is *never* valid, not even for accesses of [size zero][zst].
19 //! * For a pointer to be valid, it is necessary, but not always sufficient, that the pointer
20 //! be *dereferenceable*: the memory range of the given size starting at the pointer must all be
21 //! within the bounds of a single allocated object. Note that in Rust,
22 //! every (stack-allocated) variable is considered a separate allocated object.
23 //! * Even for operations of [size zero][zst], the pointer must not be pointing to deallocated
24 //! memory, i.e., deallocation makes pointers invalid even for zero-sized operations. However,
25 //! casting any non-zero integer *literal* to a pointer is valid for zero-sized accesses, even if
26 //! some memory happens to exist at that address and gets deallocated. This corresponds to writing
27 //! your own allocator: allocating zero-sized objects is not very hard. The canonical way to
28 //! obtain a pointer that is valid for zero-sized accesses is [`NonNull::dangling`].
29 //! * All accesses performed by functions in this module are *non-atomic* in the sense
30 //! of [atomic operations] used to synchronize between threads. This means it is
31 //! undefined behavior to perform two concurrent accesses to the same location from different
32 //! threads unless both accesses only read from memory. Notice that this explicitly
33 //! includes [`read_volatile`] and [`write_volatile`]: Volatile accesses cannot
34 //! be used for inter-thread synchronization.
35 //! * The result of casting a reference to a pointer is valid for as long as the
36 //! underlying object is live and no reference (just raw pointers) is used to
37 //! access the same memory.
39 //! These axioms, along with careful use of [`offset`] for pointer arithmetic,
40 //! are enough to correctly implement many useful things in unsafe code. Stronger guarantees
41 //! will be provided eventually, as the [aliasing] rules are being determined. For more
42 //! information, see the [book] as well as the section in the reference devoted
43 //! to [undefined behavior][ub].
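//!
//! For example, a minimal sketch of these rules in action:
//!
//! ```
//! let x = 5u32;
//! // A pointer derived from a reference is valid for reads as long as `x` is live and
//! // no reference is used to access `x` in the meantime.
//! let p = &x as *const u32;
//! assert_eq!(unsafe { p.read() }, 5);
//!
//! // A null pointer, by contrast, is never valid, not even for zero-sized accesses.
//! assert!(std::ptr::null::<u32>().is_null());
//! ```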
//! # Alignment
//!
//! Valid raw pointers as defined above are not necessarily properly aligned (where
48 //! "proper" alignment is defined by the pointee type, i.e., `*const T` must be
49 //! aligned to `mem::align_of::<T>()`). However, most functions require their
50 //! arguments to be properly aligned, and will explicitly state
51 //! this requirement in their documentation. Notable exceptions to this are
52 //! [`read_unaligned`] and [`write_unaligned`].
54 //! When a function requires proper alignment, it does so even if the access
55 //! has size 0, i.e., even if memory is not actually touched. Consider using
56 //! [`NonNull::dangling`] in such cases.
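//!
//! For instance, a brief sketch: even a zero-element `copy` needs aligned, non-null
//! pointers, and [`NonNull::dangling`] provides one without allocating:
//!
//! ```
//! use std::ptr::{self, NonNull};
//!
//! let p: *mut u64 = NonNull::dangling().as_ptr();
//! // No bytes are touched, but `p` must still be non-null and aligned for `u64`.
//! unsafe { ptr::copy(p, p, 0) };
//! ```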
58 //! ## Allocated object
60 //! For several operations, such as [`offset`] or field projections (`expr.field`), the notion of an
61 //! "allocated object" becomes relevant. An allocated object is a contiguous region of memory.
62 //! Common examples of allocated objects include stack-allocated variables (each variable is a
63 //! separate allocated object), heap allocations (each allocation created by the global allocator is
64 //! a separate allocated object), and `static` variables.
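//!
//! For example, a brief sketch of why this matters for [`offset`]: in-bounds pointer
//! arithmetic must stay within one allocated object:
//!
//! ```
//! let arr = [1u8, 2, 3, 4];
//! let first = arr.as_ptr();
//! // `arr` is a single allocated object, so offsets up to its length stay in bounds.
//! let last = unsafe { first.add(3) };
//! assert_eq!(unsafe { *last }, 4);
//! ```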
66 //! [aliasing]: ../../nomicon/aliasing.html
67 //! [book]: ../../book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
68 //! [ub]: ../../reference/behavior-considered-undefined.html
69 //! [zst]: ../../nomicon/exotic-sizes.html#zero-sized-types-zsts
70 //! [atomic operations]: crate::sync::atomic
71 //! [`offset`]: pointer::offset
73 #![stable(feature = "rust1", since = "1.0.0")]
75 use crate::cmp::Ordering;
use crate::fmt;
use crate::hash;
use crate::intrinsics::{
    self, assert_unsafe_precondition, is_aligned_and_not_null, is_nonoverlapping,
};
82 use crate::mem::{self, MaybeUninit};
84 #[stable(feature = "rust1", since = "1.0.0")]
86 pub use crate::intrinsics::copy_nonoverlapping;
88 #[stable(feature = "rust1", since = "1.0.0")]
90 pub use crate::intrinsics::copy;
92 #[stable(feature = "rust1", since = "1.0.0")]
94 pub use crate::intrinsics::write_bytes;
mod metadata;
pub(crate) use metadata::PtrRepr;
98 #[unstable(feature = "ptr_metadata", issue = "81513")]
99 pub use metadata::{from_raw_parts, from_raw_parts_mut, metadata, DynMetadata, Pointee, Thin};
102 #[stable(feature = "nonnull", since = "1.25.0")]
103 pub use non_null::NonNull;
mod unique;
#[unstable(feature = "ptr_internals", issue = "none")]
107 pub use unique::Unique;
112 /// Executes the destructor (if any) of the pointed-to value.
114 /// This is semantically equivalent to calling [`ptr::read`] and discarding
115 /// the result, but has the following advantages:
117 /// * It is *required* to use `drop_in_place` to drop unsized types like
118 /// trait objects, because they can't be read out onto the stack and
119 /// dropped normally.
121 /// * It is friendlier to the optimizer to do this over [`ptr::read`] when
122 /// dropping manually allocated memory (e.g., in the implementations of
123 /// `Box`/`Rc`/`Vec`), as the compiler doesn't need to prove that it's
124 /// sound to elide the copy.
126 /// * It can be used to drop [pinned] data when `T` is not `repr(packed)`
127 /// (pinned data must not be moved before it is dropped).
129 /// Unaligned values cannot be dropped in place, they must be copied to an aligned
130 /// location first using [`ptr::read_unaligned`]. For packed structs, this move is
131 /// done automatically by the compiler. This means the fields of packed structs
132 /// are not dropped in-place.
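///
/// For example, a sketch of performing that copy by hand with [`ptr::read_unaligned`]
/// (the `Packed` type here is hypothetical, for illustration only):
///
/// ```
/// use std::mem::ManuallyDrop;
/// use std::ptr;
///
/// #[repr(packed)]
/// struct Packed {
///     byte: u8,
///     field: ManuallyDrop<String>,
/// }
///
/// let p = Packed { byte: 0, field: ManuallyDrop::new(String::from("hi")) };
/// // `p.field` may be misaligned, so it cannot be dropped in place. Read it out to an
/// // aligned location with `read_unaligned`, then drop the aligned copy.
/// let field = unsafe { ptr::read_unaligned(ptr::addr_of!(p.field)) };
/// drop(ManuallyDrop::into_inner(field));
/// ```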
134 /// [`ptr::read`]: self::read
135 /// [`ptr::read_unaligned`]: self::read_unaligned
136 /// [pinned]: crate::pin
140 /// Behavior is undefined if any of the following conditions are violated:
142 /// * `to_drop` must be [valid] for both reads and writes.
144 /// * `to_drop` must be properly aligned.
146 /// * The value `to_drop` points to must be valid for dropping, which may mean it must uphold
147 /// additional invariants - this is type-dependent.
149 /// Additionally, if `T` is not [`Copy`], using the pointed-to value after
150 /// calling `drop_in_place` can cause undefined behavior. Note that `*to_drop =
151 /// foo` counts as a use because it will cause the value to be dropped
/// again. [`write()`] can be used to overwrite data without causing it to be
/// dropped.
155 /// Note that even if `T` has size `0`, the pointer must be non-null and properly aligned.
157 /// [valid]: self#safety
161 /// Manually remove the last item from a vector:
167 /// let last = Rc::new(1);
168 /// let weak = Rc::downgrade(&last);
170 /// let mut v = vec![Rc::new(0), last];
173 /// // Get a raw pointer to the last element in `v`.
174 /// let ptr = &mut v[1] as *mut _;
175 /// // Shorten `v` to prevent the last item from being dropped. We do that first,
/// // to prevent issues if the `drop_in_place` below panics.
/// v.set_len(1);
///
/// // Without a call to `drop_in_place`, the last item would never be dropped,
179 /// // and the memory it manages would be leaked.
180 /// ptr::drop_in_place(ptr);
183 /// assert_eq!(v, &[0.into()]);
185 /// // Ensure that the last item was dropped.
186 /// assert!(weak.upgrade().is_none());
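///
/// Dropping a value that lives in manually managed storage (a sketch using `MaybeUninit`
/// as the storage):
///
/// ```
/// use std::{mem::MaybeUninit, ptr};
///
/// let mut slot = MaybeUninit::new(String::from("hello"));
/// // Run the `String` destructor without moving the value out of `slot`.
/// unsafe { ptr::drop_in_place(slot.as_mut_ptr()) };
/// // From here on, `slot` must be treated as uninitialized again.
/// ```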
188 #[stable(feature = "drop_in_place", since = "1.8.0")]
189 #[lang = "drop_in_place"]
190 #[allow(unconditional_recursion)]
191 pub unsafe fn drop_in_place<T: ?Sized>(to_drop: *mut T) {
192 // Code here does not matter - this is replaced by the
193 // real drop glue by the compiler.
195 // SAFETY: see comment above
196 unsafe { drop_in_place(to_drop) }
199 /// Creates a null raw pointer.
206 /// let p: *const i32 = ptr::null();
207 /// assert!(p.is_null());
211 #[stable(feature = "rust1", since = "1.0.0")]
213 #[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
214 #[rustc_diagnostic_item = "ptr_null"]
pub const fn null<T>() -> *const T {
    0 as *const T
}
219 /// Creates a null mutable raw pointer.
226 /// let p: *mut i32 = ptr::null_mut();
227 /// assert!(p.is_null());
231 #[stable(feature = "rust1", since = "1.0.0")]
233 #[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
234 #[rustc_diagnostic_item = "ptr_null_mut"]
pub const fn null_mut<T>() -> *mut T {
    0 as *mut T
}
239 /// Forms a raw slice from a pointer and a length.
241 /// The `len` argument is the number of **elements**, not the number of bytes.
243 /// This function is safe, but actually using the return value is unsafe.
244 /// See the documentation of [`slice::from_raw_parts`] for slice safety requirements.
246 /// [`slice::from_raw_parts`]: crate::slice::from_raw_parts
253 /// // create a slice pointer when starting out with a pointer to the first element
254 /// let x = [5, 6, 7];
255 /// let raw_pointer = x.as_ptr();
256 /// let slice = ptr::slice_from_raw_parts(raw_pointer, 3);
257 /// assert_eq!(unsafe { &*slice }[2], 7);
260 #[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
261 #[rustc_const_unstable(feature = "const_slice_from_raw_parts", issue = "67456")]
262 pub const fn slice_from_raw_parts<T>(data: *const T, len: usize) -> *const [T] {
263 from_raw_parts(data.cast(), len)
266 /// Performs the same functionality as [`slice_from_raw_parts`], except that a
267 /// raw mutable slice is returned, as opposed to a raw immutable slice.
269 /// See the documentation of [`slice_from_raw_parts`] for more details.
271 /// This function is safe, but actually using the return value is unsafe.
272 /// See the documentation of [`slice::from_raw_parts_mut`] for slice safety requirements.
274 /// [`slice::from_raw_parts_mut`]: crate::slice::from_raw_parts_mut
281 /// let x = &mut [5, 6, 7];
282 /// let raw_pointer = x.as_mut_ptr();
283 /// let slice = ptr::slice_from_raw_parts_mut(raw_pointer, 3);
286 /// (*slice)[2] = 99; // assign a value at an index in the slice
289 /// assert_eq!(unsafe { &*slice }[2], 99);
292 #[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
293 #[rustc_const_unstable(feature = "const_slice_from_raw_parts", issue = "67456")]
294 pub const fn slice_from_raw_parts_mut<T>(data: *mut T, len: usize) -> *mut [T] {
295 from_raw_parts_mut(data.cast(), len)
298 /// Swaps the values at two mutable locations of the same type, without
299 /// deinitializing either.
301 /// But for the following two exceptions, this function is semantically
302 /// equivalent to [`mem::swap`]:
304 /// * It operates on raw pointers instead of references. When references are
305 /// available, [`mem::swap`] should be preferred.
307 /// * The two pointed-to values may overlap. If the values do overlap, then the
308 /// overlapping region of memory from `x` will be used. This is demonstrated
309 /// in the second example below.
313 /// Behavior is undefined if any of the following conditions are violated:
315 /// * Both `x` and `y` must be [valid] for both reads and writes.
317 /// * Both `x` and `y` must be properly aligned.
319 /// Note that even if `T` has size `0`, the pointers must be non-null and properly aligned.
321 /// [valid]: self#safety
325 /// Swapping two non-overlapping regions:
330 /// let mut array = [0, 1, 2, 3];
332 /// let x = array[0..].as_mut_ptr() as *mut [u32; 2]; // this is `array[0..2]`
333 /// let y = array[2..].as_mut_ptr() as *mut [u32; 2]; // this is `array[2..4]`
/// ptr::swap(x, y);
/// assert_eq!([2, 3, 0, 1], array);
341 /// Swapping two overlapping regions:
346 /// let mut array: [i32; 4] = [0, 1, 2, 3];
348 /// let array_ptr: *mut i32 = array.as_mut_ptr();
350 /// let x = array_ptr as *mut [i32; 3]; // this is `array[0..3]`
351 /// let y = unsafe { array_ptr.add(1) } as *mut [i32; 3]; // this is `array[1..4]`
/// ptr::swap(x, y);
/// // The indices `1..3` of the slice overlap between `x` and `y`.
/// // Reasonable results would be for them to be `[2, 3]`, so that indices `0..3` are
357 /// // `[1, 2, 3]` (matching `y` before the `swap`); or for them to be `[0, 1]`
358 /// // so that indices `1..4` are `[0, 1, 2]` (matching `x` before the `swap`).
359 /// // This implementation is defined to make the latter choice.
360 /// assert_eq!([1, 0, 1, 2], array);
364 #[stable(feature = "rust1", since = "1.0.0")]
365 #[rustc_const_unstable(feature = "const_swap", issue = "83163")]
366 pub const unsafe fn swap<T>(x: *mut T, y: *mut T) {
367 // Give ourselves some scratch space to work with.
368 // We do not have to worry about drops: `MaybeUninit` does nothing when dropped.
369 let mut tmp = MaybeUninit::<T>::uninit();
372 // SAFETY: the caller must guarantee that `x` and `y` are
373 // valid for writes and properly aligned. `tmp` cannot be
374 // overlapping either `x` or `y` because `tmp` was just allocated
375 // on the stack as a separate allocated object.
377 copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
378 copy(y, x, 1); // `x` and `y` may overlap
379 copy_nonoverlapping(tmp.as_ptr(), y, 1);
383 /// Swaps `count * size_of::<T>()` bytes between the two regions of memory
384 /// beginning at `x` and `y`. The two regions must *not* overlap.
388 /// Behavior is undefined if any of the following conditions are violated:
390 /// * Both `x` and `y` must be [valid] for both reads and writes of `count *
391 /// size_of::<T>()` bytes.
393 /// * Both `x` and `y` must be properly aligned.
395 /// * The region of memory beginning at `x` with a size of `count *
396 /// size_of::<T>()` bytes must *not* overlap with the region of memory
397 /// beginning at `y` with the same size.
399 /// Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`,
400 /// the pointers must be non-null and properly aligned.
402 /// [valid]: self#safety
411 /// let mut x = [1, 2, 3, 4];
412 /// let mut y = [7, 8, 9];
415 /// ptr::swap_nonoverlapping(x.as_mut_ptr(), y.as_mut_ptr(), 2);
418 /// assert_eq!(x, [7, 8, 3, 4]);
419 /// assert_eq!(y, [1, 2, 9]);
422 #[stable(feature = "swap_nonoverlapping", since = "1.27.0")]
423 #[rustc_const_unstable(feature = "const_swap", issue = "83163")]
424 pub const unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
    macro_rules! attempt_swap_as_chunks {
        ($ChunkTy:ty) => {
            if mem::align_of::<T>() >= mem::align_of::<$ChunkTy>()
                && mem::size_of::<T>() % mem::size_of::<$ChunkTy>() == 0
            {
                let x: *mut MaybeUninit<$ChunkTy> = x.cast();
                let y: *mut MaybeUninit<$ChunkTy> = y.cast();
                let count = count * (mem::size_of::<T>() / mem::size_of::<$ChunkTy>());
                // SAFETY: these are the same bytes that the caller promised were
                // ok, just typed as `MaybeUninit<ChunkTy>`s instead of as `T`s.
                // The `if` condition above ensures that we're not violating
                // alignment requirements, and that the division is exact so
                // that we don't lose any bytes off the end.
                return unsafe { swap_nonoverlapping_simple(x, y, count) };
            }
        };
    }
444 // SAFETY: the caller must guarantee that `x` and `y` are
445 // valid for writes and properly aligned.
447 assert_unsafe_precondition!(
448 is_aligned_and_not_null(x)
449 && is_aligned_and_not_null(y)
450 && is_nonoverlapping(x, y, count)
454 // NOTE(scottmcm) MIRI is disabled here as reading in smaller units is a
455 // pessimization for it. Also, if the type contains any unaligned pointers,
456 // copying those over multiple reads is difficult to support.
459 // Split up the slice into small power-of-two-sized chunks that LLVM is able
460 // to vectorize (unless it's a special type with more-than-pointer alignment,
461 // because we don't want to pessimize things like slices of SIMD vectors.)
462 if mem::align_of::<T>() <= mem::size_of::<usize>()
463 && (!mem::size_of::<T>().is_power_of_two()
464 || mem::size_of::<T>() > mem::size_of::<usize>() * 2)
466 attempt_swap_as_chunks!(usize);
467 attempt_swap_as_chunks!(u8);
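        // Illustrative example: a `[u8; 3]` (size 3, not a power of two) takes this path and
        // is swapped as three `u8` chunks, while a 32-byte type with `usize` alignment is
        // swapped as `usize`-sized chunks that LLVM can combine into wider vector operations.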
471 // SAFETY: Same preconditions as this function
472 unsafe { swap_nonoverlapping_simple(x, y, count) }
475 /// Same behaviour and safety conditions as [`swap_nonoverlapping`]
477 /// LLVM can vectorize this (at least it can for the power-of-two-sized types
478 /// `swap_nonoverlapping` tries to use) so no need to manually SIMD it.
480 #[rustc_const_unstable(feature = "const_swap", issue = "83163")]
481 const unsafe fn swap_nonoverlapping_simple<T>(x: *mut T, y: *mut T, count: usize) {
    let mut i = 0;
    while i < count {
        let x: &mut T =
            // SAFETY: By precondition, `i` is in-bounds because it's below `count`
            unsafe { &mut *x.add(i) };
        let y: &mut T =
            // SAFETY: By precondition, `i` is in-bounds because it's below `count`
            // and it's distinct from `x` since the ranges are non-overlapping
            unsafe { &mut *y.add(i) };
        mem::swap_simple(x, y);
        i += 1;
    }
/// Moves `src` into the location pointed to by `dst`, returning the previous `dst` value.
499 /// Neither value is dropped.
501 /// This function is semantically equivalent to [`mem::replace`] except that it
502 /// operates on raw pointers instead of references. When references are
503 /// available, [`mem::replace`] should be preferred.
507 /// Behavior is undefined if any of the following conditions are violated:
509 /// * `dst` must be [valid] for both reads and writes.
511 /// * `dst` must be properly aligned.
513 /// * `dst` must point to a properly initialized value of type `T`.
515 /// Note that even if `T` has size `0`, the pointer must be non-null and properly aligned.
517 /// [valid]: self#safety
524 /// let mut rust = vec!['b', 'u', 's', 't'];
/// // `mem::replace` would have the same effect without requiring the unsafe
/// // block.
/// let b = unsafe {
///     ptr::replace(&mut rust[0], 'r')
/// };
532 /// assert_eq!(b, 'b');
533 /// assert_eq!(rust, &['r', 'u', 's', 't']);
536 #[stable(feature = "rust1", since = "1.0.0")]
537 #[rustc_const_unstable(feature = "const_replace", issue = "83164")]
538 pub const unsafe fn replace<T>(dst: *mut T, mut src: T) -> T {
539 // SAFETY: the caller must guarantee that `dst` is valid to be
540 // cast to a mutable reference (valid for writes, aligned, initialized),
    // and cannot overlap `src` since `dst` must point to a distinct
    // allocated object.
544 assert_unsafe_precondition!(is_aligned_and_not_null(dst));
545 mem::swap(&mut *dst, &mut src); // cannot overlap
550 /// Reads the value from `src` without moving it. This leaves the
551 /// memory in `src` unchanged.
555 /// Behavior is undefined if any of the following conditions are violated:
557 /// * `src` must be [valid] for reads.
/// * `src` must be properly aligned. Use [`read_unaligned`] if this is not the
///   case.
562 /// * `src` must point to a properly initialized value of type `T`.
564 /// Note that even if `T` has size `0`, the pointer must be non-null and properly aligned.
572 /// let y = &x as *const i32;
575 /// assert_eq!(std::ptr::read(y), 12);
579 /// Manually implement [`mem::swap`]:
584 /// fn swap<T>(a: &mut T, b: &mut T) {
586 /// // Create a bitwise copy of the value at `a` in `tmp`.
587 /// let tmp = ptr::read(a);
589 /// // Exiting at this point (either by explicitly returning or by
590 /// // calling a function which panics) would cause the value in `tmp` to
591 /// // be dropped while the same value is still referenced by `a`. This
592 /// // could trigger undefined behavior if `T` is not `Copy`.
594 /// // Create a bitwise copy of the value at `b` in `a`.
595 /// // This is safe because mutable references cannot alias.
596 /// ptr::copy_nonoverlapping(b, a, 1);
598 /// // As above, exiting here could trigger undefined behavior because
599 /// // the same value is referenced by `a` and `b`.
601 /// // Move `tmp` into `b`.
602 /// ptr::write(b, tmp);
604 /// // `tmp` has been moved (`write` takes ownership of its second argument),
605 /// // so nothing is dropped implicitly here.
609 /// let mut foo = "foo".to_owned();
610 /// let mut bar = "bar".to_owned();
612 /// swap(&mut foo, &mut bar);
614 /// assert_eq!(foo, "bar");
615 /// assert_eq!(bar, "foo");
618 /// ## Ownership of the Returned Value
620 /// `read` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`].
621 /// If `T` is not [`Copy`], using both the returned value and the value at
622 /// `*src` can violate memory safety. Note that assigning to `*src` counts as a
623 /// use because it will attempt to drop the value at `*src`.
625 /// [`write()`] can be used to overwrite data without causing it to be dropped.
630 /// let mut s = String::from("foo");
632 /// // `s2` now points to the same underlying memory as `s`.
633 /// let mut s2: String = ptr::read(&s);
635 /// assert_eq!(s2, "foo");
637 /// // Assigning to `s2` causes its original value to be dropped. Beyond
/// // this point, `s` must no longer be used, as the underlying memory has
/// // been freed.
640 /// s2 = String::default();
641 /// assert_eq!(s2, "");
643 /// // Assigning to `s` would cause the old value to be dropped again,
644 /// // resulting in undefined behavior.
645 /// // s = String::from("bar"); // ERROR
647 /// // `ptr::write` can be used to overwrite a value without dropping it.
648 /// ptr::write(&mut s, String::from("bar"));
651 /// assert_eq!(s, "bar");
654 /// [valid]: self#safety
656 #[stable(feature = "rust1", since = "1.0.0")]
657 #[rustc_const_unstable(feature = "const_ptr_read", issue = "80377")]
658 pub const unsafe fn read<T>(src: *const T) -> T {
659 // We are calling the intrinsics directly to avoid function calls in the generated code
660 // as `intrinsics::copy_nonoverlapping` is a wrapper function.
661 extern "rust-intrinsic" {
662 #[rustc_const_unstable(feature = "const_intrinsic_copy", issue = "80697")]
663 fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);
666 let mut tmp = MaybeUninit::<T>::uninit();
667 // SAFETY: the caller must guarantee that `src` is valid for reads.
668 // `src` cannot overlap `tmp` because `tmp` was just allocated on
669 // the stack as a separate allocated object.
671 // Also, since we just wrote a valid value into `tmp`, it is guaranteed
672 // to be properly initialized.
        copy_nonoverlapping(src, tmp.as_mut_ptr(), 1);
        tmp.assume_init()
679 /// Reads the value from `src` without moving it. This leaves the
680 /// memory in `src` unchanged.
682 /// Unlike [`read`], `read_unaligned` works with unaligned pointers.
686 /// Behavior is undefined if any of the following conditions are violated:
688 /// * `src` must be [valid] for reads.
690 /// * `src` must point to a properly initialized value of type `T`.
692 /// Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of
693 /// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
694 /// value and the value at `*src` can [violate memory safety][read-ownership].
696 /// Note that even if `T` has size `0`, the pointer must be non-null.
698 /// [read-ownership]: read#ownership-of-the-returned-value
699 /// [valid]: self#safety
701 /// ## On `packed` structs
703 /// Attempting to create a raw pointer to an `unaligned` struct field with
704 /// an expression such as `&packed.unaligned as *const FieldType` creates an
705 /// intermediate unaligned reference before converting that to a raw pointer.
706 /// That this reference is temporary and immediately cast is inconsequential
707 /// as the compiler always expects references to be properly aligned.
708 /// As a result, using `&packed.unaligned as *const FieldType` causes immediate
709 /// *undefined behavior* in your program.
711 /// Instead you must use the [`ptr::addr_of!`](addr_of) macro to
/// create the pointer. You may use that returned pointer together with this
/// function.
715 /// An example of what not to do and how this relates to `read_unaligned` is:
718 /// #[repr(packed, C)]
724 /// let packed = Packed {
726 /// unaligned: 0x01020304,
729 /// // Take the address of a 32-bit integer which is not aligned.
730 /// // In contrast to `&packed.unaligned as *const _`, this has no undefined behavior.
731 /// let unaligned = std::ptr::addr_of!(packed.unaligned);
733 /// let v = unsafe { std::ptr::read_unaligned(unaligned) };
734 /// assert_eq!(v, 0x01020304);
737 /// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe however.
741 /// Read a usize value from a byte buffer:
746 /// fn read_usize(x: &[u8]) -> usize {
747 /// assert!(x.len() >= mem::size_of::<usize>());
749 /// let ptr = x.as_ptr() as *const usize;
751 /// unsafe { ptr.read_unaligned() }
755 #[stable(feature = "ptr_unaligned", since = "1.17.0")]
756 #[rustc_const_unstable(feature = "const_ptr_read", issue = "80377")]
757 pub const unsafe fn read_unaligned<T>(src: *const T) -> T {
758 let mut tmp = MaybeUninit::<T>::uninit();
759 // SAFETY: the caller must guarantee that `src` is valid for reads.
760 // `src` cannot overlap `tmp` because `tmp` was just allocated on
761 // the stack as a separate allocated object.
763 // Also, since we just wrote a valid value into `tmp`, it is guaranteed
764 // to be properly initialized.
    copy_nonoverlapping(src as *const u8, tmp.as_mut_ptr() as *mut u8, mem::size_of::<T>());
    tmp.assume_init()
771 /// Overwrites a memory location with the given value without reading or
772 /// dropping the old value.
774 /// `write` does not drop the contents of `dst`. This is safe, but it could leak
775 /// allocations or resources, so care should be taken not to overwrite an object
776 /// that should be dropped.
778 /// Additionally, it does not drop `src`. Semantically, `src` is moved into the
779 /// location pointed to by `dst`.
781 /// This is appropriate for initializing uninitialized memory, or overwriting
782 /// memory that has previously been [`read`] from.
786 /// Behavior is undefined if any of the following conditions are violated:
788 /// * `dst` must be [valid] for writes.
/// * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the
///   case.
793 /// Note that even if `T` has size `0`, the pointer must be non-null and properly aligned.
795 /// [valid]: self#safety
803 /// let y = &mut x as *mut i32;
807 /// std::ptr::write(y, z);
808 /// assert_eq!(std::ptr::read(y), 12);
812 /// Manually implement [`mem::swap`]:
817 /// fn swap<T>(a: &mut T, b: &mut T) {
819 /// // Create a bitwise copy of the value at `a` in `tmp`.
820 /// let tmp = ptr::read(a);
822 /// // Exiting at this point (either by explicitly returning or by
823 /// // calling a function which panics) would cause the value in `tmp` to
824 /// // be dropped while the same value is still referenced by `a`. This
825 /// // could trigger undefined behavior if `T` is not `Copy`.
827 /// // Create a bitwise copy of the value at `b` in `a`.
828 /// // This is safe because mutable references cannot alias.
829 /// ptr::copy_nonoverlapping(b, a, 1);
831 /// // As above, exiting here could trigger undefined behavior because
832 /// // the same value is referenced by `a` and `b`.
834 /// // Move `tmp` into `b`.
835 /// ptr::write(b, tmp);
837 /// // `tmp` has been moved (`write` takes ownership of its second argument),
838 /// // so nothing is dropped implicitly here.
842 /// let mut foo = "foo".to_owned();
843 /// let mut bar = "bar".to_owned();
845 /// swap(&mut foo, &mut bar);
847 /// assert_eq!(foo, "bar");
848 /// assert_eq!(bar, "foo");
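///
/// Initializing memory obtained as a `MaybeUninit` (a sketch; any uninitialized storage
/// works the same way):
///
/// ```
/// use std::{mem::MaybeUninit, ptr};
///
/// let mut slot = MaybeUninit::<String>::uninit();
/// // Nothing is read or dropped at the destination; the slot is simply filled.
/// unsafe { ptr::write(slot.as_mut_ptr(), String::from("init")) };
/// let s = unsafe { slot.assume_init() };
/// assert_eq!(s, "init");
/// ```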
851 #[stable(feature = "rust1", since = "1.0.0")]
852 #[rustc_const_unstable(feature = "const_ptr_write", issue = "86302")]
853 pub const unsafe fn write<T>(dst: *mut T, src: T) {
854 // We are calling the intrinsics directly to avoid function calls in the generated code
855 // as `intrinsics::copy_nonoverlapping` is a wrapper function.
856 extern "rust-intrinsic" {
857 #[rustc_const_unstable(feature = "const_intrinsic_copy", issue = "80697")]
858 fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);
861 // SAFETY: the caller must guarantee that `dst` is valid for writes.
862 // `dst` cannot overlap `src` because the caller has mutable access
863 // to `dst` while `src` is owned by this function.
865 copy_nonoverlapping(&src as *const T, dst, 1);
866 intrinsics::forget(src);
870 /// Overwrites a memory location with the given value without reading or
871 /// dropping the old value.
873 /// Unlike [`write()`], the pointer may be unaligned.
875 /// `write_unaligned` does not drop the contents of `dst`. This is safe, but it
876 /// could leak allocations or resources, so care should be taken not to overwrite
877 /// an object that should be dropped.
879 /// Additionally, it does not drop `src`. Semantically, `src` is moved into the
880 /// location pointed to by `dst`.
882 /// This is appropriate for initializing uninitialized memory, or overwriting
883 /// memory that has previously been read with [`read_unaligned`].
887 /// Behavior is undefined if any of the following conditions are violated:
889 /// * `dst` must be [valid] for writes.
891 /// Note that even if `T` has size `0`, the pointer must be non-null.
893 /// [valid]: self#safety
895 /// ## On `packed` structs
897 /// Attempting to create a raw pointer to an `unaligned` struct field with
898 /// an expression such as `&packed.unaligned as *const FieldType` creates an
899 /// intermediate unaligned reference before converting that to a raw pointer.
900 /// That this reference is temporary and immediately cast is inconsequential
901 /// as the compiler always expects references to be properly aligned.
902 /// As a result, using `&packed.unaligned as *const FieldType` causes immediate
903 /// *undefined behavior* in your program.
905 /// Instead you must use the [`ptr::addr_of_mut!`](addr_of_mut)
/// macro to create the pointer. You may use that returned pointer together with
/// this function.
909 /// An example of how to do it and how this relates to `write_unaligned` is:
912 /// #[repr(packed, C)]
918 /// let mut packed: Packed = unsafe { std::mem::zeroed() };
920 /// // Take the address of a 32-bit integer which is not aligned.
921 /// // In contrast to `&packed.unaligned as *mut _`, this has no undefined behavior.
922 /// let unaligned = std::ptr::addr_of_mut!(packed.unaligned);
924 /// unsafe { std::ptr::write_unaligned(unaligned, 42) };
926 /// assert_eq!({packed.unaligned}, 42); // `{...}` forces copying the field instead of creating a reference.
929 /// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe however
930 /// (as can be seen in the `assert_eq!` above).
934 /// Write a usize value to a byte buffer:
939 /// fn write_usize(x: &mut [u8], val: usize) {
940 /// assert!(x.len() >= mem::size_of::<usize>());
942 /// let ptr = x.as_mut_ptr() as *mut usize;
944 /// unsafe { ptr.write_unaligned(val) }
948 #[stable(feature = "ptr_unaligned", since = "1.17.0")]
949 #[rustc_const_unstable(feature = "const_ptr_write", issue = "86302")]
950 pub const unsafe fn write_unaligned<T>(dst: *mut T, src: T) {
951 // SAFETY: the caller must guarantee that `dst` is valid for writes.
952 // `dst` cannot overlap `src` because the caller has mutable access
953 // to `dst` while `src` is owned by this function.
955 copy_nonoverlapping(&src as *const T as *const u8, dst as *mut u8, mem::size_of::<T>());
956 // We are calling the intrinsic directly to avoid function calls in the generated code.
957 intrinsics::forget(src);
961 /// Performs a volatile read of the value from `src` without moving it. This
962 /// leaves the memory in `src` unchanged.
964 /// Volatile operations are intended to act on I/O memory, and are guaranteed
/// to not be elided or reordered by the compiler across other volatile
/// operations.
970 /// Rust does not currently have a rigorously and formally defined memory model,
971 /// so the precise semantics of what "volatile" means here is subject to change
972 /// over time. That being said, the semantics will almost always end up pretty
973 /// similar to [C11's definition of volatile][c11].
975 /// The compiler shouldn't change the relative order or number of volatile
976 /// memory operations. However, volatile memory operations on zero-sized types
977 /// (e.g., if a zero-sized type is passed to `read_volatile`) are noops
978 /// and may be ignored.
980 /// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
984 /// Behavior is undefined if any of the following conditions are violated:
986 /// * `src` must be [valid] for reads.
988 /// * `src` must be properly aligned.
990 /// * `src` must point to a properly initialized value of type `T`.
992 /// Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of
993 /// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
994 /// value and the value at `*src` can [violate memory safety][read-ownership].
/// However, storing non-[`Copy`] types in volatile memory is almost certainly
/// incorrect.
998 /// Note that even if `T` has size `0`, the pointer must be non-null and properly aligned.
1000 /// [valid]: self#safety
1001 /// [read-ownership]: read#ownership-of-the-returned-value
1003 /// Just like in C, whether an operation is volatile has no bearing whatsoever
1004 /// on questions involving concurrent access from multiple threads. Volatile
1005 /// accesses behave exactly like non-atomic accesses in that regard. In particular,
1006 /// a race between a `read_volatile` and any write operation to the same location
1007 /// is undefined behavior.
1015 /// let y = &x as *const i32;
1018 /// assert_eq!(std::ptr::read_volatile(y), 12);
1022 #[stable(feature = "volatile", since = "1.9.0")]
1023 pub unsafe fn read_volatile<T>(src: *const T) -> T {
1024 // SAFETY: the caller must uphold the safety contract for `volatile_load`.
1026 assert_unsafe_precondition!(is_aligned_and_not_null(src));
1027 intrinsics::volatile_load(src)
1031 /// Performs a volatile write of a memory location with the given value without
1032 /// reading or dropping the old value.
1034 /// Volatile operations are intended to act on I/O memory, and are guaranteed
/// to not be elided or reordered by the compiler across other volatile
/// operations.
1038 /// `write_volatile` does not drop the contents of `dst`. This is safe, but it
1039 /// could leak allocations or resources, so care should be taken not to overwrite
1040 /// an object that should be dropped.
1042 /// Additionally, it does not drop `src`. Semantically, `src` is moved into the
1043 /// location pointed to by `dst`.
1047 /// Rust does not currently have a rigorously and formally defined memory model,
1048 /// so the precise semantics of what "volatile" means here is subject to change
1049 /// over time. That being said, the semantics will almost always end up pretty
1050 /// similar to [C11's definition of volatile][c11].
1052 /// The compiler shouldn't change the relative order or number of volatile
1053 /// memory operations. However, volatile memory operations on zero-sized types
1054 /// (e.g., if a zero-sized type is passed to `write_volatile`) are noops
1055 /// and may be ignored.
1057 /// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
1061 /// Behavior is undefined if any of the following conditions are violated:
1063 /// * `dst` must be [valid] for writes.
1065 /// * `dst` must be properly aligned.
1067 /// Note that even if `T` has size `0`, the pointer must be non-null and properly aligned.
1069 /// [valid]: self#safety
1071 /// Just like in C, whether an operation is volatile has no bearing whatsoever
1072 /// on questions involving concurrent access from multiple threads. Volatile
1073 /// accesses behave exactly like non-atomic accesses in that regard. In particular,
1074 /// a race between a `write_volatile` and any other operation (reading or writing)
1075 /// on the same location is undefined behavior.
1083 /// let y = &mut x as *mut i32;
1087 /// std::ptr::write_volatile(y, z);
1088 /// assert_eq!(std::ptr::read_volatile(y), 12);
1092 #[stable(feature = "volatile", since = "1.9.0")]
1093 pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
1094 // SAFETY: the caller must uphold the safety contract for `volatile_store`.
1096 assert_unsafe_precondition!(is_aligned_and_not_null(dst));
1097 intrinsics::volatile_store(dst, src);
1101 /// Align pointer `p`.
/// Calculates the offset (in terms of elements of size `stride`) that has to be applied
/// to pointer `p` so that it becomes aligned to `a`.
1106 /// Note: This implementation has been carefully tailored to not panic. It is UB for this to panic.
/// The only real change that can be made here is change of `INV_TABLE_MOD_16` and associated
/// constants.
1110 /// If we ever decide to make it possible to call the intrinsic with `a` that is not a
1111 /// power-of-two, it will probably be more prudent to just change to a naive implementation rather
1112 /// than trying to adapt this to accommodate that change.
1114 /// Any questions go to @nagisa.
1115 #[lang = "align_offset"]
1116 pub(crate) unsafe fn align_offset<T: Sized>(p: *const T, a: usize) -> usize {
1117 // FIXME(#75598): Direct use of these intrinsics improves codegen significantly at opt-level <=
1118 // 1, where the method versions of these operations are not inlined.
    use intrinsics::{
        unchecked_shl, unchecked_shr, unchecked_sub, wrapping_add, wrapping_mul, wrapping_sub,
    };
1123 /// Calculate multiplicative modular inverse of `x` modulo `m`.
    /// This implementation is tailored for `align_offset` and has the following preconditions:
1127 /// * `m` is a power-of-two;
1128 /// * `x < m`; (if `x ≥ m`, pass in `x % m` instead)
1130 /// Implementation of this function shall not panic. Ever.
1132 unsafe fn mod_inv(x: usize, m: usize) -> usize {
1133 /// Multiplicative modular inverse table modulo 2⁴ = 16.
    /// Note that this table does not contain values where the inverse does not exist (i.e.,
    /// for `0⁻¹ mod 16`, `2⁻¹ mod 16`, etc.)
1137 const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
1138 /// Modulo for which the `INV_TABLE_MOD_16` is intended.
1139 const INV_TABLE_MOD: usize = 16;
1141 const INV_TABLE_MOD_SQUARED: usize = INV_TABLE_MOD * INV_TABLE_MOD;
1143 let table_inverse = INV_TABLE_MOD_16[(x & (INV_TABLE_MOD - 1)) >> 1] as usize;
1144 // SAFETY: `m` is required to be a power-of-two, hence non-zero.
1145 let m_minus_one = unsafe { unchecked_sub(m, 1) };
1146 if m <= INV_TABLE_MOD {
1147 table_inverse & m_minus_one
1149 // We iterate "up" using the following formula:
1151 // $$ xy ≡ 1 (mod 2ⁿ) → xy (2 - xy) ≡ 1 (mod 2²ⁿ) $$
1153 // until 2²ⁿ ≥ m. Then we can reduce to our desired `m` by taking the result `mod m`.
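            // A worked example (illustrative only): for `x = 7` the table gives
            // `7⁻¹ ≡ 7 (mod 16)` since `7 * 7 = 49 ≡ 1 (mod 16)`. One iteration lifts this
            // to `mod 256`: `7 * (2 - 7 * 7) = -329 ≡ 183 (mod 256)`, and indeed
            // `7 * 183 = 1281 ≡ 1 (mod 256)`.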
1154 let mut inverse = table_inverse;
1155 let mut going_mod = INV_TABLE_MOD_SQUARED;
1157 // y = y * (2 - xy) mod n
                // Note that we use wrapping operations here intentionally – the original formula
                // uses e.g., subtraction `mod n`. It is entirely fine to do them `mod
                // usize::MAX` instead, because we take the result `mod n` at the end
                // anyway.
1163 inverse = wrapping_mul(inverse, wrapping_sub(2usize, wrapping_mul(x, inverse)));
                if going_mod >= m {
                    return inverse & m_minus_one;
                }
1167 going_mod = wrapping_mul(going_mod, going_mod);
1172 let stride = mem::size_of::<T>();
1173 // SAFETY: `a` is a power-of-two, therefore non-zero.
1174 let a_minus_one = unsafe { unchecked_sub(a, 1) };
    if stride == 1 {
        // The `stride == 1` case can be computed more simply through `-p (mod a)`, but doing so
        // inhibits LLVM's ability to select instructions like `lea`. Instead we compute
        //
        //    round_up_to_next_alignment(p, a) - p
        //
        // which distributes operations around the load-bearing, but pessimizing `and` sufficiently
        // for LLVM to be able to utilize the various optimizations it knows about.
        return wrapping_sub(
            wrapping_add(p as usize, a_minus_one) & wrapping_sub(0, a),
            p as usize,
        );
    }

    let pmoda = p as usize & a_minus_one;
    if pmoda == 0 {
        // Already aligned. Yay!
        return 0;
    } else if stride == 0 {
        // If the pointer is not aligned, and the element is zero-sized, then no amount of
        // elements will ever align the pointer.
        return usize::MAX;
    }
1199 let smoda = stride & a_minus_one;
1200 // SAFETY: a is power-of-two hence non-zero. stride == 0 case is handled above.
1201 let gcdpow = unsafe { intrinsics::cttz_nonzero(stride).min(intrinsics::cttz_nonzero(a)) };
1202 // SAFETY: gcdpow has an upper-bound that’s at most the number of bits in a usize.
1203 let gcd = unsafe { unchecked_shl(1usize, gcdpow) };
1205 // SAFETY: gcd is always greater or equal to 1.
1206 if p as usize & unsafe { unchecked_sub(gcd, 1) } == 0 {
1207 // This branch solves for the following linear congruence equation:
1209 // ` p + so = 0 mod a `
1211 // `p` here is the pointer value, `s` - stride of `T`, `o` offset in `T`s, and `a` - the
1212 // requested alignment.
1214 // With `g = gcd(a, s)`, and the above condition asserting that `p` is also divisible by
1215 // `g`, we can denote `a' = a/g`, `s' = s/g`, `p' = p/g`, then this becomes equivalent to:
1217 // ` p' + s'o = 0 mod a' `
1218 // ` o = (a' - (p' mod a')) * (s'^-1 mod a') `
1220 // The first term is "the relative alignment of `p` to `a`" (divided by the `g`), the second
        // term is "how does incrementing `p` by `s` bytes change the relative alignment of `p`" (again
        // divided by `g`).
        // Division by `g` is necessary to make the inverse well formed if `a` and `s` are not
        // co-prime.
1226 // Furthermore, the result produced by this solution is not "minimal", so it is necessary
1227 // to take the result `o mod lcm(s, a)`. We can replace `lcm(s, a)` with just a `a'`.
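        // A worked example (illustrative only): for `p = 10`, `stride = 6` and `a = 8` we get
        // `pmoda = 2`, `smoda = 6`, `gcdpow = 1` and `gcd = 2`, which divides `p`. Then
        // `a2 = 4`, `s2 = 3`, `minusp2 = 4 - 1 = 3`, and the result is
        // `3 * mod_inv(3, 4) & 3 = 9 & 3 = 1`: one element (6 bytes) ahead, `10 + 6 = 16` is
        // indeed 8-aligned.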
        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
        // `a`.
1231 let a2 = unsafe { unchecked_shr(a, gcdpow) };
1232 // SAFETY: `a2` is non-zero. Shifting `a` by `gcdpow` cannot shift out any of the set bits
1233 // in `a` (of which it has exactly one).
1234 let a2minus1 = unsafe { unchecked_sub(a2, 1) };
        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
        // `a`.
1237 let s2 = unsafe { unchecked_shr(smoda, gcdpow) };
1238 // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
1239 // `a`. Furthermore, the subtraction cannot overflow, because `a2 = a >> gcdpow` will
1240 // always be strictly greater than `(p % a) >> gcdpow`.
1241 let minusp2 = unsafe { unchecked_sub(a2, unchecked_shr(pmoda, gcdpow)) };
1242 // SAFETY: `a2` is a power-of-two, as proven above. `s2` is strictly less than `a2`
1243 // because `(s % a) >> gcdpow` is strictly less than `a >> gcdpow`.
1244 return wrapping_mul(minusp2, unsafe { mod_inv(s2, a2) }) & a2minus1;
1247 // Cannot be aligned at all.
1251 /// Compares raw pointers for equality.
1253 /// This is the same as using the `==` operator, but less generic:
1254 /// the arguments have to be `*const T` raw pointers,
1255 /// not anything that implements `PartialEq`.
1257 /// This can be used to compare `&T` references (which coerce to `*const T` implicitly)
1258 /// by their address rather than comparing the values they point to
1259 /// (which is what the `PartialEq for &T` implementation does).
1267 /// let other_five = 5;
1268 /// let five_ref = &five;
1269 /// let same_five_ref = &five;
1270 /// let other_five_ref = &other_five;
1272 /// assert!(five_ref == same_five_ref);
1273 /// assert!(ptr::eq(five_ref, same_five_ref));
1275 /// assert!(five_ref == other_five_ref);
1276 /// assert!(!ptr::eq(five_ref, other_five_ref));
1279 /// Slices are also compared by their length (fat pointers):
1282 /// let a = [1, 2, 3];
1283 /// assert!(std::ptr::eq(&a[..3], &a[..3]));
1284 /// assert!(!std::ptr::eq(&a[..2], &a[..3]));
1285 /// assert!(!std::ptr::eq(&a[0..2], &a[1..3]));
1288 /// Traits are also compared by their implementation:
1291 /// #[repr(transparent)]
1292 /// struct Wrapper { member: i32 }
1295 /// impl Trait for Wrapper {}
1296 /// impl Trait for i32 {}
1298 /// let wrapper = Wrapper { member: 10 };
1300 /// // Pointers have equal addresses.
1301 /// assert!(std::ptr::eq(
1302 /// &wrapper as *const Wrapper as *const u8,
1303 /// &wrapper.member as *const i32 as *const u8
1306 /// // Objects have equal addresses, but `Trait` has different implementations.
1307 /// assert!(!std::ptr::eq(
1308 /// &wrapper as &dyn Trait,
1309 /// &wrapper.member as &dyn Trait,
1311 /// assert!(!std::ptr::eq(
1312 /// &wrapper as &dyn Trait as *const dyn Trait,
1313 /// &wrapper.member as &dyn Trait as *const dyn Trait,
1316 /// // Converting the reference to a `*const u8` compares by address.
1317 /// assert!(std::ptr::eq(
1318 /// &wrapper as &dyn Trait as *const dyn Trait as *const u8,
1319 /// &wrapper.member as &dyn Trait as *const dyn Trait as *const u8,
1322 #[stable(feature = "ptr_eq", since = "1.17.0")]
pub fn eq<T: ?Sized>(a: *const T, b: *const T) -> bool {
    a == b
}
1328 /// Hash a raw pointer.
1330 /// This can be used to hash a `&T` reference (which coerces to `*const T` implicitly)
1331 /// by its address rather than the value it points to
1332 /// (which is what the `Hash for &T` implementation does).
1337 /// use std::collections::hash_map::DefaultHasher;
1338 /// use std::hash::{Hash, Hasher};
1342 /// let five_ref = &five;
1344 /// let mut hasher = DefaultHasher::new();
1345 /// ptr::hash(five_ref, &mut hasher);
1346 /// let actual = hasher.finish();
1348 /// let mut hasher = DefaultHasher::new();
1349 /// (five_ref as *const i32).hash(&mut hasher);
1350 /// let expected = hasher.finish();
1352 /// assert_eq!(actual, expected);
1354 #[stable(feature = "ptr_hash", since = "1.35.0")]
1355 pub fn hash<T: ?Sized, S: hash::Hasher>(hashee: *const T, into: &mut S) {
    use crate::hash::Hash;
    hashee.hash(into);
}
1360 // Impls for function pointers
1361 macro_rules! fnptr_impls_safety_abi {
1362 ($FnTy: ty, $($Arg: ident),*) => {
1363 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1364 impl<Ret, $($Arg),*> PartialEq for $FnTy {
1366 fn eq(&self, other: &Self) -> bool {
1367 *self as usize == *other as usize
1371 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1372 impl<Ret, $($Arg),*> Eq for $FnTy {}
1374 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1375 impl<Ret, $($Arg),*> PartialOrd for $FnTy {
1377 fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
1378 (*self as usize).partial_cmp(&(*other as usize))
1382 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1383 impl<Ret, $($Arg),*> Ord for $FnTy {
1385 fn cmp(&self, other: &Self) -> Ordering {
1386 (*self as usize).cmp(&(*other as usize))
1390 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1391 impl<Ret, $($Arg),*> hash::Hash for $FnTy {
1392 fn hash<HH: hash::Hasher>(&self, state: &mut HH) {
1393 state.write_usize(*self as usize)
1397 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1398 impl<Ret, $($Arg),*> fmt::Pointer for $FnTy {
1399 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1400 // HACK: The intermediate cast as usize is required for AVR
1401 // so that the address space of the source function pointer
1402 // is preserved in the final function pointer.
1404 // https://github.com/avr-rust/rust/issues/143
1405 fmt::Pointer::fmt(&(*self as usize as *const ()), f)
1409 #[stable(feature = "fnptr_impls", since = "1.4.0")]
1410 impl<Ret, $($Arg),*> fmt::Debug for $FnTy {
1411 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1412 // HACK: The intermediate cast as usize is required for AVR
1413 // so that the address space of the source function pointer
1414 // is preserved in the final function pointer.
1416 // https://github.com/avr-rust/rust/issues/143
1417 fmt::Pointer::fmt(&(*self as usize as *const ()), f)
1423 macro_rules! fnptr_impls_args {
1424 ($($Arg: ident),+) => {
1425 fnptr_impls_safety_abi! { extern "Rust" fn($($Arg),+) -> Ret, $($Arg),+ }
1426 fnptr_impls_safety_abi! { extern "C" fn($($Arg),+) -> Ret, $($Arg),+ }
1427 fnptr_impls_safety_abi! { extern "C" fn($($Arg),+ , ...) -> Ret, $($Arg),+ }
1428 fnptr_impls_safety_abi! { unsafe extern "Rust" fn($($Arg),+) -> Ret, $($Arg),+ }
1429 fnptr_impls_safety_abi! { unsafe extern "C" fn($($Arg),+) -> Ret, $($Arg),+ }
1430 fnptr_impls_safety_abi! { unsafe extern "C" fn($($Arg),+ , ...) -> Ret, $($Arg),+ }
1433 // No variadic functions with 0 parameters
1434 fnptr_impls_safety_abi! { extern "Rust" fn() -> Ret, }
1435 fnptr_impls_safety_abi! { extern "C" fn() -> Ret, }
1436 fnptr_impls_safety_abi! { unsafe extern "Rust" fn() -> Ret, }
1437 fnptr_impls_safety_abi! { unsafe extern "C" fn() -> Ret, }
1441 fnptr_impls_args! {}
1442 fnptr_impls_args! { A }
1443 fnptr_impls_args! { A, B }
1444 fnptr_impls_args! { A, B, C }
1445 fnptr_impls_args! { A, B, C, D }
1446 fnptr_impls_args! { A, B, C, D, E }
1447 fnptr_impls_args! { A, B, C, D, E, F }
1448 fnptr_impls_args! { A, B, C, D, E, F, G }
1449 fnptr_impls_args! { A, B, C, D, E, F, G, H }
1450 fnptr_impls_args! { A, B, C, D, E, F, G, H, I }
1451 fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J }
1452 fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J, K }
1453 fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J, K, L }
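
// For example (illustrative), the `fnptr_impls_args! { A }` invocation above generates
// `PartialEq`, `Eq`, `PartialOrd`, `Ord`, `Hash`, `fmt::Pointer` and `fmt::Debug` impls for
// `fn(A) -> Ret`, `extern "C" fn(A) -> Ret`, `extern "C" fn(A, ...) -> Ret` and their
// `unsafe` counterparts.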
1455 /// Create a `const` raw pointer to a place, without creating an intermediate reference.
1457 /// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
1458 /// and points to initialized data. For cases where those requirements do not hold,
1459 /// raw pointers should be used instead. However, `&expr as *const _` creates a reference
1460 /// before casting it to a raw pointer, and that reference is subject to the same rules
1461 /// as all other references. This macro can create a raw pointer *without* creating
1462 /// a reference first.
1464 /// Note, however, that the `expr` in `addr_of!(expr)` is still subject to all
1465 /// the usual rules. In particular, `addr_of!(*ptr::null())` is Undefined
1466 /// Behavior because it dereferences a null pointer.
1479 /// let packed = Packed { f1: 1, f2: 2 };
1480 /// // `&packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
1481 /// let raw_f2 = ptr::addr_of!(packed.f2);
1482 /// assert_eq!(unsafe { raw_f2.read_unaligned() }, 2);
/// See [`addr_of_mut`] for how to create a pointer to uninitialized data.
1486 /// Doing that with `addr_of` would not make much sense since one could only
1487 /// read the data, and that would be Undefined Behavior.
1488 #[stable(feature = "raw_ref_macros", since = "1.51.0")]
1489 #[rustc_macro_transparency = "semitransparent"]
1490 #[allow_internal_unstable(raw_ref_op)]
1491 pub macro addr_of($place:expr) {
1495 /// Create a `mut` raw pointer to a place, without creating an intermediate reference.
1497 /// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
1498 /// and points to initialized data. For cases where those requirements do not hold,
1499 /// raw pointers should be used instead. However, `&mut expr as *mut _` creates a reference
1500 /// before casting it to a raw pointer, and that reference is subject to the same rules
1501 /// as all other references. This macro can create a raw pointer *without* creating
1502 /// a reference first.
1504 /// Note, however, that the `expr` in `addr_of_mut!(expr)` is still subject to all
1505 /// the usual rules. In particular, `addr_of_mut!(*ptr::null_mut())` is Undefined
1506 /// Behavior because it dereferences a null pointer.
1510 /// **Creating a pointer to unaligned data:**
1521 /// let mut packed = Packed { f1: 1, f2: 2 };
1522 /// // `&mut packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
1523 /// let raw_f2 = ptr::addr_of_mut!(packed.f2);
1524 /// unsafe { raw_f2.write_unaligned(42); }
1525 /// assert_eq!({packed.f2}, 42); // `{...}` forces copying the field instead of creating a reference.
1528 /// **Creating a pointer to uninitialized data:**
1531 /// use std::{ptr, mem::MaybeUninit};
1537 /// let mut uninit = MaybeUninit::<Demo>::uninit();
1538 /// // `&uninit.as_mut().field` would create a reference to an uninitialized `bool`,
1539 /// // and thus be Undefined Behavior!
1540 /// let f1_ptr = unsafe { ptr::addr_of_mut!((*uninit.as_mut_ptr()).field) };
1541 /// unsafe { f1_ptr.write(true); }
1542 /// let init = unsafe { uninit.assume_init() };
1544 #[stable(feature = "raw_ref_macros", since = "1.51.0")]
1545 #[rustc_macro_transparency = "semitransparent"]
1546 #[allow_internal_unstable(raw_ref_op)]
1547 pub macro addr_of_mut($place:expr) {