# email addresses.
#
-Aaron Todd <github@opprobrio.us>
Aaron Power <theaaronepower@gmail.com> Erin Power <xampprocky@gmail.com>
+Aaron Todd <github@opprobrio.us>
Abhishek Chanda <abhishek.becs@gmail.com> Abhishek Chanda <abhishek@cloudscaling.com>
Adolfo Ochagavía <aochagavia92@gmail.com>
Adrien Tétar <adri-from-59@hotmail.fr>
Ariel Ben-Yehuda <arielb1@mail.tau.ac.il> arielb1 <arielb1@mail.tau.ac.il>
Austin Seipp <mad.one@gmail.com> <as@hacks.yi.org>
Aydin Kim <ladinjin@hanmail.net> aydin.kim <aydin.kim@samsung.com>
-Bastian Kauschke <bastian_kauschke@hotmail.de>
Barosl Lee <vcs@barosl.com> Barosl LEE <github@barosl.com>
+Bastian Kauschke <bastian_kauschke@hotmail.de>
Ben Alpert <ben@benalpert.com> <spicyjalapeno@gmail.com>
Ben Sago <ogham@users.noreply.github.com> Ben S <ogham@bsago.me>
Ben Sago <ogham@users.noreply.github.com> Ben S <ogham@users.noreply.github.com>
Brian Dawn <brian.t.dawn@gmail.com>
Brian Leibig <brian@brianleibig.com> Brian Leibig <brian.leibig@gmail.com>
Carl-Anton Ingmarsson <mail@carlanton.se> <ca.ingmarsson@gmail.com>
+Carol (Nichols || Goulding) <carol.nichols@gmail.com> <193874+carols10cents@users.noreply.github.com>
+Carol (Nichols || Goulding) <carol.nichols@gmail.com> <carol.nichols@gmail.com>
Carol (Nichols || Goulding) <carol.nichols@gmail.com> <cnichols@thinkthroughmath.com>
-Carol (Nichols || Goulding) <carol.nichols@gmail.com> Carol Nichols <carol.nichols@gmail.com>
Carol Willing <carolcode@willingconsulting.com>
Chris C Cerami <chrisccerami@users.noreply.github.com> Chris C Cerami <chrisccerami@gmail.com>
Chris Pressey <cpressey@gmail.com>
Chris Thorn <chris@thorn.co> Chris Thorn <thorn@thoughtbot.com>
Chris Vittal <christopher.vittal@gmail.com> Christopher Vittal <christopher.vittal@gmail.com>
-Christian Poveda <christianpoveda@protonmail.com> <z1mvader@protonmail.com>
Christian Poveda <christianpoveda@protonmail.com> <cn.poveda.ruiz@gmail.com>
+Christian Poveda <christianpoveda@protonmail.com> <z1mvader@protonmail.com>
Clark Gaebel <cg.wowus.cg@gmail.com> <cgaebel@mozilla.com>
Clinton Ryan <clint.ryan3@gmail.com>
Corey Richardson <corey@octayn.net> Elaine "See More" Nemo <corey@octayn.net>
Cyryl Płotnicki <cyplo@cyplo.net>
Damien Schoof <damien.schoof@gmail.com>
-Daniel Ramos <dan@daramos.com>
Daniel J Rollins <drollins@financialforce.com>
+Daniel Ramos <dan@daramos.com>
David Klein <david.klein@baesystemsdetica.com>
David Manescu <david.manescu@gmail.com> <dman2626@uni.sydney.edu.au>
David Ross <daboross@daboross.net>
Dylan Braithwaite <dylanbraithwaite1@gmail.com> <mail@dylanb.me>
Dzmitry Malyshau <kvarkus@gmail.com>
E. Dunham <edunham@mozilla.com> edunham <edunham@mozilla.com>
+Eduard-Mihai Burtescu <edy.burt@gmail.com>
Eduardo Bautista <me@eduardobautista.com> <=>
Eduardo Bautista <me@eduardobautista.com> <mail@eduardobautista.com>
-Eduard-Mihai Burtescu <edy.burt@gmail.com>
Elliott Slaughter <elliottslaughter@gmail.com> <eslaughter@mozilla.com>
Elly Fong-Jones <elly@leptoquark.net>
Eric Holk <eric.holk@gmail.com> <eholk@cs.indiana.edu>
Eric Holmes <eric@ejholmes.net>
Eric Reed <ecreed@cs.washington.edu> <ereed@mozilla.com>
Erick Tryzelaar <erick.tryzelaar@gmail.com> <etryzelaar@iqt.org>
-Esteban Küber <esteban@kuber.com.ar> <estebank@users.noreply.github.com>
Esteban Küber <esteban@kuber.com.ar> <esteban@commure.com>
+Esteban Küber <esteban@kuber.com.ar> <estebank@users.noreply.github.com>
Esteban Küber <esteban@kuber.com.ar> <github@kuber.com.ar>
Evgeny Sologubov
Falco Hirschenberger <falco.hirschenberger@gmail.com> <hirschen@itwm.fhg.de>
Ilyong Cho <ilyoan@gmail.com>
Ivan Ivaschenko <defuz.net@gmail.com>
J. J. Weber <jjweber@gmail.com>
+Jakub Adam Wieczorek <jakub.adam.wieczorek@gmail.com> <jakub.bukaj@yahoo.com>
Jakub Adam Wieczorek <jakub.adam.wieczorek@gmail.com> <jakub@jakub.cc>
Jakub Adam Wieczorek <jakub.adam.wieczorek@gmail.com> <jakubw@jakubw.net>
-Jakub Adam Wieczorek <jakub.adam.wieczorek@gmail.com> <jakub.bukaj@yahoo.com>
James Deng <cnjamesdeng@gmail.com> <cnJamesDeng@gmail.com>
James Miller <bladeon@gmail.com> <james@aatch.net>
James Perry <james.austin.perry@gmail.com>
Jihyun Yu <j.yu@navercorp.com> <yjh0502@gmail.com>
Jihyun Yu <j.yu@navercorp.com> jihyun <jihyun@nablecomm.com>
Jihyun Yu <j.yu@navercorp.com> Jihyun Yu <jihyun@nclab.kaist.ac.kr>
+João Oliveira <hello@jxs.pt> joaoxsouls <joaoxsouls@gmail.com>
Johann Hofmann <git@johann-hofmann.com> Johann <git@johann-hofmann.com>
John Clements <clements@racket-lang.org> <clements@brinckerhoff.org>
John Hodge <acessdev@gmail.com> John Hodge <tpg@mutabah.net>
Jonathan Turner <probata@hotmail.com>
Jorge Aparicio <japaric@linux.com> <japaricious@gmail.com>
Joseph Martin <pythoner6@gmail.com>
-João Oliveira <hello@jxs.pt> joaoxsouls <joaoxsouls@gmail.com>
+Joseph T. Lyons <JosephTLyons@gmail.com> <josephtlyons@gmail.com>
+Joseph T. Lyons <JosephTLyons@gmail.com> <JosephTLyons@users.noreply.github.com>
Junyoung Cho <june0.cho@samsung.com>
Jyun-Yan You <jyyou.tw@gmail.com> <jyyou@cs.nctu.edu.tw>
Kang Seonghoon <kang.seonghoon@mearie.org> <public+git@mearie.org>
Luke Metz <luke.metz@students.olin.edu>
Luqman Aden <me@luqman.ca> <laden@csclub.uwaterloo.ca>
Luqman Aden <me@luqman.ca> <laden@mozilla.com>
-NAKASHIMA, Makoto <makoto.nksm+github@gmail.com> <makoto.nksm@gmail.com>
-NAKASHIMA, Makoto <makoto.nksm+github@gmail.com> <makoto.nksm+github@gmail.com>
Marcell Pardavi <marcell.pardavi@gmail.com>
Margaret Meyerhofer <mmeyerho@andrew.cmu.edu> <mmeyerho@andrew>
Mark Rousskov <mark.simulacrum@gmail.com>
Mickaël Raybaud-Roig <raybaudroigm@gmail.com> m-r-r <raybaudroigm@gmail.com>
Ms2ger <ms2ger@gmail.com> <Ms2ger@gmail.com>
Mukilan Thiagarajan <mukilanthiagarajan@gmail.com>
+NAKASHIMA, Makoto <makoto.nksm+github@gmail.com> <makoto.nksm@gmail.com>
+NAKASHIMA, Makoto <makoto.nksm+github@gmail.com> <makoto.nksm+github@gmail.com>
Nathan West <Lucretiel@gmail.com> <lucretiel@gmail.com>
Nathan Wilson <wilnathan@gmail.com>
Nathaniel Herman <nherman@post.harvard.edu> Nathaniel Herman <nherman@college.harvard.edu>
Neil Pankey <npankey@gmail.com> <neil@wire.im>
-Nicole Mazzuca <npmazzuca@gmail.com>
Nick Platt <platt.nicholas@gmail.com>
+Nicole Mazzuca <npmazzuca@gmail.com>
Nif Ward <nif.ward@gmail.com>
Oliver Schneider <oliver.schneider@kit.edu> oli-obk <github6541940@oli-obk.de>
Oliver Schneider <oliver.schneider@kit.edu> Oliver 'ker' Schneider <rust19446194516@oli-obk.de>
Tim Joseph Dumol <tim@timdumol.com>
Torsten Weber <TorstenWeber12@gmail.com> <torstenweber12@gmail.com>
Ty Overby <ty@pre-alpha.com>
-Ulrik Sverdrup <bluss@users.noreply.github.com> bluss <bluss>
Ulrik Sverdrup <bluss@users.noreply.github.com> bluss <bluss@users.noreply.github.com>
+Ulrik Sverdrup <bluss@users.noreply.github.com> bluss <bluss>
Ulrik Sverdrup <bluss@users.noreply.github.com> Ulrik Sverdrup <root@localhost>
Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
Vadim Petrochenkov <vadim.petrochenkov@gmail.com> petrochenkov <vadim.petrochenkov@gmail.com>
+++ /dev/null
-// ignore-tidy-filelength
-
-//! Manually manage memory through raw pointers.
-//!
-//! *[See also the pointer primitive types](../../std/primitive.pointer.html).*
-//!
-//! # Safety
-//!
-//! Many functions in this module take raw pointers as arguments and read from
-//! or write to them. For this to be safe, these pointers must be *valid*.
-//! Whether a pointer is valid depends on the operation it is used for
-//! (read or write), and the extent of the memory that is accessed (i.e.,
-//! how many bytes are read/written). Most functions use `*mut T` and `*const T`
-//! to access only a single value, in which case the documentation omits the size
-//! and implicitly assumes it to be `size_of::<T>()` bytes.
-//!
-//! The precise rules for validity are not determined yet. The guarantees that are
-//! provided at this point are very minimal:
-//!
-//! * A [null] pointer is *never* valid, not even for accesses of [size zero][zst].
-//! * All pointers (except for the null pointer) are valid for all operations of
-//!   [size zero][zst].
-//! * All accesses performed by functions in this module are *non-atomic* in the sense
-//!   of [atomic operations] used to synchronize between threads. This means it is
-//!   undefined behavior to perform two concurrent accesses to the same location from different
-//!   threads unless both accesses only read from memory. Notice that this explicitly
-//!   includes [`read_volatile`] and [`write_volatile`]: Volatile accesses cannot
-//!   be used for inter-thread synchronization.
-//! * The result of casting a reference to a pointer is valid for as long as the
-//!   underlying object is live and no reference (just raw pointers) is used to
-//!   access the same memory.
-//!
-//! These axioms, along with careful use of [`offset`] for pointer arithmetic,
-//! are enough to correctly implement many useful things in unsafe code. Stronger guarantees
-//! will be provided eventually, as the [aliasing] rules are being determined. For more
-//! information, see the [book] as well as the section in the reference devoted
-//! to [undefined behavior][ub].
-//!
-//! ## Alignment
-//!
-//! Valid raw pointers as defined above are not necessarily properly aligned (where
-//! "proper" alignment is defined by the pointee type, i.e., `*const T` must be
-//! aligned to `mem::align_of::<T>()`). However, most functions require their
-//! arguments to be properly aligned, and will explicitly state
-//! this requirement in their documentation. Notable exceptions to this are
-//! [`read_unaligned`] and [`write_unaligned`].
-//!
-//! When a function requires proper alignment, it does so even if the access
-//! has size 0, i.e., even if memory is not actually touched. Consider using
-//! [`NonNull::dangling`] in such cases.
-//!
-//! [aliasing]: ../../nomicon/aliasing.html
-//! [book]: ../../book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
-//! [ub]: ../../reference/behavior-considered-undefined.html
-//! [null]: ./fn.null.html
-//! [zst]: ../../nomicon/exotic-sizes.html#zero-sized-types-zsts
-//! [atomic operations]: ../../std/sync/atomic/index.html
-//! [`copy`]: ../../std/ptr/fn.copy.html
-//! [`offset`]: ../../std/primitive.pointer.html#method.offset
-//! [`read_unaligned`]: ./fn.read_unaligned.html
-//! [`write_unaligned`]: ./fn.write_unaligned.html
-//! [`read_volatile`]: ./fn.read_volatile.html
-//! [`write_volatile`]: ./fn.write_volatile.html
-//! [`NonNull::dangling`]: ./struct.NonNull.html#method.dangling
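The alignment rules above apply even to zero-sized accesses, which is why the module docs point at `NonNull::dangling`. As a minimal sketch (not part of the original module), this shows that `dangling()` yields a pointer that is non-null and properly aligned for its pointee type, without referring to any allocation:

```rust
use std::mem;
use std::ptr::NonNull;

fn main() {
    // `dangling()` points to no allocation; it is only suitable for
    // zero-sized operations, but it satisfies both validity preconditions
    // named above: non-null and aligned for `u64`.
    let p: NonNull<u64> = NonNull::dangling();
    assert!(!p.as_ptr().is_null());
    assert_eq!(p.as_ptr() as usize % mem::align_of::<u64>(), 0);
}
```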
-
-#![stable(feature = "rust1", since = "1.0.0")]
-
-use crate::convert::From;
-use crate::intrinsics;
-use crate::ops::{CoerceUnsized, DispatchFromDyn};
-use crate::fmt;
-use crate::hash;
-use crate::marker::{PhantomData, Unsize};
-use crate::mem::{self, MaybeUninit};
-
-use crate::cmp::Ordering::{self, Less, Equal, Greater};
-
-#[stable(feature = "rust1", since = "1.0.0")]
-pub use crate::intrinsics::copy_nonoverlapping;
-
-#[stable(feature = "rust1", since = "1.0.0")]
-pub use crate::intrinsics::copy;
-
-#[stable(feature = "rust1", since = "1.0.0")]
-pub use crate::intrinsics::write_bytes;
-
-/// Executes the destructor (if any) of the pointed-to value.
-///
-/// This is semantically equivalent to calling [`ptr::read`] and discarding
-/// the result, but has the following advantages:
-///
-/// * It is *required* to use `drop_in_place` to drop unsized types like
-/// trait objects, because they can't be read out onto the stack and
-/// dropped normally.
-///
-/// * It is friendlier to the optimizer to do this over [`ptr::read`] when
-/// dropping manually allocated memory (e.g., when writing Box/Rc/Vec),
-/// as the compiler doesn't need to prove that it's sound to elide the
-/// copy.
-///
-/// [`ptr::read`]: ../ptr/fn.read.html
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `to_drop` must be [valid] for reads.
-///
-/// * `to_drop` must be properly aligned. See the example below for how to drop
-/// an unaligned pointer.
-///
-/// Additionally, if `T` is not [`Copy`], using the pointed-to value after
-/// calling `drop_in_place` can cause undefined behavior. Note that `*to_drop =
-/// foo` counts as a use because it will cause the value to be dropped
-/// again. [`write`] can be used to overwrite data without causing it to be
-/// dropped.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-/// [`Copy`]: ../marker/trait.Copy.html
-/// [`write`]: ../ptr/fn.write.html
-///
-/// # Examples
-///
-/// Manually remove the last item from a vector:
-///
-/// ```
-/// use std::ptr;
-/// use std::rc::Rc;
-///
-/// let last = Rc::new(1);
-/// let weak = Rc::downgrade(&last);
-///
-/// let mut v = vec![Rc::new(0), last];
-///
-/// unsafe {
-///     // Get a raw pointer to the last element in `v`.
-///     let ptr = &mut v[1] as *mut _;
-///     // Shorten `v` to prevent the last item from being dropped. We do that first,
-///     // to prevent issues if the `drop_in_place` below panics.
-///     v.set_len(1);
-///     // Without a call to `drop_in_place`, the last item would never be dropped,
-///     // and the memory it manages would be leaked.
-///     ptr::drop_in_place(ptr);
-/// }
-///
-/// assert_eq!(v, &[0.into()]);
-///
-/// // Ensure that the last item was dropped.
-/// assert!(weak.upgrade().is_none());
-/// ```
-///
-/// Unaligned values cannot be dropped in place, they must be copied to an aligned
-/// location first:
-/// ```
-/// use std::ptr;
-/// use std::mem::{self, MaybeUninit};
-///
-/// unsafe fn drop_after_copy<T>(to_drop: *mut T) {
-///     let mut copy: MaybeUninit<T> = MaybeUninit::uninit();
-///     ptr::copy(to_drop, copy.as_mut_ptr(), 1);
-///     drop(copy.assume_init());
-/// }
-///
-/// #[repr(packed, C)]
-/// struct Packed {
-///     _padding: u8,
-///     unaligned: Vec<i32>,
-/// }
-///
-/// let mut p = Packed { _padding: 0, unaligned: vec![42] };
-/// unsafe {
-///     drop_after_copy(&mut p.unaligned as *mut _);
-///     mem::forget(p);
-/// }
-/// ```
-///
-/// Notice that the compiler performs this copy automatically when dropping packed structs,
-/// i.e., you do not usually have to worry about such issues unless you call `drop_in_place`
-/// manually.
-#[stable(feature = "drop_in_place", since = "1.8.0")]
-#[inline(always)]
-pub unsafe fn drop_in_place<T: ?Sized>(to_drop: *mut T) {
-    real_drop_in_place(&mut *to_drop)
-}
-
-// The real `drop_in_place` -- the one that gets called implicitly when variables go
-// out of scope -- should have a safe reference and not a raw pointer as argument
-// type. When we drop a local variable, we access it with a pointer that behaves
-// like a safe reference; transmuting that to a raw pointer does not mean we can
-// actually access it with raw pointers.
-#[lang = "drop_in_place"]
-#[allow(unconditional_recursion)]
-unsafe fn real_drop_in_place<T: ?Sized>(to_drop: &mut T) {
-    // Code here does not matter - this is replaced by the
-    // real drop glue by the compiler.
-    real_drop_in_place(to_drop)
-}
-
-/// Creates a null raw pointer.
-///
-/// # Examples
-///
-/// ```
-/// use std::ptr;
-///
-/// let p: *const i32 = ptr::null();
-/// assert!(p.is_null());
-/// ```
-#[inline]
-#[stable(feature = "rust1", since = "1.0.0")]
-#[rustc_promotable]
-pub const fn null<T>() -> *const T { 0 as *const T }
-
-/// Creates a null mutable raw pointer.
-///
-/// # Examples
-///
-/// ```
-/// use std::ptr;
-///
-/// let p: *mut i32 = ptr::null_mut();
-/// assert!(p.is_null());
-/// ```
-#[inline]
-#[stable(feature = "rust1", since = "1.0.0")]
-#[rustc_promotable]
-pub const fn null_mut<T>() -> *mut T { 0 as *mut T }
-
-/// Swaps the values at two mutable locations of the same type, without
-/// deinitializing either.
-///
-/// But for the following two exceptions, this function is semantically
-/// equivalent to [`mem::swap`]:
-///
-/// * It operates on raw pointers instead of references. When references are
-/// available, [`mem::swap`] should be preferred.
-///
-/// * The two pointed-to values may overlap. If the values do overlap, then the
-/// overlapping region of memory from `x` will be used. This is demonstrated
-/// in the second example below.
-///
-/// [`mem::swap`]: ../mem/fn.swap.html
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * Both `x` and `y` must be [valid] for reads and writes.
-///
-/// * Both `x` and `y` must be properly aligned.
-///
-/// Note that even if `T` has size `0`, the pointers must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-///
-/// # Examples
-///
-/// Swapping two non-overlapping regions:
-///
-/// ```
-/// use std::ptr;
-///
-/// let mut array = [0, 1, 2, 3];
-///
-/// let x = array[0..].as_mut_ptr() as *mut [u32; 2]; // this is `array[0..2]`
-/// let y = array[2..].as_mut_ptr() as *mut [u32; 2]; // this is `array[2..4]`
-///
-/// unsafe {
-///     ptr::swap(x, y);
-///     assert_eq!([2, 3, 0, 1], array);
-/// }
-/// ```
-///
-/// Swapping two overlapping regions:
-///
-/// ```
-/// use std::ptr;
-///
-/// let mut array = [0, 1, 2, 3];
-///
-/// let x = array[0..].as_mut_ptr() as *mut [u32; 3]; // this is `array[0..3]`
-/// let y = array[1..].as_mut_ptr() as *mut [u32; 3]; // this is `array[1..4]`
-///
-/// unsafe {
-///     ptr::swap(x, y);
-///     // The indices `1..3` of the slice overlap between `x` and `y`.
-///     // Reasonable results would be for them to be `[2, 3]`, so that indices `0..3` are
-///     // `[1, 2, 3]` (matching `y` before the `swap`); or for them to be `[0, 1]`
-///     // so that indices `1..4` are `[0, 1, 2]` (matching `x` before the `swap`).
-///     // This implementation is defined to make the latter choice.
-///     assert_eq!([1, 0, 1, 2], array);
-/// }
-/// ```
-#[inline]
-#[stable(feature = "rust1", since = "1.0.0")]
-pub unsafe fn swap<T>(x: *mut T, y: *mut T) {
-    // Give ourselves some scratch space to work with.
-    // We do not have to worry about drops: `MaybeUninit` does nothing when dropped.
-    let mut tmp = MaybeUninit::<T>::uninit();
-
-    // Perform the swap
-    copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
-    copy(y, x, 1); // `x` and `y` may overlap
-    copy_nonoverlapping(tmp.as_ptr(), y, 1);
-}
-
-/// Swaps `count * size_of::<T>()` bytes between the two regions of memory
-/// beginning at `x` and `y`. The two regions must *not* overlap.
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * Both `x` and `y` must be [valid] for reads and writes of `count *
-/// size_of::<T>()` bytes.
-///
-/// * Both `x` and `y` must be properly aligned.
-///
-/// * The region of memory beginning at `x` with a size of `count *
-/// size_of::<T>()` bytes must *not* overlap with the region of memory
-/// beginning at `y` with the same size.
-///
-/// Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`,
-/// the pointers must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-///
-/// # Examples
-///
-/// Basic usage:
-///
-/// ```
-/// use std::ptr;
-///
-/// let mut x = [1, 2, 3, 4];
-/// let mut y = [7, 8, 9];
-///
-/// unsafe {
-///     ptr::swap_nonoverlapping(x.as_mut_ptr(), y.as_mut_ptr(), 2);
-/// }
-///
-/// assert_eq!(x, [7, 8, 3, 4]);
-/// assert_eq!(y, [1, 2, 9]);
-/// ```
-#[inline]
-#[stable(feature = "swap_nonoverlapping", since = "1.27.0")]
-pub unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
-    let x = x as *mut u8;
-    let y = y as *mut u8;
-    let len = mem::size_of::<T>() * count;
-    swap_nonoverlapping_bytes(x, y, len)
-}
-
-#[inline]
-pub(crate) unsafe fn swap_nonoverlapping_one<T>(x: *mut T, y: *mut T) {
-    // For types smaller than the block optimization below,
-    // just swap directly to avoid pessimizing codegen.
-    if mem::size_of::<T>() < 32 {
-        let z = read(x);
-        copy_nonoverlapping(y, x, 1);
-        write(y, z);
-    } else {
-        swap_nonoverlapping(x, y, 1);
-    }
-}
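The small-type fast path in `swap_nonoverlapping_one` is just the classic three-step exchange: read out one value, copy the other over it, write the saved value back. A minimal illustration of that sequence (an editorial sketch, not code from this module) using the public `ptr` API:

```rust
use std::ptr;

fn main() {
    let mut x = 1i32;
    let mut y = 2i32;
    // i32 is well under the 32-byte block threshold, so this is the path
    // the function would take: read x, copy y over x, write old x into y.
    unsafe {
        let z = ptr::read(&x);
        ptr::copy_nonoverlapping(&y, &mut x, 1);
        ptr::write(&mut y, z);
    }
    assert_eq!((x, y), (2, 1));
}
```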
-
-#[inline]
-unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
-    // The approach here is to utilize simd to swap x & y efficiently. Testing reveals
-    // that swapping either 32 bytes or 64 bytes at a time is most efficient for Intel
-    // Haswell E processors. LLVM is more able to optimize if we give a struct a
-    // #[repr(simd)], even if we don't actually use this struct directly.
-    //
-    // FIXME repr(simd) broken on emscripten and redox
-    #[cfg_attr(not(any(target_os = "emscripten", target_os = "redox")), repr(simd))]
-    struct Block(u64, u64, u64, u64);
-    struct UnalignedBlock(u64, u64, u64, u64);
-
-    let block_size = mem::size_of::<Block>();
-
-    // Loop through x & y, copying them `Block` at a time
-    // The optimizer should unroll the loop fully for most types
-    // N.B. We can't use a for loop as the `range` impl calls `mem::swap` recursively
-    let mut i = 0;
-    while i + block_size <= len {
-        // Create some uninitialized memory as scratch space
-        // Declaring `t` here avoids aligning the stack when this loop is unused
-        let mut t = mem::MaybeUninit::<Block>::uninit();
-        let t = t.as_mut_ptr() as *mut u8;
-        let x = x.add(i);
-        let y = y.add(i);
-
-        // Swap a block of bytes of x & y, using t as a temporary buffer
-        // This should be optimized into efficient SIMD operations where available
-        copy_nonoverlapping(x, t, block_size);
-        copy_nonoverlapping(y, x, block_size);
-        copy_nonoverlapping(t, y, block_size);
-        i += block_size;
-    }
-
-    if i < len {
-        // Swap any remaining bytes
-        let mut t = mem::MaybeUninit::<UnalignedBlock>::uninit();
-        let rem = len - i;
-
-        let t = t.as_mut_ptr() as *mut u8;
-        let x = x.add(i);
-        let y = y.add(i);
-
-        copy_nonoverlapping(x, t, rem);
-        copy_nonoverlapping(y, x, rem);
-        copy_nonoverlapping(t, y, rem);
-    }
-}
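The block-at-a-time strategy above (swap full 32-byte blocks through a scratch buffer, then handle the sub-block tail) can be sketched in safe Rust. `swap_slices` here is an editorial illustration of the algorithm, not the actual implementation:

```rust
/// Swap two equal-length byte slices block-at-a-time, mirroring the
/// structure of `swap_nonoverlapping_bytes`: full blocks first, then
/// the remainder, each exchanged via a temporary buffer.
fn swap_slices(x: &mut [u8], y: &mut [u8]) {
    assert_eq!(x.len(), y.len());
    const BLOCK: usize = 32; // analogous to size_of::<Block>()
    let mut tmp = [0u8; BLOCK];
    // `chunks_mut` yields the full blocks followed by the short tail,
    // playing the role of the `while` loop plus the `if i < len` branch.
    for (xc, yc) in x.chunks_mut(BLOCK).zip(y.chunks_mut(BLOCK)) {
        let n = xc.len();
        tmp[..n].copy_from_slice(xc); // t <- x
        xc.copy_from_slice(yc);       // x <- y
        yc.copy_from_slice(&tmp[..n]); // y <- t
    }
}

fn main() {
    // 40 bytes: one full 32-byte block plus an 8-byte remainder.
    let mut a = [1u8; 40];
    let mut b = [2u8; 40];
    swap_slices(&mut a, &mut b);
    assert_eq!(a, [2u8; 40]);
    assert_eq!(b, [1u8; 40]);
}
```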
-
-/// Moves `src` into the location pointed to by `dst`, returning the previous `dst` value.
-///
-/// Neither value is dropped.
-///
-/// This function is semantically equivalent to [`mem::replace`] except that it
-/// operates on raw pointers instead of references. When references are
-/// available, [`mem::replace`] should be preferred.
-///
-/// [`mem::replace`]: ../mem/fn.replace.html
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `dst` must be [valid] for writes.
-///
-/// * `dst` must be properly aligned.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-///
-/// # Examples
-///
-/// ```
-/// use std::ptr;
-///
-/// let mut rust = vec!['b', 'u', 's', 't'];
-///
-/// // `mem::replace` would have the same effect without requiring the unsafe
-/// // block.
-/// let b = unsafe {
-///     ptr::replace(&mut rust[0], 'r')
-/// };
-///
-/// assert_eq!(b, 'b');
-/// assert_eq!(rust, &['r', 'u', 's', 't']);
-/// ```
-#[inline]
-#[stable(feature = "rust1", since = "1.0.0")]
-pub unsafe fn replace<T>(dst: *mut T, mut src: T) -> T {
-    mem::swap(&mut *dst, &mut src); // cannot overlap
-    src
-}
-
-/// Reads the value from `src` without moving it. This leaves the
-/// memory in `src` unchanged.
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `src` must be [valid] for reads.
-///
-/// * `src` must be properly aligned. Use [`read_unaligned`] if this is not the
-/// case.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
-///
-/// # Examples
-///
-/// Basic usage:
-///
-/// ```
-/// let x = 12;
-/// let y = &x as *const i32;
-///
-/// unsafe {
-///     assert_eq!(std::ptr::read(y), 12);
-/// }
-/// ```
-///
-/// Manually implement [`mem::swap`]:
-///
-/// ```
-/// use std::ptr;
-///
-/// fn swap<T>(a: &mut T, b: &mut T) {
-///     unsafe {
-///         // Create a bitwise copy of the value at `a` in `tmp`.
-///         let tmp = ptr::read(a);
-///
-///         // Exiting at this point (either by explicitly returning or by
-///         // calling a function which panics) would cause the value in `tmp` to
-///         // be dropped while the same value is still referenced by `a`. This
-///         // could trigger undefined behavior if `T` is not `Copy`.
-///
-///         // Create a bitwise copy of the value at `b` in `a`.
-///         // This is safe because mutable references cannot alias.
-///         ptr::copy_nonoverlapping(b, a, 1);
-///
-///         // As above, exiting here could trigger undefined behavior because
-///         // the same value is referenced by `a` and `b`.
-///
-///         // Move `tmp` into `b`.
-///         ptr::write(b, tmp);
-///
-///         // `tmp` has been moved (`write` takes ownership of its second argument),
-///         // so nothing is dropped implicitly here.
-///     }
-/// }
-///
-/// let mut foo = "foo".to_owned();
-/// let mut bar = "bar".to_owned();
-///
-/// swap(&mut foo, &mut bar);
-///
-/// assert_eq!(foo, "bar");
-/// assert_eq!(bar, "foo");
-/// ```
-///
-/// ## Ownership of the Returned Value
-///
-/// `read` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`].
-/// If `T` is not [`Copy`], using both the returned value and the value at
-/// `*src` can violate memory safety. Note that assigning to `*src` counts as a
-/// use because it will attempt to drop the value at `*src`.
-///
-/// [`write`] can be used to overwrite data without causing it to be dropped.
-///
-/// ```
-/// use std::ptr;
-///
-/// let mut s = String::from("foo");
-/// unsafe {
-///     // `s2` now points to the same underlying memory as `s`.
-///     let mut s2: String = ptr::read(&s);
-///
-///     assert_eq!(s2, "foo");
-///
-///     // Assigning to `s2` causes its original value to be dropped. Beyond
-///     // this point, `s` must no longer be used, as the underlying memory has
-///     // been freed.
-///     s2 = String::default();
-///     assert_eq!(s2, "");
-///
-///     // Assigning to `s` would cause the old value to be dropped again,
-///     // resulting in undefined behavior.
-///     // s = String::from("bar"); // ERROR
-///
-///     // `ptr::write` can be used to overwrite a value without dropping it.
-///     ptr::write(&mut s, String::from("bar"));
-/// }
-///
-/// assert_eq!(s, "bar");
-/// ```
-///
-/// [`mem::swap`]: ../mem/fn.swap.html
-/// [valid]: ../ptr/index.html#safety
-/// [`Copy`]: ../marker/trait.Copy.html
-/// [`read_unaligned`]: ./fn.read_unaligned.html
-/// [`write`]: ./fn.write.html
-#[inline]
-#[stable(feature = "rust1", since = "1.0.0")]
-pub unsafe fn read<T>(src: *const T) -> T {
-    let mut tmp = MaybeUninit::<T>::uninit();
-    copy_nonoverlapping(src, tmp.as_mut_ptr(), 1);
-    tmp.assume_init()
-}
-
-/// Reads the value from `src` without moving it. This leaves the
-/// memory in `src` unchanged.
-///
-/// Unlike [`read`], `read_unaligned` works with unaligned pointers.
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `src` must be [valid] for reads.
-///
-/// Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of
-/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
-/// value and the value at `*src` can [violate memory safety][read-ownership].
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL.
-///
-/// [`Copy`]: ../marker/trait.Copy.html
-/// [`read`]: ./fn.read.html
-/// [`write_unaligned`]: ./fn.write_unaligned.html
-/// [read-ownership]: ./fn.read.html#ownership-of-the-returned-value
-/// [valid]: ../ptr/index.html#safety
-///
-/// # Examples
-///
-/// Access members of a packed struct by reference:
-///
-/// ```
-/// use std::ptr;
-///
-/// #[repr(packed, C)]
-/// struct Packed {
-/// _padding: u8,
-/// unaligned: u32,
-/// }
-///
-/// let x = Packed {
-/// _padding: 0x00,
-/// unaligned: 0x01020304,
-/// };
-///
-/// let v = unsafe {
-///     // Take the address of a 32-bit integer which is not aligned.
-///     // This must be done as a raw pointer; unaligned references are invalid.
-///     let unaligned = &x.unaligned as *const u32;
-///
-///     // Dereferencing normally will emit an aligned load instruction,
-///     // causing undefined behavior.
-///     // let v = *unaligned; // ERROR
-///
-///     // Instead, use `read_unaligned` to read improperly aligned values.
-///     let v = ptr::read_unaligned(unaligned);
-///
-///     v
-/// };
-///
-/// // Accessing unaligned values directly is safe.
-/// assert!(x.unaligned == v);
-/// ```
-#[inline]
-#[stable(feature = "ptr_unaligned", since = "1.17.0")]
-pub unsafe fn read_unaligned<T>(src: *const T) -> T {
-    let mut tmp = MaybeUninit::<T>::uninit();
-    copy_nonoverlapping(src as *const u8,
-                        tmp.as_mut_ptr() as *mut u8,
-                        mem::size_of::<T>());
-    tmp.assume_init()
-}
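Beyond packed structs, a byte-for-byte copy through `read_unaligned` is also how one decodes a multi-byte integer from an arbitrary byte buffer, where no alignment can be assumed. A small round-trip sketch (editorial, not from the original docs):

```rust
use std::ptr;

fn main() {
    // `bytes` is a plain [u8; 4] with no u32 alignment guarantee, so a
    // plain dereference of `*const u32` would be undefined behavior;
    // `read_unaligned` performs the byte-wise load safely.
    let x: u32 = 0x0102_0304;
    let bytes = x.to_ne_bytes();
    let v = unsafe { ptr::read_unaligned(bytes.as_ptr() as *const u32) };
    assert_eq!(v, x);
}
```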
-
-/// Overwrites a memory location with the given value without reading or
-/// dropping the old value.
-///
-/// `write` does not drop the contents of `dst`. This is safe, but it could leak
-/// allocations or resources, so care should be taken not to overwrite an object
-/// that should be dropped.
-///
-/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
-/// location pointed to by `dst`.
-///
-/// This is appropriate for initializing uninitialized memory, or overwriting
-/// memory that has previously been [`read`] from.
-///
-/// [`read`]: ./fn.read.html
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `dst` must be [valid] for writes.
-///
-/// * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the
-/// case.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-/// [`write_unaligned`]: ./fn.write_unaligned.html
-///
-/// # Examples
-///
-/// Basic usage:
-///
-/// ```
-/// let mut x = 0;
-/// let y = &mut x as *mut i32;
-/// let z = 12;
-///
-/// unsafe {
-///     std::ptr::write(y, z);
-///     assert_eq!(std::ptr::read(y), 12);
-/// }
-/// ```
-///
-/// Manually implement [`mem::swap`]:
-///
-/// ```
-/// use std::ptr;
-///
-/// fn swap<T>(a: &mut T, b: &mut T) {
-///     unsafe {
-///         // Create a bitwise copy of the value at `a` in `tmp`.
-///         let tmp = ptr::read(a);
-///
-///         // Exiting at this point (either by explicitly returning or by
-///         // calling a function which panics) would cause the value in `tmp` to
-///         // be dropped while the same value is still referenced by `a`. This
-///         // could trigger undefined behavior if `T` is not `Copy`.
-///
-///         // Create a bitwise copy of the value at `b` in `a`.
-///         // This is safe because mutable references cannot alias.
-///         ptr::copy_nonoverlapping(b, a, 1);
-///
-///         // As above, exiting here could trigger undefined behavior because
-///         // the same value is referenced by `a` and `b`.
-///
-///         // Move `tmp` into `b`.
-///         ptr::write(b, tmp);
-///
-///         // `tmp` has been moved (`write` takes ownership of its second argument),
-///         // so nothing is dropped implicitly here.
-///     }
-/// }
-///
-/// let mut foo = "foo".to_owned();
-/// let mut bar = "bar".to_owned();
-///
-/// swap(&mut foo, &mut bar);
-///
-/// assert_eq!(foo, "bar");
-/// assert_eq!(bar, "foo");
-/// ```
-///
-/// [`mem::swap`]: ../mem/fn.swap.html
-#[inline]
-#[stable(feature = "rust1", since = "1.0.0")]
-pub unsafe fn write<T>(dst: *mut T, src: T) {
-    intrinsics::move_val_init(&mut *dst, src)
-}
-
-/// Overwrites a memory location with the given value without reading or
-/// dropping the old value.
-///
-/// Unlike [`write`], the pointer may be unaligned.
-///
-/// `write_unaligned` does not drop the contents of `dst`. This is safe, but it
-/// could leak allocations or resources, so care should be taken not to overwrite
-/// an object that should be dropped.
-///
-/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
-/// location pointed to by `dst`.
-///
-/// This is appropriate for initializing uninitialized memory, or overwriting
-/// memory that has previously been read with [`read_unaligned`].
-///
-/// [`write`]: ./fn.write.html
-/// [`read_unaligned`]: ./fn.read_unaligned.html
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `dst` must be [valid] for writes.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL.
-///
-/// [valid]: ../ptr/index.html#safety
-///
-/// # Examples
-///
-/// Access fields in a packed struct:
-///
-/// ```
-/// use std::{mem, ptr};
-///
-/// #[repr(packed, C)]
-/// #[derive(Default)]
-/// struct Packed {
-/// _padding: u8,
-/// unaligned: u32,
-/// }
-///
-/// let v = 0x01020304;
-/// let mut x: Packed = unsafe { mem::zeroed() };
-///
-/// unsafe {
-/// // Take a reference to a 32-bit integer which is not aligned.
-/// let unaligned = &mut x.unaligned as *mut u32;
-///
-/// // Dereferencing normally will emit an aligned store instruction,
-/// // causing undefined behavior because the pointer is not aligned.
-/// // *unaligned = v; // ERROR
-///
-/// // Instead, use `write_unaligned` to write improperly aligned values.
-/// ptr::write_unaligned(unaligned, v);
-/// }
-///
-/// // Accessing unaligned values directly is safe.
-/// assert!(x.unaligned == v);
-/// ```
-#[inline]
-#[stable(feature = "ptr_unaligned", since = "1.17.0")]
-pub unsafe fn write_unaligned<T>(dst: *mut T, src: T) {
- copy_nonoverlapping(&src as *const T as *const u8,
- dst as *mut u8,
- mem::size_of::<T>());
- mem::forget(src);
-}
-
-/// Performs a volatile read of the value from `src` without moving it. This
-/// leaves the memory in `src` unchanged.
-///
-/// Volatile operations are intended to act on I/O memory, and are guaranteed
-/// to not be elided or reordered by the compiler across other volatile
-/// operations.
-///
-/// [`write_volatile`]: ./fn.write_volatile.html
-///
-/// # Notes
-///
-/// Rust does not currently have a rigorously and formally defined memory model,
-/// so the precise semantics of what "volatile" means here is subject to change
-/// over time. That being said, the semantics will almost always end up pretty
-/// similar to [C11's definition of volatile][c11].
-///
-/// The compiler shouldn't change the relative order or number of volatile
-/// memory operations. However, volatile memory operations on zero-sized types
-/// (e.g., if a zero-sized type is passed to `read_volatile`) are no-ops
-/// and may be ignored.
-///
-/// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `src` must be [valid] for reads.
-///
-/// * `src` must be properly aligned.
-///
-/// Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of
-/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
-/// value and the value at `*src` can [violate memory safety][read-ownership].
-/// However, storing non-[`Copy`] types in volatile memory is almost certainly
-/// incorrect.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-/// [`Copy`]: ../marker/trait.Copy.html
-/// [`read`]: ./fn.read.html
-/// [read-ownership]: ./fn.read.html#ownership-of-the-returned-value
-///
-/// Just like in C, whether an operation is volatile has no bearing whatsoever
-/// on questions involving concurrent access from multiple threads. Volatile
-/// accesses behave exactly like non-atomic accesses in that regard. In particular,
-/// a race between a `read_volatile` and any write operation to the same location
-/// is undefined behavior.
-///
-/// # Examples
-///
-/// Basic usage:
-///
-/// ```
-/// let x = 12;
-/// let y = &x as *const i32;
-///
-/// unsafe {
-/// assert_eq!(std::ptr::read_volatile(y), 12);
-/// }
-/// ```
-#[inline]
-#[stable(feature = "volatile", since = "1.9.0")]
-pub unsafe fn read_volatile<T>(src: *const T) -> T {
- intrinsics::volatile_load(src)
-}
-
-/// Performs a volatile write of a memory location with the given value without
-/// reading or dropping the old value.
-///
-/// Volatile operations are intended to act on I/O memory, and are guaranteed
-/// to not be elided or reordered by the compiler across other volatile
-/// operations.
-///
-/// `write_volatile` does not drop the contents of `dst`. This is safe, but it
-/// could leak allocations or resources, so care should be taken not to overwrite
-/// an object that should be dropped.
-///
-/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
-/// location pointed to by `dst`.
-///
-/// [`read_volatile`]: ./fn.read_volatile.html
-///
-/// # Notes
-///
-/// Rust does not currently have a rigorously and formally defined memory model,
-/// so the precise semantics of what "volatile" means here is subject to change
-/// over time. That being said, the semantics will almost always end up pretty
-/// similar to [C11's definition of volatile][c11].
-///
-/// The compiler shouldn't change the relative order or number of volatile
-/// memory operations. However, volatile memory operations on zero-sized types
-/// (e.g., if a zero-sized type is passed to `write_volatile`) are no-ops
-/// and may be ignored.
-///
-/// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
-///
-/// # Safety
-///
-/// Behavior is undefined if any of the following conditions are violated:
-///
-/// * `dst` must be [valid] for writes.
-///
-/// * `dst` must be properly aligned.
-///
-/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
-///
-/// [valid]: ../ptr/index.html#safety
-///
-/// Just like in C, whether an operation is volatile has no bearing whatsoever
-/// on questions involving concurrent access from multiple threads. Volatile
-/// accesses behave exactly like non-atomic accesses in that regard. In particular,
-/// a race between a `write_volatile` and any other operation (reading or writing)
-/// on the same location is undefined behavior.
-///
-/// # Examples
-///
-/// Basic usage:
-///
-/// ```
-/// let mut x = 0;
-/// let y = &mut x as *mut i32;
-/// let z = 12;
-///
-/// unsafe {
-/// std::ptr::write_volatile(y, z);
-/// assert_eq!(std::ptr::read_volatile(y), 12);
-/// }
-/// ```
-#[inline]
-#[stable(feature = "volatile", since = "1.9.0")]
-pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
- intrinsics::volatile_store(dst, src);
-}
-
-#[lang = "const_ptr"]
-impl<T: ?Sized> *const T {
- /// Returns `true` if the pointer is null.
- ///
- /// Note that unsized types have many possible null pointers, as only the
- /// raw data pointer is considered, not their length, vtable, etc.
- /// Therefore, two pointers that are null may still not compare equal to
- /// each other.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let s: &str = "Follow the rabbit";
- /// let ptr: *const u8 = s.as_ptr();
- /// assert!(!ptr.is_null());
- /// ```
- #[stable(feature = "rust1", since = "1.0.0")]
- #[inline]
- pub fn is_null(self) -> bool {
- // Compare via a cast to a thin pointer, so that only the "data" part
- // of a fat pointer is considered for null-ness.
- (self as *const u8) == null()
- }
-
- /// Cast to a pointer to a different type
- #[unstable(feature = "ptr_cast", issue = "60602")]
- #[inline]
- pub const fn cast<U>(self) -> *const U {
- self as _
- }
-
- /// Returns `None` if the pointer is null, or else returns a reference to
- /// the value wrapped in `Some`.
- ///
- /// # Safety
- ///
- /// While this method and its mutable counterpart are useful for
- /// null-safety, it is important to note that this is still an unsafe
- /// operation because the returned value could be pointing to invalid
- /// memory.
- ///
- /// Additionally, the lifetime `'a` returned is arbitrarily chosen and does
- /// not necessarily reflect the actual lifetime of the data.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let ptr: *const u8 = &10u8 as *const u8;
- ///
- /// unsafe {
- /// if let Some(val_back) = ptr.as_ref() {
- /// println!("We got back the value: {}!", val_back);
- /// }
- /// }
- /// ```
- ///
- /// # Null-unchecked version
- ///
- /// If you are sure the pointer can never be null and are looking for some kind of
- /// `as_ref_unchecked` that returns the `&T` instead of `Option<&T>`, know that you can
- /// dereference the pointer directly.
- ///
- /// ```
- /// let ptr: *const u8 = &10u8 as *const u8;
- ///
- /// unsafe {
- /// let val_back = &*ptr;
- /// println!("We got back the value: {}!", val_back);
- /// }
- /// ```
- #[stable(feature = "ptr_as_ref", since = "1.9.0")]
- #[inline]
- pub unsafe fn as_ref<'a>(self) -> Option<&'a T> {
- if self.is_null() {
- None
- } else {
- Some(&*self)
- }
- }
-
- /// Calculates the offset from a pointer.
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and resulting pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The computed offset, **in bytes**, cannot overflow an `isize`.
- ///
- /// * The offset being in bounds cannot rely on "wrapping around" the address
- /// space. That is, the infinite-precision sum, **in bytes**, must fit in a `usize`.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().add(vec.len())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using `wrapping_offset` instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let s: &str = "123";
- /// let ptr: *const u8 = s.as_ptr();
- ///
- /// unsafe {
- /// println!("{}", *ptr.offset(1) as char);
- /// println!("{}", *ptr.offset(2) as char);
- /// }
- /// ```
- #[stable(feature = "rust1", since = "1.0.0")]
- #[inline]
- pub unsafe fn offset(self, count: isize) -> *const T where T: Sized {
- intrinsics::offset(self, count)
- }
-
- /// Calculates the offset from a pointer using wrapping arithmetic.
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// The resulting pointer does not need to be in bounds, but it is
- /// potentially hazardous to dereference (which requires `unsafe`).
- /// In particular, the resulting pointer may *not* be used to access a
- /// different allocated object than the one `self` points to. In other
- /// words, `x.wrapping_offset(y.wrapping_offset_from(x))` is
- /// *not* the same as `y`, and dereferencing it is undefined behavior
- /// unless `x` and `y` point into the same allocated object.
- ///
- /// Always use `.offset(count)` instead when possible, because `offset`
- /// allows the compiler to optimize better. If you need to cross object
- /// boundaries, cast the pointer to an integer and do the arithmetic there.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// // Iterate using a raw pointer in increments of two elements
- /// let data = [1u8, 2, 3, 4, 5];
- /// let mut ptr: *const u8 = data.as_ptr();
- /// let step = 2;
- /// let end_rounded_up = ptr.wrapping_offset(6);
- ///
- /// // This loop prints "1, 3, 5, "
- /// while ptr != end_rounded_up {
- /// unsafe {
- /// print!("{}, ", *ptr);
- /// }
- /// ptr = ptr.wrapping_offset(step);
- /// }
- /// ```
- #[stable(feature = "ptr_wrapping_offset", since = "1.16.0")]
- #[inline]
- pub fn wrapping_offset(self, count: isize) -> *const T where T: Sized {
- unsafe {
- intrinsics::arith_offset(self, count)
- }
- }
-
- /// Calculates the distance between two pointers. The returned value is in
- /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
- ///
- /// This function is the inverse of [`offset`].
- ///
- /// [`offset`]: #method.offset
- /// [`wrapping_offset_from`]: #method.wrapping_offset_from
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and other pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The distance between the pointers, **in bytes**, cannot overflow an `isize`.
- ///
- /// * The distance between the pointers, in bytes, must be an exact multiple
- /// of the size of `T`.
- ///
- /// * The distance being in bounds cannot rely on "wrapping around" the address space.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `ptr_into_vec.offset_from(vec.as_ptr())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using [`wrapping_offset_from`] instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Panics
- ///
- /// This function panics if `T` is a Zero-Sized Type ("ZST").
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// #![feature(ptr_offset_from)]
- ///
- /// let a = [0; 5];
- /// let ptr1: *const i32 = &a[1];
- /// let ptr2: *const i32 = &a[3];
- /// unsafe {
- /// assert_eq!(ptr2.offset_from(ptr1), 2);
- /// assert_eq!(ptr1.offset_from(ptr2), -2);
- /// assert_eq!(ptr1.offset(2), ptr2);
- /// assert_eq!(ptr2.offset(-2), ptr1);
- /// }
- /// ```
- #[unstable(feature = "ptr_offset_from", issue = "41079")]
- #[inline]
- pub unsafe fn offset_from(self, origin: *const T) -> isize where T: Sized {
- let pointee_size = mem::size_of::<T>();
- assert!(0 < pointee_size && pointee_size <= isize::max_value() as usize);
-
- // This is the same sequence that Clang emits for pointer subtraction.
- // It can be neither `nsw` nor `nuw` because the input is treated as
- // unsigned but then the output is treated as signed, so neither works.
- let d = isize::wrapping_sub(self as _, origin as _);
- intrinsics::exact_div(d, pointee_size as _)
- }
-
- /// Calculates the distance between two pointers. The returned value is in
- /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
- ///
- /// If the address difference between the two pointers is not a multiple of
- /// `mem::size_of::<T>()` then the result of the division is rounded towards
- /// zero.
- ///
- /// Though this method is safe for any two pointers, note that its result
- /// will be mostly useless if the two pointers aren't into the same allocated
- /// object, for example if they point to two different local variables.
- ///
- /// # Panics
- ///
- /// This function panics if `T` is a zero-sized type.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// #![feature(ptr_wrapping_offset_from)]
- ///
- /// let a = [0; 5];
- /// let ptr1: *const i32 = &a[1];
- /// let ptr2: *const i32 = &a[3];
- /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
- /// assert_eq!(ptr1.wrapping_offset_from(ptr2), -2);
- /// assert_eq!(ptr1.wrapping_offset(2), ptr2);
- /// assert_eq!(ptr2.wrapping_offset(-2), ptr1);
- ///
- /// let ptr1: *const i32 = 3 as _;
- /// let ptr2: *const i32 = 13 as _;
- /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
- /// ```
- #[unstable(feature = "ptr_wrapping_offset_from", issue = "41079")]
- #[inline]
- pub fn wrapping_offset_from(self, origin: *const T) -> isize where T: Sized {
- let pointee_size = mem::size_of::<T>();
- assert!(0 < pointee_size && pointee_size <= isize::max_value() as usize);
-
- let d = isize::wrapping_sub(self as _, origin as _);
- d.wrapping_div(pointee_size as _)
- }
-
- /// Calculates the offset from a pointer (convenience for `.offset(count as isize)`).
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and resulting pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The computed offset, **in bytes**, cannot overflow an `isize`.
- ///
- /// * The offset being in bounds cannot rely on "wrapping around" the address
- /// space. That is, the infinite-precision sum must fit in a `usize`.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().add(vec.len())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using `wrapping_offset` instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let s: &str = "123";
- /// let ptr: *const u8 = s.as_ptr();
- ///
- /// unsafe {
- /// println!("{}", *ptr.add(1) as char);
- /// println!("{}", *ptr.add(2) as char);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn add(self, count: usize) -> Self
- where T: Sized,
- {
- self.offset(count as isize)
- }
-
- /// Calculates the offset from a pointer (convenience for
- /// `.offset((count as isize).wrapping_neg())`).
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and resulting pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The computed offset cannot exceed `isize::MAX` **bytes**.
- ///
- /// * The offset being in bounds cannot rely on "wrapping around" the address
- /// space. That is, the infinite-precision sum must fit in a `usize`.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().add(vec.len()).sub(vec.len())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using `wrapping_offset` instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let s: &str = "123";
- ///
- /// unsafe {
- /// let end: *const u8 = s.as_ptr().add(3);
- /// println!("{}", *end.sub(1) as char);
- /// println!("{}", *end.sub(2) as char);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn sub(self, count: usize) -> Self
- where T: Sized,
- {
- self.offset((count as isize).wrapping_neg())
- }
-
- /// Calculates the offset from a pointer using wrapping arithmetic.
- /// (convenience for `.wrapping_offset(count as isize)`)
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// The resulting pointer does not need to be in bounds, but it is
- /// potentially hazardous to dereference (which requires `unsafe`).
- ///
- /// Always use `.add(count)` instead when possible, because `add`
- /// allows the compiler to optimize better.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// // Iterate using a raw pointer in increments of two elements
- /// let data = [1u8, 2, 3, 4, 5];
- /// let mut ptr: *const u8 = data.as_ptr();
- /// let step = 2;
- /// let end_rounded_up = ptr.wrapping_add(6);
- ///
- /// // This loop prints "1, 3, 5, "
- /// while ptr != end_rounded_up {
- /// unsafe {
- /// print!("{}, ", *ptr);
- /// }
- /// ptr = ptr.wrapping_add(step);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub fn wrapping_add(self, count: usize) -> Self
- where T: Sized,
- {
- self.wrapping_offset(count as isize)
- }
-
- /// Calculates the offset from a pointer using wrapping arithmetic.
- /// (convenience for `.wrapping_offset((count as isize).wrapping_neg())`)
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// The resulting pointer does not need to be in bounds, but it is
- /// potentially hazardous to dereference (which requires `unsafe`).
- ///
- /// Always use `.sub(count)` instead when possible, because `sub`
- /// allows the compiler to optimize better.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// // Iterate using a raw pointer in increments of two elements (backwards)
- /// let data = [1u8, 2, 3, 4, 5];
- /// let mut ptr: *const u8 = data.as_ptr();
- /// let start_rounded_down = ptr.wrapping_sub(2);
- /// ptr = ptr.wrapping_add(4);
- /// let step = 2;
- /// // This loop prints "5, 3, 1, "
- /// while ptr != start_rounded_down {
- /// unsafe {
- /// print!("{}, ", *ptr);
- /// }
- /// ptr = ptr.wrapping_sub(step);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub fn wrapping_sub(self, count: usize) -> Self
- where T: Sized,
- {
- self.wrapping_offset((count as isize).wrapping_neg())
- }
-
- /// Reads the value from `self` without moving it. This leaves the
- /// memory in `self` unchanged.
- ///
- /// See [`ptr::read`] for safety concerns and examples.
- ///
- /// [`ptr::read`]: ./ptr/fn.read.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn read(self) -> T
- where T: Sized,
- {
- read(self)
- }
-
- /// Performs a volatile read of the value from `self` without moving it. This
- /// leaves the memory in `self` unchanged.
- ///
- /// Volatile operations are intended to act on I/O memory, and are guaranteed
- /// to not be elided or reordered by the compiler across other volatile
- /// operations.
- ///
- /// See [`ptr::read_volatile`] for safety concerns and examples.
- ///
- /// [`ptr::read_volatile`]: ./ptr/fn.read_volatile.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn read_volatile(self) -> T
- where T: Sized,
- {
- read_volatile(self)
- }
-
- /// Reads the value from `self` without moving it. This leaves the
- /// memory in `self` unchanged.
- ///
- /// Unlike `read`, the pointer may be unaligned.
- ///
- /// See [`ptr::read_unaligned`] for safety concerns and examples.
- ///
- /// [`ptr::read_unaligned`]: ./ptr/fn.read_unaligned.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn read_unaligned(self) -> T
- where T: Sized,
- {
- read_unaligned(self)
- }
-
- /// Copies `count * size_of::<T>()` bytes from `self` to `dest`. The source
- /// and destination may overlap.
- ///
- /// NOTE: this has the *same* argument order as [`ptr::copy`].
- ///
- /// See [`ptr::copy`] for safety concerns and examples.
- ///
- /// [`ptr::copy`]: ./ptr/fn.copy.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn copy_to(self, dest: *mut T, count: usize)
- where T: Sized,
- {
- copy(self, dest, count)
- }
-
- /// Copies `count * size_of::<T>()` bytes from `self` to `dest`. The source
- /// and destination may *not* overlap.
- ///
- /// NOTE: this has the *same* argument order as [`ptr::copy_nonoverlapping`].
- ///
- /// See [`ptr::copy_nonoverlapping`] for safety concerns and examples.
- ///
- /// [`ptr::copy_nonoverlapping`]: ./ptr/fn.copy_nonoverlapping.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)
- where T: Sized,
- {
- copy_nonoverlapping(self, dest, count)
- }
-
- /// Computes the offset that needs to be applied to the pointer in order to make it aligned to
- /// `align`.
- ///
- /// If it is not possible to align the pointer, the implementation returns
- /// `usize::max_value()`.
- ///
- /// The offset is expressed in number of `T` elements, and not bytes. The value returned can be
- /// used with the `offset` or `offset_to` methods.
- ///
- /// There are no guarantees whatsoever that offsetting the pointer will not overflow or go
- /// beyond the allocation that the pointer points into. It is up to the caller to ensure that
- /// the returned offset is correct in all terms other than alignment.
- ///
- /// # Panics
- ///
- /// The function panics if `align` is not a power-of-two.
- ///
- /// # Examples
- ///
- /// Accessing adjacent `u8` as `u16`
- ///
- /// ```
- /// # fn foo(n: usize) {
- /// # use std::mem::align_of;
- /// # unsafe {
- /// let x = [5u8, 6u8, 7u8, 8u8, 9u8];
- /// let ptr = &x[n] as *const u8;
- /// let offset = ptr.align_offset(align_of::<u16>());
- /// if offset < x.len() - n - 1 {
- /// let u16_ptr = ptr.add(offset) as *const u16;
- /// assert_ne!(*u16_ptr, 500);
- /// } else {
- /// // while the pointer can be aligned via `offset`, it would point
- /// // outside the allocation
- /// }
- /// # } }
- /// ```
- #[stable(feature = "align_offset", since = "1.36.0")]
- pub fn align_offset(self, align: usize) -> usize where T: Sized {
- if !align.is_power_of_two() {
- panic!("align_offset: align is not a power-of-two");
- }
- unsafe {
- align_offset(self, align)
- }
- }
-}
-
-
-#[lang = "mut_ptr"]
-impl<T: ?Sized> *mut T {
- /// Returns `true` if the pointer is null.
- ///
- /// Note that unsized types have many possible null pointers, as only the
- /// raw data pointer is considered, not their length, vtable, etc.
- /// Therefore, two pointers that are null may still not compare equal to
- /// each other.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let mut s = [1, 2, 3];
- /// let ptr: *mut u32 = s.as_mut_ptr();
- /// assert!(!ptr.is_null());
- /// ```
- #[stable(feature = "rust1", since = "1.0.0")]
- #[inline]
- pub fn is_null(self) -> bool {
- // Compare via a cast to a thin pointer, so that only the "data" part
- // of a fat pointer is considered for null-ness.
- (self as *mut u8) == null_mut()
- }
-
- /// Cast to a pointer to a different type
- #[unstable(feature = "ptr_cast", issue = "60602")]
- #[inline]
- pub const fn cast<U>(self) -> *mut U {
- self as _
- }
-
- /// Returns `None` if the pointer is null, or else returns a reference to
- /// the value wrapped in `Some`.
- ///
- /// # Safety
- ///
- /// While this method and its mutable counterpart are useful for
- /// null-safety, it is important to note that this is still an unsafe
- /// operation because the returned value could be pointing to invalid
- /// memory.
- ///
- /// Additionally, the lifetime `'a` returned is arbitrarily chosen and does
- /// not necessarily reflect the actual lifetime of the data.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let ptr: *mut u8 = &mut 10u8 as *mut u8;
- ///
- /// unsafe {
- /// if let Some(val_back) = ptr.as_ref() {
- /// println!("We got back the value: {}!", val_back);
- /// }
- /// }
- /// ```
- ///
- /// # Null-unchecked version
- ///
- /// If you are sure the pointer can never be null and are looking for some kind of
- /// `as_ref_unchecked` that returns the `&T` instead of `Option<&T>`, know that you can
- /// dereference the pointer directly.
- ///
- /// ```
- /// let ptr: *mut u8 = &mut 10u8 as *mut u8;
- ///
- /// unsafe {
- /// let val_back = &*ptr;
- /// println!("We got back the value: {}!", val_back);
- /// }
- /// ```
- #[stable(feature = "ptr_as_ref", since = "1.9.0")]
- #[inline]
- pub unsafe fn as_ref<'a>(self) -> Option<&'a T> {
- if self.is_null() {
- None
- } else {
- Some(&*self)
- }
- }
-
- /// Calculates the offset from a pointer.
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and resulting pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The computed offset, **in bytes**, cannot overflow an `isize`.
- ///
- /// * The offset being in bounds cannot rely on "wrapping around" the address
- /// space. That is, the infinite-precision sum, **in bytes**, must fit in a `usize`.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().add(vec.len())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using `wrapping_offset` instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let mut s = [1, 2, 3];
- /// let ptr: *mut u32 = s.as_mut_ptr();
- ///
- /// unsafe {
- /// println!("{}", *ptr.offset(1));
- /// println!("{}", *ptr.offset(2));
- /// }
- /// ```
- #[stable(feature = "rust1", since = "1.0.0")]
- #[inline]
- pub unsafe fn offset(self, count: isize) -> *mut T where T: Sized {
- intrinsics::offset(self, count) as *mut T
- }
-
- /// Calculates the offset from a pointer using wrapping arithmetic.
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// The resulting pointer does not need to be in bounds, but it is
- /// potentially hazardous to dereference (which requires `unsafe`).
- /// In particular, the resulting pointer may *not* be used to access a
- /// different allocated object than the one `self` points to. In other
- /// words, `x.wrapping_offset(y.wrapping_offset_from(x))` is
- /// *not* the same as `y`, and dereferencing it is undefined behavior
- /// unless `x` and `y` point into the same allocated object.
- ///
- /// Always use `.offset(count)` instead when possible, because `offset`
- /// allows the compiler to optimize better. If you need to cross object
- /// boundaries, cast the pointer to an integer and do the arithmetic there.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// // Iterate using a raw pointer in increments of two elements
- /// let mut data = [1u8, 2, 3, 4, 5];
- /// let mut ptr: *mut u8 = data.as_mut_ptr();
- /// let step = 2;
- /// let end_rounded_up = ptr.wrapping_offset(6);
- ///
- /// while ptr != end_rounded_up {
- /// unsafe {
- /// *ptr = 0;
- /// }
- /// ptr = ptr.wrapping_offset(step);
- /// }
- /// assert_eq!(&data, &[0, 2, 0, 4, 0]);
- /// ```
- #[stable(feature = "ptr_wrapping_offset", since = "1.16.0")]
- #[inline]
- pub fn wrapping_offset(self, count: isize) -> *mut T where T: Sized {
- unsafe {
- intrinsics::arith_offset(self, count) as *mut T
- }
- }
-
- /// Returns `None` if the pointer is null, or else returns a mutable
- /// reference to the value wrapped in `Some`.
- ///
- /// # Safety
- ///
- /// As with `as_ref`, this is unsafe because it cannot verify the validity
- /// of the returned pointer, nor can it ensure that the lifetime `'a`
- /// returned is indeed a valid lifetime for the contained data.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let mut s = [1, 2, 3];
- /// let ptr: *mut u32 = s.as_mut_ptr();
- /// let first_value = unsafe { ptr.as_mut().unwrap() };
- /// *first_value = 4;
- /// println!("{:?}", s); // It'll print: "[4, 2, 3]".
- /// ```
- #[stable(feature = "ptr_as_ref", since = "1.9.0")]
- #[inline]
- pub unsafe fn as_mut<'a>(self) -> Option<&'a mut T> {
- if self.is_null() {
- None
- } else {
- Some(&mut *self)
- }
- }
-
- /// Calculates the distance between two pointers. The returned value is in
- /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
- ///
- /// This function is the inverse of [`offset`].
- ///
- /// [`offset`]: #method.offset-1
- /// [`wrapping_offset_from`]: #method.wrapping_offset_from-1
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and other pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The distance between the pointers, **in bytes**, cannot overflow an `isize`.
- ///
- /// * The distance between the pointers, in bytes, must be an exact multiple
- /// of the size of `T`.
- ///
- /// * The distance being in bounds cannot rely on "wrapping around" the address space.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `ptr_into_vec.offset_from(vec.as_ptr())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using [`wrapping_offset_from`] instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Panics
- ///
- /// This function panics if `T` is a Zero-Sized Type ("ZST").
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// #![feature(ptr_offset_from)]
- ///
- /// let mut a = [0; 5];
- /// let ptr1: *mut i32 = &mut a[1];
- /// let ptr2: *mut i32 = &mut a[3];
- /// unsafe {
- /// assert_eq!(ptr2.offset_from(ptr1), 2);
- /// assert_eq!(ptr1.offset_from(ptr2), -2);
- /// assert_eq!(ptr1.offset(2), ptr2);
- /// assert_eq!(ptr2.offset(-2), ptr1);
- /// }
- /// ```
- #[unstable(feature = "ptr_offset_from", issue = "41079")]
- #[inline]
- pub unsafe fn offset_from(self, origin: *const T) -> isize where T: Sized {
- (self as *const T).offset_from(origin)
- }
-
- /// Calculates the distance between two pointers. The returned value is in
- /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
- ///
- /// If the address difference between the two pointers is not a multiple of
- /// `mem::size_of::<T>()` then the result of the division is rounded towards
- /// zero.
- ///
- /// Though this method is safe for any two pointers, note that its result
- /// will be mostly useless if the two pointers aren't into the same allocated
- /// object, for example if they point to two different local variables.
- ///
- /// # Panics
- ///
- /// This function panics if `T` is a zero-sized type.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// #![feature(ptr_wrapping_offset_from)]
- ///
- /// let mut a = [0; 5];
- /// let ptr1: *mut i32 = &mut a[1];
- /// let ptr2: *mut i32 = &mut a[3];
- /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
- /// assert_eq!(ptr1.wrapping_offset_from(ptr2), -2);
- /// assert_eq!(ptr1.wrapping_offset(2), ptr2);
- /// assert_eq!(ptr2.wrapping_offset(-2), ptr1);
- ///
- /// let ptr1: *mut i32 = 3 as _;
- /// let ptr2: *mut i32 = 13 as _;
- /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
- /// ```
- #[unstable(feature = "ptr_wrapping_offset_from", issue = "41079")]
- #[inline]
- pub fn wrapping_offset_from(self, origin: *const T) -> isize where T: Sized {
- (self as *const T).wrapping_offset_from(origin)
- }
-
- /// Calculates the offset from a pointer (convenience for `.offset(count as isize)`).
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and resulting pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The computed offset, **in bytes**, cannot overflow an `isize`.
- ///
- /// * The offset being in bounds cannot rely on "wrapping around" the address
- /// space. That is, the infinite-precision sum must fit in a `usize`.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().add(vec.len())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using `wrapping_offset` instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let s: &str = "123";
- /// let ptr: *const u8 = s.as_ptr();
- ///
- /// unsafe {
- /// println!("{}", *ptr.add(1) as char);
- /// println!("{}", *ptr.add(2) as char);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn add(self, count: usize) -> Self
- where T: Sized,
- {
- self.offset(count as isize)
- }
-
- /// Calculates the offset from a pointer (convenience for
- /// `.offset((count as isize).wrapping_neg())`).
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// If any of the following conditions are violated, the result is Undefined
- /// Behavior:
- ///
- /// * Both the starting and resulting pointer must be either in bounds or one
- /// byte past the end of the same allocated object.
- ///
- /// * The computed offset cannot exceed `isize::MAX` **bytes**.
- ///
- /// * The offset being in bounds cannot rely on "wrapping around" the address
- /// space. That is, the infinite-precision sum must fit in a `usize`.
- ///
- /// The compiler and standard library generally try to ensure allocations
- /// never reach a size where an offset is a concern. For instance, `Vec`
- /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().add(vec.len()).sub(vec.len())` is always safe.
- ///
- /// Most platforms fundamentally can't even construct such an allocation.
- /// For instance, no known 64-bit platform can ever serve a request
- /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
- /// However, some 32-bit and 16-bit platforms may successfully serve a request for
- /// more than `isize::MAX` bytes with things like Physical Address
- /// Extension. As such, memory acquired directly from allocators or memory
- /// mapped files *may* be too large to handle with this function.
- ///
- /// Consider using `wrapping_offset` instead if these constraints are
- /// difficult to satisfy. The only advantage of this method is that it
- /// enables more aggressive compiler optimizations.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// let s: &str = "123";
- ///
- /// unsafe {
- /// let end: *const u8 = s.as_ptr().add(3);
- /// println!("{}", *end.sub(1) as char);
- /// println!("{}", *end.sub(2) as char);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn sub(self, count: usize) -> Self
- where T: Sized,
- {
- self.offset((count as isize).wrapping_neg())
- }
-
- /// Calculates the offset from a pointer using wrapping arithmetic.
- /// (convenience for `.wrapping_offset(count as isize)`)
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// The resulting pointer does not need to be in bounds, but it is
- /// potentially hazardous to dereference (which requires `unsafe`).
- ///
- /// Always use `.add(count)` instead when possible, because `add`
- /// allows the compiler to optimize better.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// // Iterate using a raw pointer in increments of two elements
- /// let data = [1u8, 2, 3, 4, 5];
- /// let mut ptr: *const u8 = data.as_ptr();
- /// let step = 2;
- /// let end_rounded_up = ptr.wrapping_add(6);
- ///
- /// // This loop prints "1, 3, 5, "
- /// while ptr != end_rounded_up {
- /// unsafe {
- /// print!("{}, ", *ptr);
- /// }
- /// ptr = ptr.wrapping_add(step);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub fn wrapping_add(self, count: usize) -> Self
- where T: Sized,
- {
- self.wrapping_offset(count as isize)
- }
-
- /// Calculates the offset from a pointer using wrapping arithmetic.
- /// (convenience for `.wrapping_offset((count as isize).wrapping_neg())`)
- ///
- /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
- /// offset of `3 * size_of::<T>()` bytes.
- ///
- /// # Safety
- ///
- /// The resulting pointer does not need to be in bounds, but it is
- /// potentially hazardous to dereference (which requires `unsafe`).
- ///
- /// Always use `.sub(count)` instead when possible, because `sub`
- /// allows the compiler to optimize better.
- ///
- /// # Examples
- ///
- /// Basic usage:
- ///
- /// ```
- /// // Iterate using a raw pointer in increments of two elements (backwards)
- /// let data = [1u8, 2, 3, 4, 5];
- /// let mut ptr: *const u8 = data.as_ptr();
- /// let start_rounded_down = ptr.wrapping_sub(2);
- /// ptr = ptr.wrapping_add(4);
- /// let step = 2;
- /// // This loop prints "5, 3, 1, "
- /// while ptr != start_rounded_down {
- /// unsafe {
- /// print!("{}, ", *ptr);
- /// }
- /// ptr = ptr.wrapping_sub(step);
- /// }
- /// ```
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub fn wrapping_sub(self, count: usize) -> Self
- where T: Sized,
- {
- self.wrapping_offset((count as isize).wrapping_neg())
- }
-
- /// Reads the value from `self` without moving it. This leaves the
- /// memory in `self` unchanged.
- ///
- /// See [`ptr::read`] for safety concerns and examples.
- ///
- /// [`ptr::read`]: ./ptr/fn.read.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn read(self) -> T
- where T: Sized,
- {
- read(self)
- }
-
- /// Performs a volatile read of the value from `self` without moving it. This
- /// leaves the memory in `self` unchanged.
- ///
- /// Volatile operations are intended to act on I/O memory, and are guaranteed
- /// to not be elided or reordered by the compiler across other volatile
- /// operations.
- ///
- /// See [`ptr::read_volatile`] for safety concerns and examples.
- ///
- /// [`ptr::read_volatile`]: ./ptr/fn.read_volatile.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn read_volatile(self) -> T
- where T: Sized,
- {
- read_volatile(self)
- }
-
- /// Reads the value from `self` without moving it. This leaves the
- /// memory in `self` unchanged.
- ///
- /// Unlike `read`, the pointer may be unaligned.
- ///
- /// See [`ptr::read_unaligned`] for safety concerns and examples.
- ///
- /// [`ptr::read_unaligned`]: ./ptr/fn.read_unaligned.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn read_unaligned(self) -> T
- where T: Sized,
- {
- read_unaligned(self)
- }
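The three `read` variants above delegate to the free functions in `ptr`; as a quick standalone sketch (illustrative only, not part of this patch), the difference between `read` and `read_unaligned` is just the alignment requirement:

```rust
fn main() {
    let x: u32 = 0x0102_0304;
    let p: *const u32 = &x;

    // `p` comes from a reference, so it is properly aligned: `read` is fine.
    let v = unsafe { p.read() };
    assert_eq!(v, 0x0102_0304);

    // A pointer into a byte buffer at an odd offset may be misaligned for
    // u32, so only `read_unaligned` is sound here.
    let bytes = [0u8, 1, 2, 3, 4];
    let q = unsafe { bytes.as_ptr().add(1) } as *const u32;
    let w = unsafe { q.read_unaligned() };
    assert_eq!(w, u32::from_ne_bytes([1, 2, 3, 4]));
}
```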
-
- /// Copies `count * size_of<T>` bytes from `self` to `dest`. The source
- /// and destination may overlap.
- ///
- /// NOTE: this has the *same* argument order as [`ptr::copy`].
- ///
- /// See [`ptr::copy`] for safety concerns and examples.
- ///
- /// [`ptr::copy`]: ./ptr/fn.copy.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn copy_to(self, dest: *mut T, count: usize)
- where T: Sized,
- {
- copy(self, dest, count)
- }
-
- /// Copies `count * size_of<T>` bytes from `self` to `dest`. The source
- /// and destination may *not* overlap.
- ///
- /// NOTE: this has the *same* argument order as [`ptr::copy_nonoverlapping`].
- ///
- /// See [`ptr::copy_nonoverlapping`] for safety concerns and examples.
- ///
- /// [`ptr::copy_nonoverlapping`]: ./ptr/fn.copy_nonoverlapping.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)
- where T: Sized,
- {
- copy_nonoverlapping(self, dest, count)
- }
-
- /// Copies `count * size_of<T>` bytes from `src` to `self`. The source
- /// and destination may overlap.
- ///
- /// NOTE: this has the *opposite* argument order of [`ptr::copy`].
- ///
- /// See [`ptr::copy`] for safety concerns and examples.
- ///
- /// [`ptr::copy`]: ./ptr/fn.copy.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn copy_from(self, src: *const T, count: usize)
- where T: Sized,
- {
- copy(src, self, count)
- }
-
- /// Copies `count * size_of<T>` bytes from `src` to `self`. The source
- /// and destination may *not* overlap.
- ///
- /// NOTE: this has the *opposite* argument order of [`ptr::copy_nonoverlapping`].
- ///
- /// See [`ptr::copy_nonoverlapping`] for safety concerns and examples.
- ///
- /// [`ptr::copy_nonoverlapping`]: ./ptr/fn.copy_nonoverlapping.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn copy_from_nonoverlapping(self, src: *const T, count: usize)
- where T: Sized,
- {
- copy_nonoverlapping(src, self, count)
- }
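The NOTE lines above about argument order are easy to trip over; a small sketch (illustrative only) showing that `copy_to` and `copy_from` are the same `ptr::copy` viewed from either end:

```rust
fn main() {
    let src = [10u8, 20, 30];
    let mut a = [0u8; 3];
    let mut b = [0u8; 3];

    unsafe {
        // `self` is the source: same argument order as ptr::copy(src, dst, n).
        src.as_ptr().copy_to(a.as_mut_ptr(), 3);
        // `self` is the destination: opposite argument order.
        b.as_mut_ptr().copy_from(src.as_ptr(), 3);
    }
    assert_eq!(a, [10, 20, 30]);
    assert_eq!(b, [10, 20, 30]);
}
```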
-
- /// Executes the destructor (if any) of the pointed-to value.
- ///
- /// See [`ptr::drop_in_place`] for safety concerns and examples.
- ///
- /// [`ptr::drop_in_place`]: ./ptr/fn.drop_in_place.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn drop_in_place(self) {
- drop_in_place(self)
- }
-
- /// Overwrites a memory location with the given value without reading or
- /// dropping the old value.
- ///
- /// See [`ptr::write`] for safety concerns and examples.
- ///
- /// [`ptr::write`]: ./ptr/fn.write.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn write(self, val: T)
- where T: Sized,
- {
- write(self, val)
- }
-
- /// Invokes memset on the specified pointer, setting `count * size_of::<T>()`
- /// bytes of memory starting at `self` to `val`.
- ///
- /// See [`ptr::write_bytes`] for safety concerns and examples.
- ///
- /// [`ptr::write_bytes`]: ./ptr/fn.write_bytes.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn write_bytes(self, val: u8, count: usize)
- where T: Sized,
- {
- write_bytes(self, val, count)
- }
-
- /// Performs a volatile write of a memory location with the given value without
- /// reading or dropping the old value.
- ///
- /// Volatile operations are intended to act on I/O memory, and are guaranteed
- /// to not be elided or reordered by the compiler across other volatile
- /// operations.
- ///
- /// See [`ptr::write_volatile`] for safety concerns and examples.
- ///
- /// [`ptr::write_volatile`]: ./ptr/fn.write_volatile.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn write_volatile(self, val: T)
- where T: Sized,
- {
- write_volatile(self, val)
- }
-
- /// Overwrites a memory location with the given value without reading or
- /// dropping the old value.
- ///
- /// Unlike `write`, the pointer may be unaligned.
- ///
- /// See [`ptr::write_unaligned`] for safety concerns and examples.
- ///
- /// [`ptr::write_unaligned`]: ./ptr/fn.write_unaligned.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn write_unaligned(self, val: T)
- where T: Sized,
- {
- write_unaligned(self, val)
- }
-
- /// Replaces the value at `self` with `src`, returning the old
- /// value, without dropping either.
- ///
- /// See [`ptr::replace`] for safety concerns and examples.
- ///
- /// [`ptr::replace`]: ./ptr/fn.replace.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn replace(self, src: T) -> T
- where T: Sized,
- {
- replace(self, src)
- }
-
- /// Swaps the values at two mutable locations of the same type, without
- /// deinitializing either. They may overlap, unlike `mem::swap` which is
- /// otherwise equivalent.
- ///
- /// See [`ptr::swap`] for safety concerns and examples.
- ///
- /// [`ptr::swap`]: ./ptr/fn.swap.html
- #[stable(feature = "pointer_methods", since = "1.26.0")]
- #[inline]
- pub unsafe fn swap(self, with: *mut T)
- where T: Sized,
- {
- swap(self, with)
- }
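`replace` and `swap` above are the raw-pointer counterparts of `mem::replace` and `mem::swap`; a minimal sketch (not part of this patch):

```rust
fn main() {
    let mut a = 1;
    let mut b = 2;
    let pa: *mut i32 = &mut a;
    let pb: *mut i32 = &mut b;

    // Swap through raw pointers; unlike mem::swap, the two locations
    // are allowed to overlap.
    unsafe { pa.swap(pb) };
    assert_eq!((a, b), (2, 1));

    // replace returns the old value without dropping it.
    let mut v = 5;
    let old = unsafe { (&mut v as *mut i32).replace(7) };
    assert_eq!((old, v), (5, 7));
}
```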
-
- /// Computes the offset that needs to be applied to the pointer in order to make it aligned to
- /// `align`.
- ///
- /// If it is not possible to align the pointer, the implementation returns
- /// `usize::max_value()`.
- ///
- /// The offset is expressed in number of `T` elements, and not bytes. The value returned can be
- /// used with the `offset` or `offset_to` methods.
- ///
- /// There are no guarantees whatsoever that offsetting the pointer will not overflow or go
- /// beyond the allocation that the pointer points into. It is up to the caller to ensure that
- /// the returned offset is correct in all terms other than alignment.
- ///
- /// # Panics
- ///
- /// The function panics if `align` is not a power-of-two.
- ///
- /// # Examples
- ///
- /// Accessing adjacent `u8` as `u16`
- ///
- /// ```
- /// # fn foo(n: usize) {
- /// # use std::mem::align_of;
- /// # unsafe {
- /// let x = [5u8, 6u8, 7u8, 8u8, 9u8];
- /// let ptr = &x[n] as *const u8;
- /// let offset = ptr.align_offset(align_of::<u16>());
- /// if offset < x.len() - n - 1 {
- /// let u16_ptr = ptr.add(offset) as *const u16;
- /// assert_ne!(*u16_ptr, 500);
- /// } else {
- /// // while the pointer can be aligned via `offset`, it would point
- /// // outside the allocation
- /// }
- /// # } }
- /// ```
- #[stable(feature = "align_offset", since = "1.36.0")]
- pub fn align_offset(self, align: usize) -> usize where T: Sized {
- if !align.is_power_of_two() {
- panic!("align_offset: align is not a power-of-two");
- }
- unsafe {
- align_offset(self, align)
- }
- }
-}
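The `align_offset` contract above (power-of-two `align`, `usize::max_value()` when alignment is impossible) can be exercised with a small standalone sketch (illustrative only, not part of this patch):

```rust
use std::mem::align_of;

fn main() {
    let buf = [0u8; 16];
    // Probe several byte offsets into the buffer.
    for i in 0..8 {
        let p = buf.as_ptr().wrapping_add(i);
        let off = p.align_offset(align_of::<u32>());
        // Alignment is only promised when the "cannot be aligned"
        // sentinel (usize::MAX) is not returned.
        if off != usize::MAX {
            assert_eq!((p as usize).wrapping_add(off) % align_of::<u32>(), 0);
        }
    }
}
```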
-
-/// Align pointer `p`.
-///
-/// Calculate offset (in terms of elements of `stride` stride) that has to be applied
-/// to pointer `p` so that pointer `p` would get aligned to `a`.
-///
-/// Note: This implementation has been carefully tailored to not panic. It is UB for this to panic.
-/// The only real change that can be made here is change of `INV_TABLE_MOD_16` and associated
-/// constants.
-///
-/// If we ever decide to make it possible to call the intrinsic with `a` that is not a
-/// power-of-two, it will probably be more prudent to just change to a naive implementation rather
-/// than trying to adapt this to accommodate that change.
-///
-/// Any questions go to @nagisa.
-#[lang="align_offset"]
-pub(crate) unsafe fn align_offset<T: Sized>(p: *const T, a: usize) -> usize {
- /// Calculate multiplicative modular inverse of `x` modulo `m`.
- ///
- /// This implementation is tailored for align_offset and has following preconditions:
- ///
- /// * `m` is a power-of-two;
- /// * `x < m`; (if `x ≥ m`, pass in `x % m` instead)
- ///
- /// Implementation of this function shall not panic. Ever.
- #[inline]
- fn mod_inv(x: usize, m: usize) -> usize {
- /// Multiplicative modular inverse table modulo 2⁴ = 16.
- ///
- /// Note, that this table does not contain values where inverse does not exist (i.e., for
- /// `0⁻¹ mod 16`, `2⁻¹ mod 16`, etc.)
- const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
- /// Modulo for which the `INV_TABLE_MOD_16` is intended.
- const INV_TABLE_MOD: usize = 16;
- /// INV_TABLE_MOD²
- const INV_TABLE_MOD_SQUARED: usize = INV_TABLE_MOD * INV_TABLE_MOD;
-
- let table_inverse = INV_TABLE_MOD_16[(x & (INV_TABLE_MOD - 1)) >> 1] as usize;
- if m <= INV_TABLE_MOD {
- table_inverse & (m - 1)
- } else {
- // We iterate "up" using the following formula:
- //
- // $$ xy ≡ 1 (mod 2ⁿ) → xy (2 - xy) ≡ 1 (mod 2²ⁿ) $$
- //
- // until 2²ⁿ ≥ m. Then we can reduce to our desired `m` by taking the result `mod m`.
- let mut inverse = table_inverse;
- let mut going_mod = INV_TABLE_MOD_SQUARED;
- loop {
- // y = y * (2 - xy) mod n
- //
- // Note, that we use wrapping operations here intentionally – the original formula
- // uses e.g., subtraction `mod n`. It is entirely fine to do them `mod
- // usize::max_value()` instead, because we take the result `mod n` at the end
- // anyway.
- inverse = inverse.wrapping_mul(
- 2usize.wrapping_sub(x.wrapping_mul(inverse))
- ) & (going_mod - 1);
- if going_mod > m {
- return inverse & (m - 1);
- }
- going_mod = going_mod.wrapping_mul(going_mod);
- }
- }
- }
-
- let stride = mem::size_of::<T>();
- let a_minus_one = a.wrapping_sub(1);
- let pmoda = p as usize & a_minus_one;
-
- if pmoda == 0 {
- // Already aligned. Yay!
- return 0;
- }
-
- if stride <= 1 {
- return if stride == 0 {
- // If the pointer is not aligned, and the element is zero-sized, then no amount of
- // elements will ever align the pointer.
- !0
- } else {
- a.wrapping_sub(pmoda)
- };
- }
-
- let smoda = stride & a_minus_one;
- // a is power-of-two so cannot be 0. stride = 0 is handled above.
- let gcdpow = intrinsics::cttz_nonzero(stride).min(intrinsics::cttz_nonzero(a));
- let gcd = 1usize << gcdpow;
-
- if p as usize & (gcd - 1) == 0 {
- // This branch solves for the following linear congruence equation:
- //
- // $$ p + so ≡ 0 mod a $$
- //
- // $p$ here is the pointer value, $s$ – stride of `T`, $o$ offset in `T`s, and $a$ – the
- // requested alignment.
- //
- // g = gcd(a, s)
- // o = (a - (p mod a))/g * ((s/g)⁻¹ mod a)
- //
- // The first term is “the relative alignment of p to a”, the second term is “how does
- // incrementing p by s bytes change the relative alignment of p”. Division by `g` is
- // necessary to make this equation well formed if $a$ and $s$ are not co-prime.
- //
- // Furthermore, the result produced by this solution is not “minimal”, so it is necessary
- // to take the result $o mod lcm(s, a)$. We can replace $lcm(s, a)$ with just $a / g$.
- let j = a.wrapping_sub(pmoda) >> gcdpow;
- let k = smoda >> gcdpow;
- return intrinsics::unchecked_rem(j.wrapping_mul(mod_inv(k, a)), a >> gcdpow);
- }
-
- // Cannot be aligned at all.
- usize::max_value()
-}
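The table-plus-lifting scheme inside `mod_inv` can be checked independently. Below is a standalone sketch (the name `mod_inv_pow2` is mine, not the library's) of the same Hensel lift: if `x * y ≡ 1 (mod 2ⁿ)`, then `x * (y * (2 - x*y)) ≡ 1 (mod 2²ⁿ)`, so each iteration doubles the number of correct low bits:

```rust
/// Inverse of odd `x` modulo power-of-two `m` (sketch of the scheme above).
fn mod_inv_pow2(x: usize, m: usize) -> usize {
    assert!(m.is_power_of_two() && x % 2 == 1);
    // Same seed table as INV_TABLE_MOD_16 in the patch: inverses mod 16
    // of the odd residues 1, 3, 5, ..., 15.
    const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
    let mut inv = INV_TABLE_MOD_16[(x & 15) >> 1] as usize;
    let mut valid_mod: usize = 16; // `inv` is currently correct modulo this
    while valid_mod < m {
        // Hensel lift: correct mod 2^n  ->  correct mod 2^(2n).
        inv = inv.wrapping_mul(2usize.wrapping_sub(x.wrapping_mul(inv)));
        // Assumes the square does not wrap, which holds for the small
        // moduli exercised here.
        valid_mod = valid_mod.wrapping_mul(valid_mod);
    }
    inv & (m - 1)
}

fn main() {
    // 3⁻¹ mod 256 = 171, since 3 * 171 = 513 = 2 * 256 + 1.
    assert_eq!(mod_inv_pow2(3, 256), 171);
    for &x in &[1usize, 3, 5, 7, 9, 251] {
        assert_eq!(x.wrapping_mul(mod_inv_pow2(x, 256)) % 256, 1);
    }
}
```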
-
-
-
-// Equality for pointers
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> PartialEq for *const T {
- #[inline]
- fn eq(&self, other: &*const T) -> bool { *self == *other }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> Eq for *const T {}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> PartialEq for *mut T {
- #[inline]
- fn eq(&self, other: &*mut T) -> bool { *self == *other }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> Eq for *mut T {}
-
-/// Compares raw pointers for equality.
-///
-/// This is the same as using the `==` operator, but less generic:
-/// the arguments have to be `*const T` raw pointers,
-/// not anything that implements `PartialEq`.
-///
-/// This can be used to compare `&T` references (which coerce to `*const T` implicitly)
-/// by their address rather than comparing the values they point to
-/// (which is what the `PartialEq for &T` implementation does).
-///
-/// # Examples
-///
-/// ```
-/// use std::ptr;
-///
-/// let five = 5;
-/// let other_five = 5;
-/// let five_ref = &five;
-/// let same_five_ref = &five;
-/// let other_five_ref = &other_five;
-///
-/// assert!(five_ref == same_five_ref);
-/// assert!(ptr::eq(five_ref, same_five_ref));
-///
-/// assert!(five_ref == other_five_ref);
-/// assert!(!ptr::eq(five_ref, other_five_ref));
-/// ```
-///
-/// Slices are also compared by their length (fat pointers):
-///
-/// ```
-/// let a = [1, 2, 3];
-/// assert!(std::ptr::eq(&a[..3], &a[..3]));
-/// assert!(!std::ptr::eq(&a[..2], &a[..3]));
-/// assert!(!std::ptr::eq(&a[0..2], &a[1..3]));
-/// ```
-///
-/// Traits are also compared by their implementation:
-///
-/// ```
-/// #[repr(transparent)]
-/// struct Wrapper { member: i32 }
-///
-/// trait Trait {}
-/// impl Trait for Wrapper {}
-/// impl Trait for i32 {}
-///
-/// fn main() {
-/// let wrapper = Wrapper { member: 10 };
-///
-/// // Pointers have equal addresses.
-/// assert!(std::ptr::eq(
-/// &wrapper as *const Wrapper as *const u8,
-/// &wrapper.member as *const i32 as *const u8
-/// ));
-///
-/// // Objects have equal addresses, but `Trait` has different implementations.
-/// assert!(!std::ptr::eq(
-/// &wrapper as &dyn Trait,
-/// &wrapper.member as &dyn Trait,
-/// ));
-/// assert!(!std::ptr::eq(
-/// &wrapper as &dyn Trait as *const dyn Trait,
-/// &wrapper.member as &dyn Trait as *const dyn Trait,
-/// ));
-///
-/// // Converting the reference to a `*const u8` compares by address.
-/// assert!(std::ptr::eq(
-/// &wrapper as &dyn Trait as *const dyn Trait as *const u8,
-/// &wrapper.member as &dyn Trait as *const dyn Trait as *const u8,
-/// ));
-/// }
-/// ```
-#[stable(feature = "ptr_eq", since = "1.17.0")]
-#[inline]
-pub fn eq<T: ?Sized>(a: *const T, b: *const T) -> bool {
- a == b
-}
-
-/// Hash a raw pointer.
-///
-/// This can be used to hash a `&T` reference (which coerces to `*const T` implicitly)
-/// by its address rather than the value it points to
-/// (which is what the `Hash for &T` implementation does).
-///
-/// # Examples
-///
-/// ```
-/// use std::collections::hash_map::DefaultHasher;
-/// use std::hash::{Hash, Hasher};
-/// use std::ptr;
-///
-/// let five = 5;
-/// let five_ref = &five;
-///
-/// let mut hasher = DefaultHasher::new();
-/// ptr::hash(five_ref, &mut hasher);
-/// let actual = hasher.finish();
-///
-/// let mut hasher = DefaultHasher::new();
-/// (five_ref as *const i32).hash(&mut hasher);
-/// let expected = hasher.finish();
-///
-/// assert_eq!(actual, expected);
-/// ```
-#[stable(feature = "ptr_hash", since = "1.35.0")]
-pub fn hash<T: ?Sized, S: hash::Hasher>(hashee: *const T, into: &mut S) {
- use crate::hash::Hash;
- hashee.hash(into);
-}
-
-// Impls for function pointers
-macro_rules! fnptr_impls_safety_abi {
- ($FnTy: ty, $($Arg: ident),*) => {
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> PartialEq for $FnTy {
- #[inline]
- fn eq(&self, other: &Self) -> bool {
- *self as usize == *other as usize
- }
- }
-
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> Eq for $FnTy {}
-
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> PartialOrd for $FnTy {
- #[inline]
- fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
- (*self as usize).partial_cmp(&(*other as usize))
- }
- }
-
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> Ord for $FnTy {
- #[inline]
- fn cmp(&self, other: &Self) -> Ordering {
- (*self as usize).cmp(&(*other as usize))
- }
- }
-
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> hash::Hash for $FnTy {
- fn hash<HH: hash::Hasher>(&self, state: &mut HH) {
- state.write_usize(*self as usize)
- }
- }
-
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> fmt::Pointer for $FnTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Pointer::fmt(&(*self as *const ()), f)
- }
- }
-
- #[stable(feature = "fnptr_impls", since = "1.4.0")]
- impl<Ret, $($Arg),*> fmt::Debug for $FnTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Pointer::fmt(&(*self as *const ()), f)
- }
- }
- }
-}
-
-macro_rules! fnptr_impls_args {
- ($($Arg: ident),+) => {
- fnptr_impls_safety_abi! { extern "Rust" fn($($Arg),*) -> Ret, $($Arg),* }
- fnptr_impls_safety_abi! { extern "C" fn($($Arg),*) -> Ret, $($Arg),* }
- fnptr_impls_safety_abi! { extern "C" fn($($Arg),* , ...) -> Ret, $($Arg),* }
- fnptr_impls_safety_abi! { unsafe extern "Rust" fn($($Arg),*) -> Ret, $($Arg),* }
- fnptr_impls_safety_abi! { unsafe extern "C" fn($($Arg),*) -> Ret, $($Arg),* }
- fnptr_impls_safety_abi! { unsafe extern "C" fn($($Arg),* , ...) -> Ret, $($Arg),* }
- };
- () => {
- // No variadic functions with 0 parameters
- fnptr_impls_safety_abi! { extern "Rust" fn() -> Ret, }
- fnptr_impls_safety_abi! { extern "C" fn() -> Ret, }
- fnptr_impls_safety_abi! { unsafe extern "Rust" fn() -> Ret, }
- fnptr_impls_safety_abi! { unsafe extern "C" fn() -> Ret, }
- };
-}
-
-fnptr_impls_args! { }
-fnptr_impls_args! { A }
-fnptr_impls_args! { A, B }
-fnptr_impls_args! { A, B, C }
-fnptr_impls_args! { A, B, C, D }
-fnptr_impls_args! { A, B, C, D, E }
-fnptr_impls_args! { A, B, C, D, E, F }
-fnptr_impls_args! { A, B, C, D, E, F, G }
-fnptr_impls_args! { A, B, C, D, E, F, G, H }
-fnptr_impls_args! { A, B, C, D, E, F, G, H, I }
-fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J }
-fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J, K }
-fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J, K, L }
-
-// Comparison for pointers
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> Ord for *const T {
- #[inline]
- fn cmp(&self, other: &*const T) -> Ordering {
- if self < other {
- Less
- } else if self == other {
- Equal
- } else {
- Greater
- }
- }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> PartialOrd for *const T {
- #[inline]
- fn partial_cmp(&self, other: &*const T) -> Option<Ordering> {
- Some(self.cmp(other))
- }
-
- #[inline]
- fn lt(&self, other: &*const T) -> bool { *self < *other }
-
- #[inline]
- fn le(&self, other: &*const T) -> bool { *self <= *other }
-
- #[inline]
- fn gt(&self, other: &*const T) -> bool { *self > *other }
-
- #[inline]
- fn ge(&self, other: &*const T) -> bool { *self >= *other }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> Ord for *mut T {
- #[inline]
- fn cmp(&self, other: &*mut T) -> Ordering {
- if self < other {
- Less
- } else if self == other {
- Equal
- } else {
- Greater
- }
- }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<T: ?Sized> PartialOrd for *mut T {
- #[inline]
- fn partial_cmp(&self, other: &*mut T) -> Option<Ordering> {
- Some(self.cmp(other))
- }
-
- #[inline]
- fn lt(&self, other: &*mut T) -> bool { *self < *other }
-
- #[inline]
- fn le(&self, other: &*mut T) -> bool { *self <= *other }
-
- #[inline]
- fn gt(&self, other: &*mut T) -> bool { *self > *other }
-
- #[inline]
- fn ge(&self, other: &*mut T) -> bool { *self >= *other }
-}
-
-/// A wrapper around a raw non-null `*mut T` that indicates that the possessor
-/// of this wrapper owns the referent. Useful for building abstractions like
-/// `Box<T>`, `Vec<T>`, `String`, and `HashMap<K, V>`.
-///
-/// Unlike `*mut T`, `Unique<T>` behaves "as if" it were an instance of `T`.
-/// It implements `Send`/`Sync` if `T` is `Send`/`Sync`. It also implies
-/// the kind of strong aliasing guarantees an instance of `T` can expect:
-/// the referent of the pointer should not be modified without a unique path to
-/// its owning Unique.
-///
-/// If you're uncertain of whether it's correct to use `Unique` for your purposes,
-/// consider using `NonNull`, which has weaker semantics.
-///
-/// Unlike `*mut T`, the pointer must always be non-null, even if the pointer
-/// is never dereferenced. This is so that enums may use this forbidden value
-/// as a discriminant -- `Option<Unique<T>>` has the same size as `Unique<T>`.
-/// However the pointer may still dangle if it isn't dereferenced.
-///
-/// Unlike `*mut T`, `Unique<T>` is covariant over `T`. This should always be correct
-/// for any type which upholds Unique's aliasing requirements.
-#[unstable(feature = "ptr_internals", issue = "0",
- reason = "use NonNull instead and consider PhantomData<T> \
- (if you also use #[may_dangle]), Send, and/or Sync")]
-#[doc(hidden)]
-#[repr(transparent)]
-#[rustc_layout_scalar_valid_range_start(1)]
-pub struct Unique<T: ?Sized> {
- pointer: *const T,
- // NOTE: this marker has no consequences for variance, but is necessary
- // for dropck to understand that we logically own a `T`.
- //
- // For details, see:
- // https://github.com/rust-lang/rfcs/blob/master/text/0769-sound-generic-drop.md#phantom-data
- _marker: PhantomData<T>,
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> fmt::Debug for Unique<T> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Pointer::fmt(&self.as_ptr(), f)
- }
-}
-
-/// `Unique` pointers are `Send` if `T` is `Send` because the data they
-/// reference is unaliased. Note that this aliasing invariant is
-/// unenforced by the type system; the abstraction using the
-/// `Unique` must enforce it.
-#[unstable(feature = "ptr_internals", issue = "0")]
-unsafe impl<T: Send + ?Sized> Send for Unique<T> { }
-
-/// `Unique` pointers are `Sync` if `T` is `Sync` because the data they
-/// reference is unaliased. Note that this aliasing invariant is
-/// unenforced by the type system; the abstraction using the
-/// `Unique` must enforce it.
-#[unstable(feature = "ptr_internals", issue = "0")]
-unsafe impl<T: Sync + ?Sized> Sync for Unique<T> { }
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: Sized> Unique<T> {
- /// Creates a new `Unique` that is dangling, but well-aligned.
- ///
- /// This is useful for initializing types which lazily allocate, like
- /// `Vec::new` does.
- ///
- /// Note that the pointer value may potentially represent a valid pointer to
- /// a `T`, which means this must not be used as a "not yet initialized"
- /// sentinel value. Types that lazily allocate must track initialization by
- /// some other means.
- // FIXME: rename to dangling() to match NonNull?
- pub const fn empty() -> Self {
- unsafe {
- Unique::new_unchecked(mem::align_of::<T>() as *mut T)
- }
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> Unique<T> {
- /// Creates a new `Unique`.
- ///
- /// # Safety
- ///
- /// `ptr` must be non-null.
- pub const unsafe fn new_unchecked(ptr: *mut T) -> Self {
- Unique { pointer: ptr as _, _marker: PhantomData }
- }
-
- /// Creates a new `Unique` if `ptr` is non-null.
- pub fn new(ptr: *mut T) -> Option<Self> {
- if !ptr.is_null() {
- Some(unsafe { Unique { pointer: ptr as _, _marker: PhantomData } })
- } else {
- None
- }
- }
-
- /// Acquires the underlying `*mut` pointer.
- pub const fn as_ptr(self) -> *mut T {
- self.pointer as *mut T
- }
-
- /// Dereferences the content.
- ///
- /// The resulting lifetime is bound to self so this behaves "as if"
- /// it were actually an instance of T that is getting borrowed. If a longer
- /// (unbound) lifetime is needed, use `&*my_ptr.as_ptr()`.
- pub unsafe fn as_ref(&self) -> &T {
- &*self.as_ptr()
- }
-
- /// Mutably dereferences the content.
- ///
- /// The resulting lifetime is bound to self so this behaves "as if"
- /// it were actually an instance of T that is getting borrowed. If a longer
- /// (unbound) lifetime is needed, use `&mut *my_ptr.as_ptr()`.
- pub unsafe fn as_mut(&mut self) -> &mut T {
- &mut *self.as_ptr()
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> Clone for Unique<T> {
- fn clone(&self) -> Self {
- *self
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> Copy for Unique<T> { }
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized, U: ?Sized> CoerceUnsized<Unique<U>> for Unique<T> where T: Unsize<U> { }
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized, U: ?Sized> DispatchFromDyn<Unique<U>> for Unique<T> where T: Unsize<U> { }
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> fmt::Pointer for Unique<T> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Pointer::fmt(&self.as_ptr(), f)
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> From<&mut T> for Unique<T> {
- fn from(reference: &mut T) -> Self {
- unsafe { Unique { pointer: reference as *mut T, _marker: PhantomData } }
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> From<&T> for Unique<T> {
- fn from(reference: &T) -> Self {
- unsafe { Unique { pointer: reference as *const T, _marker: PhantomData } }
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<'a, T: ?Sized> From<NonNull<T>> for Unique<T> {
- fn from(p: NonNull<T>) -> Self {
- unsafe { Unique { pointer: p.pointer, _marker: PhantomData } }
- }
-}
-
-/// `*mut T` but non-zero and covariant.
-///
-/// This is often the correct thing to use when building data structures using
-/// raw pointers, but is ultimately more dangerous to use because of its additional
-/// properties. If you're not sure if you should use `NonNull<T>`, just use `*mut T`!
-///
-/// Unlike `*mut T`, the pointer must always be non-null, even if the pointer
-/// is never dereferenced. This is so that enums may use this forbidden value
-/// as a discriminant -- `Option<NonNull<T>>` has the same size as `*mut T`.
-/// However the pointer may still dangle if it isn't dereferenced.
-///
-/// Unlike `*mut T`, `NonNull<T>` is covariant over `T`. If this is incorrect
-/// for your use case, you should include some [`PhantomData`] in your type to
-/// provide invariance, such as `PhantomData<Cell<T>>` or `PhantomData<&'a mut T>`.
-/// Usually this won't be necessary; covariance is correct for most safe abstractions,
-/// such as `Box`, `Rc`, `Arc`, `Vec`, and `LinkedList`. This is the case because they
-/// provide a public API that follows the normal shared XOR mutable rules of Rust.
-///
-/// Notice that `NonNull<T>` has a `From` instance for `&T`. However, this does
-/// not change the fact that mutating through a (pointer derived from a) shared
-/// reference is undefined behavior unless the mutation happens inside an
-/// [`UnsafeCell<T>`]. The same goes for creating a mutable reference from a shared
-/// reference. When using this `From` instance without an `UnsafeCell<T>`,
-/// it is your responsibility to ensure that `as_mut` is never called, and `as_ptr`
-/// is never used for mutation.
-///
-/// [`PhantomData`]: ../marker/struct.PhantomData.html
-/// [`UnsafeCell<T>`]: ../cell/struct.UnsafeCell.html
-#[stable(feature = "nonnull", since = "1.25.0")]
-#[repr(transparent)]
-#[rustc_layout_scalar_valid_range_start(1)]
-#[cfg_attr(not(stage0), rustc_nonnull_optimization_guaranteed)]
-pub struct NonNull<T: ?Sized> {
- pointer: *const T,
-}
-
-/// `NonNull` pointers are not `Send` because the data they reference may be aliased.
-// N.B., this impl is unnecessary, but should provide better error messages.
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> !Send for NonNull<T> { }
-
-/// `NonNull` pointers are not `Sync` because the data they reference may be aliased.
-// N.B., this impl is unnecessary, but should provide better error messages.
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> !Sync for NonNull<T> { }
-
-impl<T: Sized> NonNull<T> {
- /// Creates a new `NonNull` that is dangling, but well-aligned.
- ///
- /// This is useful for initializing types which lazily allocate, like
- /// `Vec::new` does.
- ///
- /// Note that the pointer value may potentially represent a valid pointer to
- /// a `T`, which means this must not be used as a "not yet initialized"
- /// sentinel value. Types that lazily allocate must track initialization by
- /// some other means.
- #[stable(feature = "nonnull", since = "1.25.0")]
- #[inline]
- pub const fn dangling() -> Self {
- unsafe {
- let ptr = mem::align_of::<T>() as *mut T;
- NonNull::new_unchecked(ptr)
- }
- }
-}
-
-impl<T: ?Sized> NonNull<T> {
- /// Creates a new `NonNull`.
- ///
- /// # Safety
- ///
- /// `ptr` must be non-null.
- #[stable(feature = "nonnull", since = "1.25.0")]
- #[inline]
- pub const unsafe fn new_unchecked(ptr: *mut T) -> Self {
- NonNull { pointer: ptr as _ }
- }
-
- /// Creates a new `NonNull` if `ptr` is non-null.
- #[stable(feature = "nonnull", since = "1.25.0")]
- #[inline]
- pub fn new(ptr: *mut T) -> Option<Self> {
- if !ptr.is_null() {
- Some(unsafe { Self::new_unchecked(ptr) })
- } else {
- None
- }
- }
-
- /// Acquires the underlying `*mut` pointer.
- #[stable(feature = "nonnull", since = "1.25.0")]
- #[inline]
- pub const fn as_ptr(self) -> *mut T {
- self.pointer as *mut T
- }
-
- /// Dereferences the content.
- ///
- /// The resulting lifetime is bound to self so this behaves "as if"
- /// it were actually an instance of T that is getting borrowed. If a longer
- /// (unbound) lifetime is needed, use `&*my_ptr.as_ptr()`.
- #[stable(feature = "nonnull", since = "1.25.0")]
- #[inline]
- pub unsafe fn as_ref(&self) -> &T {
- &*self.as_ptr()
- }
-
- /// Mutably dereferences the content.
- ///
- /// The resulting lifetime is bound to self so this behaves "as if"
- /// it were actually an instance of T that is getting borrowed. If a longer
- /// (unbound) lifetime is needed, use `&mut *my_ptr.as_ptr()`.
- #[stable(feature = "nonnull", since = "1.25.0")]
- #[inline]
- pub unsafe fn as_mut(&mut self) -> &mut T {
- &mut *self.as_ptr()
- }
-
- /// Cast to a pointer of another type
- #[stable(feature = "nonnull_cast", since = "1.27.0")]
- #[inline]
- pub const fn cast<U>(self) -> NonNull<U> {
- unsafe {
- NonNull::new_unchecked(self.as_ptr() as *mut U)
- }
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> Clone for NonNull<T> {
- fn clone(&self) -> Self {
- *self
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> Copy for NonNull<T> { }
-
-#[unstable(feature = "coerce_unsized", issue = "27732")]
-impl<T: ?Sized, U: ?Sized> CoerceUnsized<NonNull<U>> for NonNull<T> where T: Unsize<U> { }
-
-#[unstable(feature = "dispatch_from_dyn", issue = "0")]
-impl<T: ?Sized, U: ?Sized> DispatchFromDyn<NonNull<U>> for NonNull<T> where T: Unsize<U> { }
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> fmt::Debug for NonNull<T> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Pointer::fmt(&self.as_ptr(), f)
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> fmt::Pointer for NonNull<T> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Pointer::fmt(&self.as_ptr(), f)
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> Eq for NonNull<T> {}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> PartialEq for NonNull<T> {
- #[inline]
- fn eq(&self, other: &Self) -> bool {
- self.as_ptr() == other.as_ptr()
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> Ord for NonNull<T> {
- #[inline]
- fn cmp(&self, other: &Self) -> Ordering {
- self.as_ptr().cmp(&other.as_ptr())
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> PartialOrd for NonNull<T> {
- #[inline]
- fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
- self.as_ptr().partial_cmp(&other.as_ptr())
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> hash::Hash for NonNull<T> {
- #[inline]
- fn hash<H: hash::Hasher>(&self, state: &mut H) {
- self.as_ptr().hash(state)
- }
-}
-
-#[unstable(feature = "ptr_internals", issue = "0")]
-impl<T: ?Sized> From<Unique<T>> for NonNull<T> {
- #[inline]
- fn from(unique: Unique<T>) -> Self {
- unsafe { NonNull { pointer: unique.pointer } }
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> From<&mut T> for NonNull<T> {
- #[inline]
- fn from(reference: &mut T) -> Self {
- unsafe { NonNull { pointer: reference as *mut T } }
- }
-}
-
-#[stable(feature = "nonnull", since = "1.25.0")]
-impl<T: ?Sized> From<&T> for NonNull<T> {
- #[inline]
- fn from(reference: &T) -> Self {
- unsafe { NonNull { pointer: reference as *const T } }
- }
-}
--- /dev/null
+//! Manually manage memory through raw pointers.
+//!
+//! *[See also the pointer primitive types](../../std/primitive.pointer.html).*
+//!
+//! # Safety
+//!
+//! Many functions in this module take raw pointers as arguments and read from
+//! or write to them. For this to be safe, these pointers must be *valid*.
+//! Whether a pointer is valid depends on the operation it is used for
+//! (read or write), and the extent of the memory that is accessed (i.e.,
+//! how many bytes are read/written). Most functions use `*mut T` and `*const T`
+//! to access only a single value, in which case the documentation omits the size
+//! and implicitly assumes it to be `size_of::<T>()` bytes.
+//!
+//! The precise rules for validity are not determined yet. The guarantees that are
+//! provided at this point are very minimal:
+//!
+//! * A [null] pointer is *never* valid, not even for accesses of [size zero][zst].
+//! * All pointers (except for the null pointer) are valid for all operations of
+//! [size zero][zst].
+//! * All accesses performed by functions in this module are *non-atomic* in the sense
+//! of [atomic operations] used to synchronize between threads. This means it is
+//! undefined behavior to perform two concurrent accesses to the same location from different
+//! threads unless both accesses only read from memory. Notice that this explicitly
+//! includes [`read_volatile`] and [`write_volatile`]: Volatile accesses cannot
+//! be used for inter-thread synchronization.
+//! * The result of casting a reference to a pointer is valid for as long as the
+//!   underlying object is live and no reference (only raw pointers) is used to
+//!   access the same memory.
+//!
+//! These axioms, along with careful use of [`offset`] for pointer arithmetic,
+//! are enough to correctly implement many useful things in unsafe code. Stronger guarantees
+//! will be provided eventually, as the [aliasing] rules are being determined. For more
+//! information, see the [book] as well as the section in the reference devoted
+//! to [undefined behavior][ub].
+//!
+//! ## Alignment
+//!
+//! Valid raw pointers as defined above are not necessarily properly aligned (where
+//! "proper" alignment is defined by the pointee type, i.e., `*const T` must be
+//! aligned to `mem::align_of::<T>()`). However, most functions require their
+//! arguments to be properly aligned, and will explicitly state
+//! this requirement in their documentation. Notable exceptions to this are
+//! [`read_unaligned`] and [`write_unaligned`].
+//!
+//! When a function requires proper alignment, it does so even if the access
+//! has size 0, i.e., even if memory is not actually touched. Consider using
+//! [`NonNull::dangling`] in such cases.
+//!
+//! [aliasing]: ../../nomicon/aliasing.html
+//! [book]: ../../book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
+//! [ub]: ../../reference/behavior-considered-undefined.html
+//! [null]: ./fn.null.html
+//! [zst]: ../../nomicon/exotic-sizes.html#zero-sized-types-zsts
+//! [atomic operations]: ../../std/sync/atomic/index.html
+//! [`copy`]: ../../std/ptr/fn.copy.html
+//! [`offset`]: ../../std/primitive.pointer.html#method.offset
+//! [`read_unaligned`]: ./fn.read_unaligned.html
+//! [`write_unaligned`]: ./fn.write_unaligned.html
+//! [`read_volatile`]: ./fn.read_volatile.html
+//! [`write_volatile`]: ./fn.write_volatile.html
+//! [`NonNull::dangling`]: ./struct.NonNull.html#method.dangling
+
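The validity and alignment rules in the module docs above can be illustrated with a minimal sketch (not part of the patch itself): a pointer cast from a live reference is valid for a single read, while the null pointer is never valid, not even for zero-sized accesses.

```rust
use std::ptr;

fn main() {
    let x = 7_u32;
    // Casting a live reference yields a pointer valid for reads of one `u32`.
    let p = &x as *const u32;
    let v = unsafe { ptr::read(p) };
    assert_eq!(v, 7);

    // The null pointer is never valid for any access, regardless of size.
    let n: *const u32 = ptr::null();
    assert!(n.is_null());
}
```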
+#![stable(feature = "rust1", since = "1.0.0")]
+
+use crate::intrinsics;
+use crate::fmt;
+use crate::hash;
+use crate::mem::{self, MaybeUninit};
+use crate::cmp::Ordering::{self, Less, Equal, Greater};
+
+#[stable(feature = "rust1", since = "1.0.0")]
+pub use crate::intrinsics::copy_nonoverlapping;
+
+#[stable(feature = "rust1", since = "1.0.0")]
+pub use crate::intrinsics::copy;
+
+#[stable(feature = "rust1", since = "1.0.0")]
+pub use crate::intrinsics::write_bytes;
+
+mod non_null;
+#[stable(feature = "nonnull", since = "1.25.0")]
+pub use non_null::NonNull;
+
+mod unique;
+#[unstable(feature = "ptr_internals", issue = "0")]
+pub use unique::Unique;
+
+/// Executes the destructor (if any) of the pointed-to value.
+///
+/// This is semantically equivalent to calling [`ptr::read`] and discarding
+/// the result, but has the following advantages:
+///
+/// * It is *required* to use `drop_in_place` to drop unsized types like
+/// trait objects, because they can't be read out onto the stack and
+/// dropped normally.
+///
+/// * It is friendlier to the optimizer to do this over [`ptr::read`] when
+/// dropping manually allocated memory (e.g., when writing Box/Rc/Vec),
+/// as the compiler doesn't need to prove that it's sound to elide the
+/// copy.
+///
+/// [`ptr::read`]: ../ptr/fn.read.html
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `to_drop` must be [valid] for reads.
+///
+/// * `to_drop` must be properly aligned. See the example below for how to drop
+/// an unaligned pointer.
+///
+/// Additionally, if `T` is not [`Copy`], using the pointed-to value after
+/// calling `drop_in_place` can cause undefined behavior. Note that `*to_drop =
+/// foo` counts as a use because it will cause the value to be dropped
+/// again. [`write`] can be used to overwrite data without causing it to be
+/// dropped.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+/// [`Copy`]: ../marker/trait.Copy.html
+/// [`write`]: ../ptr/fn.write.html
+///
+/// # Examples
+///
+/// Manually remove the last item from a vector:
+///
+/// ```
+/// use std::ptr;
+/// use std::rc::Rc;
+///
+/// let last = Rc::new(1);
+/// let weak = Rc::downgrade(&last);
+///
+/// let mut v = vec![Rc::new(0), last];
+///
+/// unsafe {
+/// // Get a raw pointer to the last element in `v`.
+/// let ptr = &mut v[1] as *mut _;
+/// // Shorten `v` to prevent the last item from being dropped. We do that first,
+/// // to prevent issues if the `drop_in_place` below panics.
+/// v.set_len(1);
+/// // Without a call to `drop_in_place`, the last item would never be dropped,
+/// // and the memory it manages would be leaked.
+/// ptr::drop_in_place(ptr);
+/// }
+///
+/// assert_eq!(v, &[0.into()]);
+///
+/// // Ensure that the last item was dropped.
+/// assert!(weak.upgrade().is_none());
+/// ```
+///
+/// Unaligned values cannot be dropped in place, they must be copied to an aligned
+/// location first:
+/// ```
+/// use std::ptr;
+/// use std::mem::{self, MaybeUninit};
+///
+/// unsafe fn drop_after_copy<T>(to_drop: *mut T) {
+/// let mut copy: MaybeUninit<T> = MaybeUninit::uninit();
+/// ptr::copy(to_drop, copy.as_mut_ptr(), 1);
+/// drop(copy.assume_init());
+/// }
+///
+/// #[repr(packed, C)]
+/// struct Packed {
+/// _padding: u8,
+/// unaligned: Vec<i32>,
+/// }
+///
+/// let mut p = Packed { _padding: 0, unaligned: vec![42] };
+/// unsafe {
+/// drop_after_copy(&mut p.unaligned as *mut _);
+/// mem::forget(p);
+/// }
+/// ```
+///
+/// Notice that the compiler performs this copy automatically when dropping packed structs,
+/// i.e., you do not usually have to worry about such issues unless you call `drop_in_place`
+/// manually.
+#[stable(feature = "drop_in_place", since = "1.8.0")]
+#[inline(always)]
+pub unsafe fn drop_in_place<T: ?Sized>(to_drop: *mut T) {
+ real_drop_in_place(&mut *to_drop)
+}
+
+// The real `drop_in_place` -- the one that gets called implicitly when variables go
+// out of scope -- should have a safe reference and not a raw pointer as argument
+// type. When we drop a local variable, we access it with a pointer that behaves
+// like a safe reference; transmuting that to a raw pointer does not mean we can
+// actually access it with raw pointers.
+#[lang = "drop_in_place"]
+#[allow(unconditional_recursion)]
+unsafe fn real_drop_in_place<T: ?Sized>(to_drop: &mut T) {
+ // Code here does not matter - this is replaced by the
+ // real drop glue by the compiler.
+ real_drop_in_place(to_drop)
+}
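The first bullet in the `drop_in_place` docs — that unsized types *must* be dropped this way because they cannot be read out onto the stack — can be sketched as follows. This example is illustrative only (not part of the patch); it uses an `Rc`/`Weak` pair to observe that the destructor actually ran.

```rust
use std::alloc::{dealloc, Layout};
use std::ptr;
use std::rc::Rc;

fn main() {
    let payload = Rc::new(42);
    let weak = Rc::downgrade(&payload);

    // Erase the concrete type behind a trait object.
    let b: Box<dyn std::fmt::Debug> = Box::new(payload);
    let raw = Box::into_raw(b);
    // Compute the layout before dropping, while the value is still live.
    let layout = Layout::for_value(unsafe { &*raw });

    unsafe {
        // `dyn Debug` is unsized: it cannot be read out onto the stack with
        // `ptr::read`, so `drop_in_place` is the only way to run its destructor.
        ptr::drop_in_place(raw);
        // Free the allocation without running the destructor a second time.
        dealloc(raw as *mut u8, layout);
    }

    // The destructor ran: the `Rc` payload is gone.
    assert!(weak.upgrade().is_none());
}
```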
+
+/// Creates a null raw pointer.
+///
+/// # Examples
+///
+/// ```
+/// use std::ptr;
+///
+/// let p: *const i32 = ptr::null();
+/// assert!(p.is_null());
+/// ```
+#[inline]
+#[stable(feature = "rust1", since = "1.0.0")]
+#[rustc_promotable]
+pub const fn null<T>() -> *const T { 0 as *const T }
+
+/// Creates a null mutable raw pointer.
+///
+/// # Examples
+///
+/// ```
+/// use std::ptr;
+///
+/// let p: *mut i32 = ptr::null_mut();
+/// assert!(p.is_null());
+/// ```
+#[inline]
+#[stable(feature = "rust1", since = "1.0.0")]
+#[rustc_promotable]
+pub const fn null_mut<T>() -> *mut T { 0 as *mut T }
+
+/// Swaps the values at two mutable locations of the same type, without
+/// deinitializing either.
+///
+/// Aside from the following two exceptions, this function is semantically
+/// equivalent to [`mem::swap`]:
+///
+/// * It operates on raw pointers instead of references. When references are
+/// available, [`mem::swap`] should be preferred.
+///
+/// * The two pointed-to values may overlap. If the values do overlap, then the
+/// overlapping region of memory from `x` will be used. This is demonstrated
+/// in the second example below.
+///
+/// [`mem::swap`]: ../mem/fn.swap.html
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * Both `x` and `y` must be [valid] for reads and writes.
+///
+/// * Both `x` and `y` must be properly aligned.
+///
+/// Note that even if `T` has size `0`, the pointers must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+///
+/// # Examples
+///
+/// Swapping two non-overlapping regions:
+///
+/// ```
+/// use std::ptr;
+///
+/// let mut array = [0, 1, 2, 3];
+///
+/// let x = array[0..].as_mut_ptr() as *mut [u32; 2]; // this is `array[0..2]`
+/// let y = array[2..].as_mut_ptr() as *mut [u32; 2]; // this is `array[2..4]`
+///
+/// unsafe {
+/// ptr::swap(x, y);
+/// assert_eq!([2, 3, 0, 1], array);
+/// }
+/// ```
+///
+/// Swapping two overlapping regions:
+///
+/// ```
+/// use std::ptr;
+///
+/// let mut array = [0, 1, 2, 3];
+///
+/// let x = array[0..].as_mut_ptr() as *mut [u32; 3]; // this is `array[0..3]`
+/// let y = array[1..].as_mut_ptr() as *mut [u32; 3]; // this is `array[1..4]`
+///
+/// unsafe {
+/// ptr::swap(x, y);
+/// // The indices `1..3` of the slice overlap between `x` and `y`.
+///     // Reasonable results would be for them to be `[2, 3]`, so that indices `0..3` are
+/// // `[1, 2, 3]` (matching `y` before the `swap`); or for them to be `[0, 1]`
+/// // so that indices `1..4` are `[0, 1, 2]` (matching `x` before the `swap`).
+/// // This implementation is defined to make the latter choice.
+/// assert_eq!([1, 0, 1, 2], array);
+/// }
+/// ```
+#[inline]
+#[stable(feature = "rust1", since = "1.0.0")]
+pub unsafe fn swap<T>(x: *mut T, y: *mut T) {
+ // Give ourselves some scratch space to work with.
+ // We do not have to worry about drops: `MaybeUninit` does nothing when dropped.
+ let mut tmp = MaybeUninit::<T>::uninit();
+
+ // Perform the swap
+ copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
+ copy(y, x, 1); // `x` and `y` may overlap
+ copy_nonoverlapping(tmp.as_ptr(), y, 1);
+}
+
+/// Swaps `count * size_of::<T>()` bytes between the two regions of memory
+/// beginning at `x` and `y`. The two regions must *not* overlap.
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * Both `x` and `y` must be [valid] for reads and writes of `count *
+/// size_of::<T>()` bytes.
+///
+/// * Both `x` and `y` must be properly aligned.
+///
+/// * The region of memory beginning at `x` with a size of `count *
+/// size_of::<T>()` bytes must *not* overlap with the region of memory
+/// beginning at `y` with the same size.
+///
+/// Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`,
+/// the pointers must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+///
+/// # Examples
+///
+/// Basic usage:
+///
+/// ```
+/// use std::ptr;
+///
+/// let mut x = [1, 2, 3, 4];
+/// let mut y = [7, 8, 9];
+///
+/// unsafe {
+/// ptr::swap_nonoverlapping(x.as_mut_ptr(), y.as_mut_ptr(), 2);
+/// }
+///
+/// assert_eq!(x, [7, 8, 3, 4]);
+/// assert_eq!(y, [1, 2, 9]);
+/// ```
+#[inline]
+#[stable(feature = "swap_nonoverlapping", since = "1.27.0")]
+pub unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
+ let x = x as *mut u8;
+ let y = y as *mut u8;
+ let len = mem::size_of::<T>() * count;
+ swap_nonoverlapping_bytes(x, y, len)
+}
+
+#[inline]
+pub(crate) unsafe fn swap_nonoverlapping_one<T>(x: *mut T, y: *mut T) {
+ // For types smaller than the block optimization below,
+ // just swap directly to avoid pessimizing codegen.
+ if mem::size_of::<T>() < 32 {
+ let z = read(x);
+ copy_nonoverlapping(y, x, 1);
+ write(y, z);
+ } else {
+ swap_nonoverlapping(x, y, 1);
+ }
+}
+
+#[inline]
+unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
+ // The approach here is to utilize SIMD to swap x & y efficiently. Testing reveals
+ // that swapping either 32 bytes or 64 bytes at a time is most efficient for Intel
+ // Haswell E processors. LLVM is more able to optimize if we give a struct a
+ // #[repr(simd)], even if we don't actually use this struct directly.
+ //
+ // FIXME repr(simd) broken on emscripten and redox
+ #[cfg_attr(not(any(target_os = "emscripten", target_os = "redox")), repr(simd))]
+ struct Block(u64, u64, u64, u64);
+ struct UnalignedBlock(u64, u64, u64, u64);
+
+ let block_size = mem::size_of::<Block>();
+
+ // Loop through x & y, copying them `Block` at a time
+ // The optimizer should unroll the loop fully for most types
+ // N.B. We can't use a for loop as the `range` impl calls `mem::swap` recursively
+ let mut i = 0;
+ while i + block_size <= len {
+ // Create some uninitialized memory as scratch space
+ // Declaring `t` here avoids aligning the stack when this loop is unused
+ let mut t = mem::MaybeUninit::<Block>::uninit();
+ let t = t.as_mut_ptr() as *mut u8;
+ let x = x.add(i);
+ let y = y.add(i);
+
+ // Swap a block of bytes of x & y, using t as a temporary buffer
+ // This should be optimized into efficient SIMD operations where available
+ copy_nonoverlapping(x, t, block_size);
+ copy_nonoverlapping(y, x, block_size);
+ copy_nonoverlapping(t, y, block_size);
+ i += block_size;
+ }
+
+ if i < len {
+ // Swap any remaining bytes
+ let mut t = mem::MaybeUninit::<UnalignedBlock>::uninit();
+ let rem = len - i;
+
+ let t = t.as_mut_ptr() as *mut u8;
+ let x = x.add(i);
+ let y = y.add(i);
+
+ copy_nonoverlapping(x, t, rem);
+ copy_nonoverlapping(y, x, rem);
+ copy_nonoverlapping(t, y, rem);
+ }
+}
+
+/// Moves `src` into the pointed-to `dst`, returning the previous `dst` value.
+///
+/// Neither value is dropped.
+///
+/// This function is semantically equivalent to [`mem::replace`] except that it
+/// operates on raw pointers instead of references. When references are
+/// available, [`mem::replace`] should be preferred.
+///
+/// [`mem::replace`]: ../mem/fn.replace.html
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `dst` must be [valid] for writes.
+///
+/// * `dst` must be properly aligned.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+///
+/// # Examples
+///
+/// ```
+/// use std::ptr;
+///
+/// let mut rust = vec!['b', 'u', 's', 't'];
+///
+/// // `mem::replace` would have the same effect without requiring the unsafe
+/// // block.
+/// let b = unsafe {
+/// ptr::replace(&mut rust[0], 'r')
+/// };
+///
+/// assert_eq!(b, 'b');
+/// assert_eq!(rust, &['r', 'u', 's', 't']);
+/// ```
+#[inline]
+#[stable(feature = "rust1", since = "1.0.0")]
+pub unsafe fn replace<T>(dst: *mut T, mut src: T) -> T {
+ mem::swap(&mut *dst, &mut src); // cannot overlap
+ src
+}
+
+/// Reads the value from `src` without moving it. This leaves the
+/// memory in `src` unchanged.
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `src` must be [valid] for reads.
+///
+/// * `src` must be properly aligned. Use [`read_unaligned`] if this is not the
+/// case.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
+///
+/// # Examples
+///
+/// Basic usage:
+///
+/// ```
+/// let x = 12;
+/// let y = &x as *const i32;
+///
+/// unsafe {
+/// assert_eq!(std::ptr::read(y), 12);
+/// }
+/// ```
+///
+/// Manually implement [`mem::swap`]:
+///
+/// ```
+/// use std::ptr;
+///
+/// fn swap<T>(a: &mut T, b: &mut T) {
+/// unsafe {
+/// // Create a bitwise copy of the value at `a` in `tmp`.
+/// let tmp = ptr::read(a);
+///
+/// // Exiting at this point (either by explicitly returning or by
+/// // calling a function which panics) would cause the value in `tmp` to
+/// // be dropped while the same value is still referenced by `a`. This
+/// // could trigger undefined behavior if `T` is not `Copy`.
+///
+/// // Create a bitwise copy of the value at `b` in `a`.
+/// // This is safe because mutable references cannot alias.
+/// ptr::copy_nonoverlapping(b, a, 1);
+///
+/// // As above, exiting here could trigger undefined behavior because
+/// // the same value is referenced by `a` and `b`.
+///
+/// // Move `tmp` into `b`.
+/// ptr::write(b, tmp);
+///
+/// // `tmp` has been moved (`write` takes ownership of its second argument),
+/// // so nothing is dropped implicitly here.
+/// }
+/// }
+///
+/// let mut foo = "foo".to_owned();
+/// let mut bar = "bar".to_owned();
+///
+/// swap(&mut foo, &mut bar);
+///
+/// assert_eq!(foo, "bar");
+/// assert_eq!(bar, "foo");
+/// ```
+///
+/// ## Ownership of the Returned Value
+///
+/// `read` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`].
+/// If `T` is not [`Copy`], using both the returned value and the value at
+/// `*src` can violate memory safety. Note that assigning to `*src` counts as a
+/// use because it will attempt to drop the value at `*src`.
+///
+/// [`write`] can be used to overwrite data without causing it to be dropped.
+///
+/// ```
+/// use std::ptr;
+///
+/// let mut s = String::from("foo");
+/// unsafe {
+/// // `s2` now points to the same underlying memory as `s`.
+/// let mut s2: String = ptr::read(&s);
+///
+/// assert_eq!(s2, "foo");
+///
+/// // Assigning to `s2` causes its original value to be dropped. Beyond
+/// // this point, `s` must no longer be used, as the underlying memory has
+/// // been freed.
+/// s2 = String::default();
+/// assert_eq!(s2, "");
+///
+/// // Assigning to `s` would cause the old value to be dropped again,
+/// // resulting in undefined behavior.
+/// // s = String::from("bar"); // ERROR
+///
+/// // `ptr::write` can be used to overwrite a value without dropping it.
+/// ptr::write(&mut s, String::from("bar"));
+/// }
+///
+/// assert_eq!(s, "bar");
+/// ```
+///
+/// [`mem::swap`]: ../mem/fn.swap.html
+/// [valid]: ../ptr/index.html#safety
+/// [`Copy`]: ../marker/trait.Copy.html
+/// [`read_unaligned`]: ./fn.read_unaligned.html
+/// [`write`]: ./fn.write.html
+#[inline]
+#[stable(feature = "rust1", since = "1.0.0")]
+pub unsafe fn read<T>(src: *const T) -> T {
+    let mut tmp = MaybeUninit::<T>::uninit();
+    copy_nonoverlapping(src, tmp.as_mut_ptr(), 1);
+    tmp.assume_init()
+}
+
+/// Reads the value from `src` without moving it. This leaves the
+/// memory in `src` unchanged.
+///
+/// Unlike [`read`], `read_unaligned` works with unaligned pointers.
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `src` must be [valid] for reads.
+///
+/// * `src` must point to a properly initialized value of type `T`.
+///
+/// Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of
+/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
+/// value and the value at `*src` can [violate memory safety][read-ownership].
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL.
+///
+/// [`Copy`]: ../marker/trait.Copy.html
+/// [`read`]: ./fn.read.html
+/// [`write_unaligned`]: ./fn.write_unaligned.html
+/// [read-ownership]: ./fn.read.html#ownership-of-the-returned-value
+/// [valid]: ../ptr/index.html#safety
+///
+/// # Examples
+///
+/// Access members of a packed struct by reference:
+///
+/// ```
+/// use std::ptr;
+///
+/// #[repr(packed, C)]
+/// struct Packed {
+/// _padding: u8,
+/// unaligned: u32,
+/// }
+///
+/// let x = Packed {
+/// _padding: 0x00,
+/// unaligned: 0x01020304,
+/// };
+///
+/// let v = unsafe {
+/// // Take the address of a 32-bit integer which is not aligned.
+/// // This must be done as a raw pointer; unaligned references are invalid.
+/// let unaligned = &x.unaligned as *const u32;
+///
+/// // Dereferencing normally will emit an aligned load instruction,
+/// // causing undefined behavior.
+/// // let v = *unaligned; // ERROR
+///
+/// // Instead, use `read_unaligned` to read improperly aligned values.
+/// let v = ptr::read_unaligned(unaligned);
+///
+/// v
+/// };
+///
+/// // Accessing unaligned values directly is safe.
+/// assert!(x.unaligned == v);
+/// ```
+#[inline]
+#[stable(feature = "ptr_unaligned", since = "1.17.0")]
+pub unsafe fn read_unaligned<T>(src: *const T) -> T {
+ let mut tmp = MaybeUninit::<T>::uninit();
+    copy_nonoverlapping(src as *const u8,
+                        tmp.as_mut_ptr() as *mut u8,
+                        mem::size_of::<T>());
+    tmp.assume_init()
+}
+
+/// Overwrites a memory location with the given value without reading or
+/// dropping the old value.
+///
+/// `write` does not drop the contents of `dst`. This is safe, but it could leak
+/// allocations or resources, so care should be taken not to overwrite an object
+/// that should be dropped.
+///
+/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
+/// location pointed to by `dst`.
+///
+/// This is appropriate for initializing uninitialized memory, or overwriting
+/// memory that has previously been [`read`] from.
+///
+/// [`read`]: ./fn.read.html
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `dst` must be [valid] for writes.
+///
+/// * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the
+/// case.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+/// [`write_unaligned`]: ./fn.write_unaligned.html
+///
+/// # Examples
+///
+/// Basic usage:
+///
+/// ```
+/// let mut x = 0;
+/// let y = &mut x as *mut i32;
+/// let z = 12;
+///
+/// unsafe {
+/// std::ptr::write(y, z);
+/// assert_eq!(std::ptr::read(y), 12);
+/// }
+/// ```
+///
+/// Manually implement [`mem::swap`]:
+///
+/// ```
+/// use std::ptr;
+///
+/// fn swap<T>(a: &mut T, b: &mut T) {
+/// unsafe {
+/// // Create a bitwise copy of the value at `a` in `tmp`.
+/// let tmp = ptr::read(a);
+///
+/// // Exiting at this point (either by explicitly returning or by
+/// // calling a function which panics) would cause the value in `tmp` to
+/// // be dropped while the same value is still referenced by `a`. This
+/// // could trigger undefined behavior if `T` is not `Copy`.
+///
+/// // Create a bitwise copy of the value at `b` in `a`.
+/// // This is safe because mutable references cannot alias.
+/// ptr::copy_nonoverlapping(b, a, 1);
+///
+/// // As above, exiting here could trigger undefined behavior because
+/// // the same value is referenced by `a` and `b`.
+///
+/// // Move `tmp` into `b`.
+/// ptr::write(b, tmp);
+///
+/// // `tmp` has been moved (`write` takes ownership of its second argument),
+/// // so nothing is dropped implicitly here.
+/// }
+/// }
+///
+/// let mut foo = "foo".to_owned();
+/// let mut bar = "bar".to_owned();
+///
+/// swap(&mut foo, &mut bar);
+///
+/// assert_eq!(foo, "bar");
+/// assert_eq!(bar, "foo");
+/// ```
+///
+/// [`mem::swap`]: ../mem/fn.swap.html
+#[inline]
+#[stable(feature = "rust1", since = "1.0.0")]
+pub unsafe fn write<T>(dst: *mut T, src: T) {
+    intrinsics::move_val_init(&mut *dst, src)
+}
+
+/// Overwrites a memory location with the given value without reading or
+/// dropping the old value.
+///
+/// Unlike [`write`], the pointer may be unaligned.
+///
+/// `write_unaligned` does not drop the contents of `dst`. This is safe, but it
+/// could leak allocations or resources, so care should be taken not to overwrite
+/// an object that should be dropped.
+///
+/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
+/// location pointed to by `dst`.
+///
+/// This is appropriate for initializing uninitialized memory, or overwriting
+/// memory that has previously been read with [`read_unaligned`].
+///
+/// [`write`]: ./fn.write.html
+/// [`read_unaligned`]: ./fn.read_unaligned.html
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `dst` must be [valid] for writes.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL.
+///
+/// [valid]: ../ptr/index.html#safety
+///
+/// # Examples
+///
+/// Access fields in a packed struct:
+///
+/// ```
+/// use std::{mem, ptr};
+///
+/// #[repr(packed, C)]
+/// #[derive(Default)]
+/// struct Packed {
+/// _padding: u8,
+/// unaligned: u32,
+/// }
+///
+/// let v = 0x01020304;
+/// let mut x: Packed = unsafe { mem::zeroed() };
+///
+/// unsafe {
+/// // Take a reference to a 32-bit integer which is not aligned.
+/// let unaligned = &mut x.unaligned as *mut u32;
+///
+/// // Dereferencing normally will emit an aligned store instruction,
+/// // causing undefined behavior because the pointer is not aligned.
+/// // *unaligned = v; // ERROR
+///
+/// // Instead, use `write_unaligned` to write improperly aligned values.
+/// ptr::write_unaligned(unaligned, v);
+/// }
+///
+/// // Accessing unaligned values directly is safe.
+/// assert!(x.unaligned == v);
+/// ```
+#[inline]
+#[stable(feature = "ptr_unaligned", since = "1.17.0")]
+pub unsafe fn write_unaligned<T>(dst: *mut T, src: T) {
+    copy_nonoverlapping(&src as *const T as *const u8,
+                        dst as *mut u8,
+                        mem::size_of::<T>());
+    mem::forget(src);
+}
+
+/// Performs a volatile read of the value from `src` without moving it. This
+/// leaves the memory in `src` unchanged.
+///
+/// Volatile operations are intended to act on I/O memory, and are guaranteed
+/// to not be elided or reordered by the compiler across other volatile
+/// operations.
+///
+/// [`write_volatile`]: ./fn.write_volatile.html
+///
+/// # Notes
+///
+/// Rust does not currently have a rigorously and formally defined memory model,
+/// so the precise semantics of what "volatile" means here are subject to change
+/// over time. That being said, the semantics will almost always end up pretty
+/// similar to [C11's definition of volatile][c11].
+///
+/// The compiler shouldn't change the relative order or number of volatile
+/// memory operations. However, volatile memory operations on zero-sized types
+/// (e.g., if a zero-sized type is passed to `read_volatile`) are no-ops
+/// and may be ignored.
+///
+/// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `src` must be [valid] for reads.
+///
+/// * `src` must be properly aligned.
+///
+/// * `src` must point to a properly initialized value of type `T`.
+///
+/// Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of
+/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
+/// value and the value at `*src` can [violate memory safety][read-ownership].
+/// However, storing non-[`Copy`] types in volatile memory is almost certainly
+/// incorrect.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+/// [`Copy`]: ../marker/trait.Copy.html
+/// [`read`]: ./fn.read.html
+/// [read-ownership]: ./fn.read.html#ownership-of-the-returned-value
+///
+/// Just like in C, whether an operation is volatile has no bearing whatsoever
+/// on questions involving concurrent access from multiple threads. Volatile
+/// accesses behave exactly like non-atomic accesses in that regard. In particular,
+/// a race between a `read_volatile` and any write operation to the same location
+/// is undefined behavior.
+///
+/// # Examples
+///
+/// Basic usage:
+///
+/// ```
+/// let x = 12;
+/// let y = &x as *const i32;
+///
+/// unsafe {
+/// assert_eq!(std::ptr::read_volatile(y), 12);
+/// }
+/// ```
+#[inline]
+#[stable(feature = "volatile", since = "1.9.0")]
+pub unsafe fn read_volatile<T>(src: *const T) -> T {
+    intrinsics::volatile_load(src)
+}
+
+/// Performs a volatile write of a memory location with the given value without
+/// reading or dropping the old value.
+///
+/// Volatile operations are intended to act on I/O memory, and are guaranteed
+/// to not be elided or reordered by the compiler across other volatile
+/// operations.
+///
+/// `write_volatile` does not drop the contents of `dst`. This is safe, but it
+/// could leak allocations or resources, so care should be taken not to overwrite
+/// an object that should be dropped.
+///
+/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
+/// location pointed to by `dst`.
+///
+/// [`read_volatile`]: ./fn.read_volatile.html
+///
+/// # Notes
+///
+/// Rust does not currently have a rigorously and formally defined memory model,
+/// so the precise semantics of what "volatile" means here are subject to change
+/// over time. That being said, the semantics will almost always end up pretty
+/// similar to [C11's definition of volatile][c11].
+///
+/// The compiler shouldn't change the relative order or number of volatile
+/// memory operations. However, volatile memory operations on zero-sized types
+/// (e.g., if a zero-sized type is passed to `write_volatile`) are no-ops
+/// and may be ignored.
+///
+/// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
+///
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `dst` must be [valid] for writes.
+///
+/// * `dst` must be properly aligned.
+///
+/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
+///
+/// [valid]: ../ptr/index.html#safety
+///
+/// Just like in C, whether an operation is volatile has no bearing whatsoever
+/// on questions involving concurrent access from multiple threads. Volatile
+/// accesses behave exactly like non-atomic accesses in that regard. In particular,
+/// a race between a `write_volatile` and any other operation (reading or writing)
+/// on the same location is undefined behavior.
+///
+/// # Examples
+///
+/// Basic usage:
+///
+/// ```
+/// let mut x = 0;
+/// let y = &mut x as *mut i32;
+/// let z = 12;
+///
+/// unsafe {
+/// std::ptr::write_volatile(y, z);
+/// assert_eq!(std::ptr::read_volatile(y), 12);
+/// }
+/// ```
+#[inline]
+#[stable(feature = "volatile", since = "1.9.0")]
+pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
+    intrinsics::volatile_store(dst, src);
+}
+
+#[lang = "const_ptr"]
+impl<T: ?Sized> *const T {
+ /// Returns `true` if the pointer is null.
+ ///
+ /// Note that unsized types have many possible null pointers, as only the
+ /// raw data pointer is considered, not their length, vtable, etc.
+ /// Therefore, two pointers that are null may still not compare equal to
+ /// each other.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let s: &str = "Follow the rabbit";
+ /// let ptr: *const u8 = s.as_ptr();
+ /// assert!(!ptr.is_null());
+ /// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[inline]
+ pub fn is_null(self) -> bool {
+        // Compare via a cast to a thin pointer, so fat pointers only
+        // consider their "data" part for null-ness.
+ (self as *const u8) == null()
+ }
+
+    /// Casts to a pointer of another type.
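+    ///
+    /// # Examples
+    ///
+    /// A minimal sketch; the values below are illustrative, not taken from
+    /// any existing example:
+    ///
+    /// ```
+    /// #![feature(ptr_cast)]
+    ///
+    /// let val: u32 = 1;
+    /// let ptr: *const u32 = &val;
+    ///
+    /// // Reinterpret the `u32` pointer as a byte pointer. The cast changes
+    /// // only the pointee type; the address is unchanged.
+    /// let byte_ptr: *const u8 = ptr.cast::<u8>();
+    /// assert_eq!(byte_ptr as usize, ptr as usize);
+    /// ```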
+ #[unstable(feature = "ptr_cast", issue = "60602")]
+ #[inline]
+ pub const fn cast<U>(self) -> *const U {
+ self as _
+ }
+
+ /// Returns `None` if the pointer is null, or else returns a reference to
+ /// the value wrapped in `Some`.
+ ///
+ /// # Safety
+ ///
+ /// While this method and its mutable counterpart are useful for
+ /// null-safety, it is important to note that this is still an unsafe
+ /// operation because the returned value could be pointing to invalid
+ /// memory.
+ ///
+ /// Additionally, the lifetime `'a` returned is arbitrarily chosen and does
+ /// not necessarily reflect the actual lifetime of the data.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let ptr: *const u8 = &10u8 as *const u8;
+ ///
+ /// unsafe {
+ /// if let Some(val_back) = ptr.as_ref() {
+ /// println!("We got back the value: {}!", val_back);
+ /// }
+ /// }
+ /// ```
+ ///
+ /// # Null-unchecked version
+ ///
+ /// If you are sure the pointer can never be null and are looking for some kind of
+ /// `as_ref_unchecked` that returns the `&T` instead of `Option<&T>`, know that you can
+ /// dereference the pointer directly.
+ ///
+ /// ```
+ /// let ptr: *const u8 = &10u8 as *const u8;
+ ///
+ /// unsafe {
+ /// let val_back = &*ptr;
+ /// println!("We got back the value: {}!", val_back);
+ /// }
+ /// ```
+ #[stable(feature = "ptr_as_ref", since = "1.9.0")]
+ #[inline]
+ pub unsafe fn as_ref<'a>(self) -> Option<&'a T> {
+ if self.is_null() {
+ None
+ } else {
+ Some(&*self)
+ }
+ }
+
+ /// Calculates the offset from a pointer.
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and resulting pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The computed offset, **in bytes**, cannot overflow an `isize`.
+ ///
+ /// * The offset being in bounds cannot rely on "wrapping around" the address
+    /// space. That is, the infinite-precision sum, **in bytes**, must fit in a `usize`.
+ ///
+    /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `vec.as_ptr().add(vec.len())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using `wrapping_offset` instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let s: &str = "123";
+ /// let ptr: *const u8 = s.as_ptr();
+ ///
+ /// unsafe {
+ /// println!("{}", *ptr.offset(1) as char);
+ /// println!("{}", *ptr.offset(2) as char);
+ /// }
+ /// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[inline]
+ pub unsafe fn offset(self, count: isize) -> *const T where T: Sized {
+ intrinsics::offset(self, count)
+ }
+
+ /// Calculates the offset from a pointer using wrapping arithmetic.
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// The resulting pointer does not need to be in bounds, but it is
+ /// potentially hazardous to dereference (which requires `unsafe`).
+ /// In particular, the resulting pointer may *not* be used to access a
+ /// different allocated object than the one `self` points to. In other
+ /// words, `x.wrapping_offset(y.wrapping_offset_from(x))` is
+ /// *not* the same as `y`, and dereferencing it is undefined behavior
+ /// unless `x` and `y` point into the same allocated object.
+ ///
+ /// Always use `.offset(count)` instead when possible, because `offset`
+ /// allows the compiler to optimize better. If you need to cross object
+ /// boundaries, cast the pointer to an integer and do the arithmetic there.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// // Iterate using a raw pointer in increments of two elements
+ /// let data = [1u8, 2, 3, 4, 5];
+ /// let mut ptr: *const u8 = data.as_ptr();
+ /// let step = 2;
+ /// let end_rounded_up = ptr.wrapping_offset(6);
+ ///
+ /// // This loop prints "1, 3, 5, "
+ /// while ptr != end_rounded_up {
+ /// unsafe {
+ /// print!("{}, ", *ptr);
+ /// }
+ /// ptr = ptr.wrapping_offset(step);
+ /// }
+ /// ```
+ #[stable(feature = "ptr_wrapping_offset", since = "1.16.0")]
+ #[inline]
+ pub fn wrapping_offset(self, count: isize) -> *const T where T: Sized {
+ unsafe {
+ intrinsics::arith_offset(self, count)
+ }
+ }
+
+ /// Calculates the distance between two pointers. The returned value is in
+ /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
+ ///
+ /// This function is the inverse of [`offset`].
+ ///
+ /// [`offset`]: #method.offset
+ /// [`wrapping_offset_from`]: #method.wrapping_offset_from
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and other pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The distance between the pointers, **in bytes**, cannot overflow an `isize`.
+ ///
+ /// * The distance between the pointers, in bytes, must be an exact multiple
+ /// of the size of `T`.
+ ///
+ /// * The distance being in bounds cannot rely on "wrapping around" the address space.
+ ///
+ /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `ptr_into_vec.offset_from(vec.as_ptr())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using [`wrapping_offset_from`] instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Panics
+ ///
+ /// This function panics if `T` is a Zero-Sized Type ("ZST").
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// #![feature(ptr_offset_from)]
+ ///
+ /// let a = [0; 5];
+ /// let ptr1: *const i32 = &a[1];
+ /// let ptr2: *const i32 = &a[3];
+ /// unsafe {
+ /// assert_eq!(ptr2.offset_from(ptr1), 2);
+ /// assert_eq!(ptr1.offset_from(ptr2), -2);
+ /// assert_eq!(ptr1.offset(2), ptr2);
+ /// assert_eq!(ptr2.offset(-2), ptr1);
+ /// }
+ /// ```
+ #[unstable(feature = "ptr_offset_from", issue = "41079")]
+ #[inline]
+ pub unsafe fn offset_from(self, origin: *const T) -> isize where T: Sized {
+ let pointee_size = mem::size_of::<T>();
+ assert!(0 < pointee_size && pointee_size <= isize::max_value() as usize);
+
+ // This is the same sequence that Clang emits for pointer subtraction.
+ // It can be neither `nsw` nor `nuw` because the input is treated as
+ // unsigned but then the output is treated as signed, so neither works.
+ let d = isize::wrapping_sub(self as _, origin as _);
+ intrinsics::exact_div(d, pointee_size as _)
+ }
+
+ /// Calculates the distance between two pointers. The returned value is in
+ /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
+ ///
+    /// If the address difference between the two pointers is not a multiple of
+    /// `mem::size_of::<T>()` then the result of the division is rounded towards
+    /// zero.
+    ///
+    /// Though this method is safe for any two pointers, note that its result
+    /// will be mostly useless if the two pointers don't point into the same
+    /// allocated object, for example if they point to two different local
+    /// variables.
+ ///
+ /// # Panics
+ ///
+ /// This function panics if `T` is a zero-sized type.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// #![feature(ptr_wrapping_offset_from)]
+ ///
+ /// let a = [0; 5];
+ /// let ptr1: *const i32 = &a[1];
+ /// let ptr2: *const i32 = &a[3];
+ /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
+ /// assert_eq!(ptr1.wrapping_offset_from(ptr2), -2);
+ /// assert_eq!(ptr1.wrapping_offset(2), ptr2);
+ /// assert_eq!(ptr2.wrapping_offset(-2), ptr1);
+ ///
+ /// let ptr1: *const i32 = 3 as _;
+ /// let ptr2: *const i32 = 13 as _;
+ /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
+ /// ```
+ #[unstable(feature = "ptr_wrapping_offset_from", issue = "41079")]
+ #[inline]
+ pub fn wrapping_offset_from(self, origin: *const T) -> isize where T: Sized {
+ let pointee_size = mem::size_of::<T>();
+ assert!(0 < pointee_size && pointee_size <= isize::max_value() as usize);
+
+ let d = isize::wrapping_sub(self as _, origin as _);
+ d.wrapping_div(pointee_size as _)
+ }
+
+ /// Calculates the offset from a pointer (convenience for `.offset(count as isize)`).
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and resulting pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The computed offset, **in bytes**, cannot overflow an `isize`.
+ ///
+ /// * The offset being in bounds cannot rely on "wrapping around" the address
+ /// space. That is, the infinite-precision sum must fit in a `usize`.
+ ///
+    /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `vec.as_ptr().add(vec.len())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using `wrapping_offset` instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let s: &str = "123";
+ /// let ptr: *const u8 = s.as_ptr();
+ ///
+ /// unsafe {
+ /// println!("{}", *ptr.add(1) as char);
+ /// println!("{}", *ptr.add(2) as char);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn add(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.offset(count as isize)
+ }
+
+ /// Calculates the offset from a pointer (convenience for
+ /// `.offset((count as isize).wrapping_neg())`).
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and resulting pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The computed offset cannot exceed `isize::MAX` **bytes**.
+ ///
+ /// * The offset being in bounds cannot rely on "wrapping around" the address
+    /// space. That is, the infinite-precision sum must fit in a `usize`.
+ ///
+    /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `vec.as_ptr().add(vec.len()).sub(vec.len())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using `wrapping_offset` instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let s: &str = "123";
+ ///
+ /// unsafe {
+ /// let end: *const u8 = s.as_ptr().add(3);
+ /// println!("{}", *end.sub(1) as char);
+ /// println!("{}", *end.sub(2) as char);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn sub(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.offset((count as isize).wrapping_neg())
+ }
+
+ /// Calculates the offset from a pointer using wrapping arithmetic.
+ /// (convenience for `.wrapping_offset(count as isize)`)
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// The resulting pointer does not need to be in bounds, but it is
+ /// potentially hazardous to dereference (which requires `unsafe`).
+ ///
+ /// Always use `.add(count)` instead when possible, because `add`
+ /// allows the compiler to optimize better.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// // Iterate using a raw pointer in increments of two elements
+ /// let data = [1u8, 2, 3, 4, 5];
+ /// let mut ptr: *const u8 = data.as_ptr();
+ /// let step = 2;
+ /// let end_rounded_up = ptr.wrapping_add(6);
+ ///
+ /// // This loop prints "1, 3, 5, "
+ /// while ptr != end_rounded_up {
+ /// unsafe {
+ /// print!("{}, ", *ptr);
+ /// }
+ /// ptr = ptr.wrapping_add(step);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub fn wrapping_add(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.wrapping_offset(count as isize)
+ }
+
+ /// Calculates the offset from a pointer using wrapping arithmetic.
+    /// (convenience for `.wrapping_offset((count as isize).wrapping_neg())`)
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// The resulting pointer does not need to be in bounds, but it is
+ /// potentially hazardous to dereference (which requires `unsafe`).
+ ///
+ /// Always use `.sub(count)` instead when possible, because `sub`
+ /// allows the compiler to optimize better.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// // Iterate using a raw pointer in increments of two elements (backwards)
+ /// let data = [1u8, 2, 3, 4, 5];
+ /// let mut ptr: *const u8 = data.as_ptr();
+ /// let start_rounded_down = ptr.wrapping_sub(2);
+ /// ptr = ptr.wrapping_add(4);
+ /// let step = 2;
+ /// // This loop prints "5, 3, 1, "
+ /// while ptr != start_rounded_down {
+ /// unsafe {
+ /// print!("{}, ", *ptr);
+ /// }
+ /// ptr = ptr.wrapping_sub(step);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub fn wrapping_sub(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.wrapping_offset((count as isize).wrapping_neg())
+ }
+
+ /// Reads the value from `self` without moving it. This leaves the
+ /// memory in `self` unchanged.
+ ///
+ /// See [`ptr::read`] for safety concerns and examples.
+ ///
+ /// [`ptr::read`]: ./ptr/fn.read.html
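+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch (the value and pointer here are arbitrary):
+ ///
+ /// ```
+ /// let x = 12u32;
+ /// let ptr = &x as *const u32;
+ /// // Copies the value out of `x` without moving it.
+ /// unsafe { assert_eq!(ptr.read(), 12); }
+ /// ```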
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn read(self) -> T
+ where T: Sized,
+ {
+ read(self)
+ }
+
+ /// Performs a volatile read of the value from `self` without moving it. This
+ /// leaves the memory in `self` unchanged.
+ ///
+ /// Volatile operations are intended to act on I/O memory, and are guaranteed
+ /// to not be elided or reordered by the compiler across other volatile
+ /// operations.
+ ///
+ /// See [`ptr::read_volatile`] for safety concerns and examples.
+ ///
+ /// [`ptr::read_volatile`]: ./ptr/fn.read_volatile.html
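+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch; plain memory is used here for illustration, though
+ /// volatile reads are really intended for memory-mapped I/O:
+ ///
+ /// ```
+ /// let x = 1u32;
+ /// let ptr = &x as *const u32;
+ /// // The read itself will not be elided or reordered across
+ /// // other volatile operations.
+ /// unsafe { assert_eq!(ptr.read_volatile(), 1); }
+ /// ```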
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn read_volatile(self) -> T
+ where T: Sized,
+ {
+ read_volatile(self)
+ }
+
+ /// Reads the value from `self` without moving it. This leaves the
+ /// memory in `self` unchanged.
+ ///
+ /// Unlike `read`, the pointer may be unaligned.
+ ///
+ /// See [`ptr::read_unaligned`] for safety concerns and examples.
+ ///
+ /// [`ptr::read_unaligned`]: ./ptr/fn.read_unaligned.html
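+ ///
+ /// # Examples
+ ///
+ /// A sketch reading a `u32` from an odd byte offset into a buffer, where the
+ /// pointer may well be misaligned:
+ ///
+ /// ```
+ /// let data = [0u8; 5];
+ /// // Bytes 1..5 of `data`; this address need not be 4-byte aligned,
+ /// // so `read` could be UB here, but `read_unaligned` is allowed.
+ /// let ptr = data[1..].as_ptr() as *const u32;
+ /// unsafe { assert_eq!(ptr.read_unaligned(), 0); }
+ /// ```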
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn read_unaligned(self) -> T
+ where T: Sized,
+ {
+ read_unaligned(self)
+ }
+
+ /// Copies `count * size_of::<T>()` bytes from `self` to `dest`. The source
+ /// and destination may overlap.
+ ///
+ /// NOTE: this has the *same* argument order as [`ptr::copy`].
+ ///
+ /// See [`ptr::copy`] for safety concerns and examples.
+ ///
+ /// [`ptr::copy`]: ./ptr/fn.copy.html
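+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch copying three elements into a separate buffer:
+ ///
+ /// ```
+ /// let src = [1u8, 2, 3];
+ /// let mut dst = [0u8; 3];
+ /// // Equivalent to `ptr::copy(src.as_ptr(), dst.as_mut_ptr(), 3)`.
+ /// unsafe { src.as_ptr().copy_to(dst.as_mut_ptr(), 3); }
+ /// assert_eq!(dst, [1, 2, 3]);
+ /// ```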
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn copy_to(self, dest: *mut T, count: usize)
+ where T: Sized,
+ {
+ copy(self, dest, count)
+ }
+
+ /// Copies `count * size_of::<T>()` bytes from `self` to `dest`. The source
+ /// and destination may *not* overlap.
+ ///
+ /// NOTE: this has the *same* argument order as [`ptr::copy_nonoverlapping`].
+ ///
+ /// See [`ptr::copy_nonoverlapping`] for safety concerns and examples.
+ ///
+ /// [`ptr::copy_nonoverlapping`]: ./ptr/fn.copy_nonoverlapping.html
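+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch; the two buffers are distinct locals, so the
+ /// non-overlap requirement holds:
+ ///
+ /// ```
+ /// let src = [1u8, 2, 3];
+ /// let mut dst = [0u8; 3];
+ /// unsafe { src.as_ptr().copy_to_nonoverlapping(dst.as_mut_ptr(), 3); }
+ /// assert_eq!(dst, [1, 2, 3]);
+ /// ```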
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)
+ where T: Sized,
+ {
+ copy_nonoverlapping(self, dest, count)
+ }
+
+ /// Computes the offset that needs to be applied to the pointer in order to make it aligned to
+ /// `align`.
+ ///
+ /// If it is not possible to align the pointer, the implementation returns
+ /// `usize::max_value()`.
+ ///
+ /// The offset is expressed in number of `T` elements, not bytes. The value returned can be
+ /// used with the `offset` or `offset_to` methods.
+ ///
+ /// There are no guarantees whatsoever that offsetting the pointer will not overflow or go
+ /// beyond the allocation that the pointer points into. It is up to the caller to ensure that
+ /// the returned offset is correct in all terms other than alignment.
+ ///
+ /// # Panics
+ ///
+ /// The function panics if `align` is not a power-of-two.
+ ///
+ /// # Examples
+ ///
+ /// Accessing adjacent `u8` as `u16`
+ ///
+ /// ```
+ /// # fn foo(n: usize) {
+ /// # use std::mem::align_of;
+ /// # unsafe {
+ /// let x = [5u8, 6u8, 7u8, 8u8, 9u8];
+ /// let ptr = &x[n] as *const u8;
+ /// let offset = ptr.align_offset(align_of::<u16>());
+ /// if offset < x.len() - n - 1 {
+ /// let u16_ptr = ptr.add(offset) as *const u16;
+ /// assert_ne!(*u16_ptr, 500);
+ /// } else {
+ /// // while the pointer can be aligned via `offset`, it would point
+ /// // outside the allocation
+ /// }
+ /// # } }
+ /// ```
+ #[stable(feature = "align_offset", since = "1.36.0")]
+ pub fn align_offset(self, align: usize) -> usize where T: Sized {
+ if !align.is_power_of_two() {
+ panic!("align_offset: align is not a power-of-two");
+ }
+ unsafe {
+ align_offset(self, align)
+ }
+ }
+}
+
+
+#[lang = "mut_ptr"]
+impl<T: ?Sized> *mut T {
+ /// Returns `true` if the pointer is null.
+ ///
+ /// Note that unsized types have many possible null pointers, as only the
+ /// raw data pointer is considered, not their length, vtable, etc.
+ /// Therefore, two pointers that are null may still not compare equal to
+ /// each other.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let mut s = [1, 2, 3];
+ /// let ptr: *mut u32 = s.as_mut_ptr();
+ /// assert!(!ptr.is_null());
+ /// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[inline]
+ pub fn is_null(self) -> bool {
+ // Compare via a cast to a thin pointer, so fat pointers are only
+ // compared by their "data" part for null-ness.
+ (self as *mut u8) == null_mut()
+ }
+
+ /// Cast to a pointer to a different type
+ #[unstable(feature = "ptr_cast", issue = "60602")]
+ #[inline]
+ pub const fn cast<U>(self) -> *mut U {
+ self as _
+ }
+
+ /// Returns `None` if the pointer is null, or else returns a reference to
+ /// the value wrapped in `Some`.
+ ///
+ /// # Safety
+ ///
+ /// While this method and its mutable counterpart are useful for
+ /// null-safety, it is important to note that this is still an unsafe
+ /// operation because the returned value could be pointing to invalid
+ /// memory.
+ ///
+ /// Additionally, the lifetime `'a` returned is arbitrarily chosen and does
+ /// not necessarily reflect the actual lifetime of the data.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let ptr: *mut u8 = &mut 10u8 as *mut u8;
+ ///
+ /// unsafe {
+ /// if let Some(val_back) = ptr.as_ref() {
+ /// println!("We got back the value: {}!", val_back);
+ /// }
+ /// }
+ /// ```
+ ///
+ /// # Null-unchecked version
+ ///
+ /// If you are sure the pointer can never be null and are looking for some kind of
+ /// `as_ref_unchecked` that returns the `&T` instead of `Option<&T>`, know that you can
+ /// dereference the pointer directly.
+ ///
+ /// ```
+ /// let ptr: *mut u8 = &mut 10u8 as *mut u8;
+ ///
+ /// unsafe {
+ /// let val_back = &*ptr;
+ /// println!("We got back the value: {}!", val_back);
+ /// }
+ /// ```
+ #[stable(feature = "ptr_as_ref", since = "1.9.0")]
+ #[inline]
+ pub unsafe fn as_ref<'a>(self) -> Option<&'a T> {
+ if self.is_null() {
+ None
+ } else {
+ Some(&*self)
+ }
+ }
+
+ /// Calculates the offset from a pointer.
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and resulting pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The computed offset, **in bytes**, cannot overflow an `isize`.
+ ///
+ /// * The offset being in bounds cannot rely on "wrapping around" the address
+ /// space. That is, the infinite-precision sum, **in bytes**, must fit in a `usize`.
+ ///
+ /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `vec.as_ptr().add(vec.len())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using `wrapping_offset` instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let mut s = [1, 2, 3];
+ /// let ptr: *mut u32 = s.as_mut_ptr();
+ ///
+ /// unsafe {
+ /// println!("{}", *ptr.offset(1));
+ /// println!("{}", *ptr.offset(2));
+ /// }
+ /// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[inline]
+ pub unsafe fn offset(self, count: isize) -> *mut T where T: Sized {
+ intrinsics::offset(self, count) as *mut T
+ }
+
+ /// Calculates the offset from a pointer using wrapping arithmetic.
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// The resulting pointer does not need to be in bounds, but it is
+ /// potentially hazardous to dereference (which requires `unsafe`).
+ /// In particular, the resulting pointer may *not* be used to access a
+ /// different allocated object than the one `self` points to. In other
+ /// words, `x.wrapping_offset(y.wrapping_offset_from(x))` is
+ /// *not* the same as `y`, and dereferencing it is undefined behavior
+ /// unless `x` and `y` point into the same allocated object.
+ ///
+ /// Always use `.offset(count)` instead when possible, because `offset`
+ /// allows the compiler to optimize better. If you need to cross object
+ /// boundaries, cast the pointer to an integer and do the arithmetic there.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// // Iterate using a raw pointer in increments of two elements
+ /// let mut data = [1u8, 2, 3, 4, 5];
+ /// let mut ptr: *mut u8 = data.as_mut_ptr();
+ /// let step = 2;
+ /// let end_rounded_up = ptr.wrapping_offset(6);
+ ///
+ /// while ptr != end_rounded_up {
+ /// unsafe {
+ /// *ptr = 0;
+ /// }
+ /// ptr = ptr.wrapping_offset(step);
+ /// }
+ /// assert_eq!(&data, &[0, 2, 0, 4, 0]);
+ /// ```
+ #[stable(feature = "ptr_wrapping_offset", since = "1.16.0")]
+ #[inline]
+ pub fn wrapping_offset(self, count: isize) -> *mut T where T: Sized {
+ unsafe {
+ intrinsics::arith_offset(self, count) as *mut T
+ }
+ }
+
+ /// Returns `None` if the pointer is null, or else returns a mutable
+ /// reference to the value wrapped in `Some`.
+ ///
+ /// # Safety
+ ///
+ /// As with `as_ref`, this is unsafe because it cannot verify the validity
+ /// of the returned pointer, nor can it ensure that the lifetime `'a`
+ /// returned is indeed a valid lifetime for the contained data.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let mut s = [1, 2, 3];
+ /// let ptr: *mut u32 = s.as_mut_ptr();
+ /// let first_value = unsafe { ptr.as_mut().unwrap() };
+ /// *first_value = 4;
+ /// println!("{:?}", s); // It'll print: "[4, 2, 3]".
+ /// ```
+ #[stable(feature = "ptr_as_ref", since = "1.9.0")]
+ #[inline]
+ pub unsafe fn as_mut<'a>(self) -> Option<&'a mut T> {
+ if self.is_null() {
+ None
+ } else {
+ Some(&mut *self)
+ }
+ }
+
+ /// Calculates the distance between two pointers. The returned value is in
+ /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
+ ///
+ /// This function is the inverse of [`offset`].
+ ///
+ /// [`offset`]: #method.offset-1
+ /// [`wrapping_offset_from`]: #method.wrapping_offset_from-1
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and other pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The distance between the pointers, **in bytes**, cannot overflow an `isize`.
+ ///
+ /// * The distance between the pointers, in bytes, must be an exact multiple
+ /// of the size of `T`.
+ ///
+ /// * The distance being in bounds cannot rely on "wrapping around" the address space.
+ ///
+ /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `ptr_into_vec.offset_from(vec.as_ptr())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using [`wrapping_offset_from`] instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Panics
+ ///
+ /// This function panics if `T` is a Zero-Sized Type ("ZST").
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// #![feature(ptr_offset_from)]
+ ///
+ /// let mut a = [0; 5];
+ /// let ptr1: *mut i32 = &mut a[1];
+ /// let ptr2: *mut i32 = &mut a[3];
+ /// unsafe {
+ /// assert_eq!(ptr2.offset_from(ptr1), 2);
+ /// assert_eq!(ptr1.offset_from(ptr2), -2);
+ /// assert_eq!(ptr1.offset(2), ptr2);
+ /// assert_eq!(ptr2.offset(-2), ptr1);
+ /// }
+ /// ```
+ #[unstable(feature = "ptr_offset_from", issue = "41079")]
+ #[inline]
+ pub unsafe fn offset_from(self, origin: *const T) -> isize where T: Sized {
+ (self as *const T).offset_from(origin)
+ }
+
+ /// Calculates the distance between two pointers. The returned value is in
+ /// units of T: the distance in bytes is divided by `mem::size_of::<T>()`.
+ ///
+ /// If the address difference between the two pointers is not a multiple of
+ /// `mem::size_of::<T>()` then the result of the division is rounded towards
+ /// zero.
+ ///
+ /// Though this method is safe for any two pointers, note that its result
+ /// will be mostly useless if the two pointers aren't into the same allocated
+ /// object, for example if they point to two different local variables.
+ ///
+ /// # Panics
+ ///
+ /// This function panics if `T` is a zero-sized type.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// #![feature(ptr_wrapping_offset_from)]
+ ///
+ /// let mut a = [0; 5];
+ /// let ptr1: *mut i32 = &mut a[1];
+ /// let ptr2: *mut i32 = &mut a[3];
+ /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
+ /// assert_eq!(ptr1.wrapping_offset_from(ptr2), -2);
+ /// assert_eq!(ptr1.wrapping_offset(2), ptr2);
+ /// assert_eq!(ptr2.wrapping_offset(-2), ptr1);
+ ///
+ /// let ptr1: *mut i32 = 3 as _;
+ /// let ptr2: *mut i32 = 13 as _;
+ /// assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
+ /// ```
+ #[unstable(feature = "ptr_wrapping_offset_from", issue = "41079")]
+ #[inline]
+ pub fn wrapping_offset_from(self, origin: *const T) -> isize where T: Sized {
+ (self as *const T).wrapping_offset_from(origin)
+ }
+
+ /// Calculates the offset from a pointer (convenience for `.offset(count as isize)`).
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and resulting pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The computed offset, **in bytes**, cannot overflow an `isize`.
+ ///
+ /// * The offset being in bounds cannot rely on "wrapping around" the address
+ /// space. That is, the infinite-precision sum must fit in a `usize`.
+ ///
+ /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `vec.as_ptr().add(vec.len())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using `wrapping_offset` instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let s: &str = "123";
+ /// let ptr: *const u8 = s.as_ptr();
+ ///
+ /// unsafe {
+ /// println!("{}", *ptr.add(1) as char);
+ /// println!("{}", *ptr.add(2) as char);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn add(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.offset(count as isize)
+ }
+
+ /// Calculates the offset from a pointer (convenience for
+ /// `.offset((count as isize).wrapping_neg())`).
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// If any of the following conditions are violated, the result is Undefined
+ /// Behavior:
+ ///
+ /// * Both the starting and resulting pointer must be either in bounds or one
+ /// byte past the end of the same allocated object.
+ ///
+ /// * The computed offset cannot exceed `isize::MAX` **bytes**.
+ ///
+ /// * The offset being in bounds cannot rely on "wrapping around" the address
+ /// space. That is, the infinite-precision sum must fit in a `usize`.
+ ///
+ /// The compiler and standard library generally try to ensure allocations
+ /// never reach a size where an offset is a concern. For instance, `Vec`
+ /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
+ /// `vec.as_ptr().add(vec.len()).sub(vec.len())` is always safe.
+ ///
+ /// Most platforms fundamentally can't even construct such an allocation.
+ /// For instance, no known 64-bit platform can ever serve a request
+ /// for 2<sup>63</sup> bytes due to page-table limitations or splitting the address space.
+ /// However, some 32-bit and 16-bit platforms may successfully serve a request for
+ /// more than `isize::MAX` bytes with things like Physical Address
+ /// Extension. As such, memory acquired directly from allocators or memory
+ /// mapped files *may* be too large to handle with this function.
+ ///
+ /// Consider using `wrapping_offset` instead if these constraints are
+ /// difficult to satisfy. The only advantage of this method is that it
+ /// enables more aggressive compiler optimizations.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// let s: &str = "123";
+ ///
+ /// unsafe {
+ /// let end: *const u8 = s.as_ptr().add(3);
+ /// println!("{}", *end.sub(1) as char);
+ /// println!("{}", *end.sub(2) as char);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn sub(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.offset((count as isize).wrapping_neg())
+ }
+
+ /// Calculates the offset from a pointer using wrapping arithmetic.
+ /// (convenience for `.wrapping_offset(count as isize)`)
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// The resulting pointer does not need to be in bounds, but it is
+ /// potentially hazardous to dereference (which requires `unsafe`).
+ ///
+ /// Always use `.add(count)` instead when possible, because `add`
+ /// allows the compiler to optimize better.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// // Iterate using a raw pointer in increments of two elements
+ /// let data = [1u8, 2, 3, 4, 5];
+ /// let mut ptr: *const u8 = data.as_ptr();
+ /// let step = 2;
+ /// let end_rounded_up = ptr.wrapping_add(6);
+ ///
+ /// // This loop prints "1, 3, 5, "
+ /// while ptr != end_rounded_up {
+ /// unsafe {
+ /// print!("{}, ", *ptr);
+ /// }
+ /// ptr = ptr.wrapping_add(step);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub fn wrapping_add(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.wrapping_offset(count as isize)
+ }
+
+ /// Calculates the offset from a pointer using wrapping arithmetic.
+ /// (convenience for `.wrapping_offset((count as isize).wrapping_neg())`)
+ ///
+ /// `count` is in units of T; e.g., a `count` of 3 represents a pointer
+ /// offset of `3 * size_of::<T>()` bytes.
+ ///
+ /// # Safety
+ ///
+ /// The resulting pointer does not need to be in bounds, but it is
+ /// potentially hazardous to dereference (which requires `unsafe`).
+ ///
+ /// Always use `.sub(count)` instead when possible, because `sub`
+ /// allows the compiler to optimize better.
+ ///
+ /// # Examples
+ ///
+ /// Basic usage:
+ ///
+ /// ```
+ /// // Iterate using a raw pointer in increments of two elements (backwards)
+ /// let data = [1u8, 2, 3, 4, 5];
+ /// let mut ptr: *const u8 = data.as_ptr();
+ /// let start_rounded_down = ptr.wrapping_sub(2);
+ /// ptr = ptr.wrapping_add(4);
+ /// let step = 2;
+ /// // This loop prints "5, 3, 1, "
+ /// while ptr != start_rounded_down {
+ /// unsafe {
+ /// print!("{}, ", *ptr);
+ /// }
+ /// ptr = ptr.wrapping_sub(step);
+ /// }
+ /// ```
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub fn wrapping_sub(self, count: usize) -> Self
+ where T: Sized,
+ {
+ self.wrapping_offset((count as isize).wrapping_neg())
+ }
+
+ /// Reads the value from `self` without moving it. This leaves the
+ /// memory in `self` unchanged.
+ ///
+ /// See [`ptr::read`] for safety concerns and examples.
+ ///
+ /// [`ptr::read`]: ./ptr/fn.read.html
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn read(self) -> T
+ where T: Sized,
+ {
+ read(self)
+ }
+
+ /// Performs a volatile read of the value from `self` without moving it. This
+ /// leaves the memory in `self` unchanged.
+ ///
+ /// Volatile operations are intended to act on I/O memory, and are guaranteed
+ /// to not be elided or reordered by the compiler across other volatile
+ /// operations.
+ ///
+ /// See [`ptr::read_volatile`] for safety concerns and examples.
+ ///
+ /// [`ptr::read_volatile`]: ./ptr/fn.read_volatile.html
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn read_volatile(self) -> T
+ where T: Sized,
+ {
+ read_volatile(self)
+ }
+
+ /// Reads the value from `self` without moving it. This leaves the
+ /// memory in `self` unchanged.
+ ///
+ /// Unlike `read`, the pointer may be unaligned.
+ ///
+ /// See [`ptr::read_unaligned`] for safety concerns and examples.
+ ///
+ /// [`ptr::read_unaligned`]: ./ptr/fn.read_unaligned.html
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn read_unaligned(self) -> T
+ where T: Sized,
+ {
+ read_unaligned(self)
+ }
+
+ /// Copies `count * size_of::<T>()` bytes from `self` to `dest`. The source
+ /// and destination may overlap.
+ ///
+ /// NOTE: this has the *same* argument order as [`ptr::copy`].
+ ///
+ /// See [`ptr::copy`] for safety concerns and examples.
+ ///
+ /// [`ptr::copy`]: ./ptr/fn.copy.html
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn copy_to(self, dest: *mut T, count: usize)
+ where T: Sized,
+ {
+ copy(self, dest, count)
+ }
+
+ /// Copies `count * size_of::<T>()` bytes from `self` to `dest`. The source
+ /// and destination may *not* overlap.
+ ///
+ /// NOTE: this has the *same* argument order as [`ptr::copy_nonoverlapping`].
+ ///
+ /// See [`ptr::copy_nonoverlapping`] for safety concerns and examples.
+ ///
+ /// [`ptr::copy_nonoverlapping`]: ./ptr/fn.copy_nonoverlapping.html
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)
+ where T: Sized,
+ {
+ copy_nonoverlapping(self, dest, count)
+ }
+
+ /// Copies `count * size_of::<T>()` bytes from `src` to `self`. The source
+ /// and destination may overlap.
+ ///
+ /// NOTE: this has the *opposite* argument order of [`ptr::copy`].
+ ///
+ /// See [`ptr::copy`] for safety concerns and examples.
+ ///
+ /// [`ptr::copy`]: ./ptr/fn.copy.html
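+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch; note that the destination is `self`:
+ ///
+ /// ```
+ /// let src = [1u8, 2, 3];
+ /// let mut dst = [0u8; 3];
+ /// unsafe { dst.as_mut_ptr().copy_from(src.as_ptr(), 3); }
+ /// assert_eq!(dst, [1, 2, 3]);
+ /// ```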
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn copy_from(self, src: *const T, count: usize)
+ where T: Sized,
+ {
+ copy(src, self, count)
+ }
+
+ /// Copies `count * size_of::<T>()` bytes from `src` to `self`. The source
+ /// and destination may *not* overlap.
+ ///
+ /// NOTE: this has the *opposite* argument order of [`ptr::copy_nonoverlapping`].
+ ///
+ /// See [`ptr::copy_nonoverlapping`] for safety concerns and examples.
+ ///
+ /// [`ptr::copy_nonoverlapping`]: ./ptr/fn.copy_nonoverlapping.html
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn copy_from_nonoverlapping(self, src: *const T, count: usize)
+ where T: Sized,
+ {
+ copy_nonoverlapping(src, self, count)
+ }
+
+ /// Executes the destructor (if any) of the pointed-to value.
+ ///
+ /// See [`ptr::drop_in_place`] for safety concerns and examples.
+ ///
+ /// [`ptr::drop_in_place`]: ./ptr/fn.drop_in_place.html
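+ ///
+ /// # Examples
+ ///
+ /// A sketch dropping a `String` in place; `ManuallyDrop` prevents a double
+ /// drop when the local goes out of scope:
+ ///
+ /// ```
+ /// use std::mem::ManuallyDrop;
+ ///
+ /// let mut s = ManuallyDrop::new(String::from("hello"));
+ /// let ptr: *mut String = &mut *s;
+ /// // Runs `String`'s destructor; `s` must not be used afterwards.
+ /// unsafe { ptr.drop_in_place(); }
+ /// ```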
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn drop_in_place(self) {
+ drop_in_place(self)
+ }
+
+ /// Overwrites a memory location with the given value without reading or
+ /// dropping the old value.
+ ///
+ /// See [`ptr::write`] for safety concerns and examples.
+ ///
+ /// [`ptr::write`]: ./ptr/fn.write.html
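+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch:
+ ///
+ /// ```
+ /// let mut x = 0u32;
+ /// let ptr = &mut x as *mut u32;
+ /// unsafe { ptr.write(5); }
+ /// assert_eq!(x, 5);
+ /// ```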
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn write(self, val: T)
+ where T: Sized,
+ {
+ write(self, val)
+ }
+
+ /// Invokes memset on the specified pointer, setting `count * size_of::<T>()`
+ /// bytes of memory starting at `self` to `val`.
+ ///
+ /// See [`ptr::write_bytes`] for safety concerns and examples.
+ ///
+ /// [`ptr::write_bytes`]: ./ptr/fn.write_bytes.html
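+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch zeroing a small buffer:
+ ///
+ /// ```
+ /// let mut buf = [1u8; 4];
+ /// // Sets all 4 bytes to 0, like `memset`.
+ /// unsafe { buf.as_mut_ptr().write_bytes(0, 4); }
+ /// assert_eq!(buf, [0; 4]);
+ /// ```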
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn write_bytes(self, val: u8, count: usize)
+ where T: Sized,
+ {
+ write_bytes(self, val, count)
+ }
+
+ /// Performs a volatile write of a memory location with the given value without
+ /// reading or dropping the old value.
+ ///
+ /// Volatile operations are intended to act on I/O memory, and are guaranteed
+ /// to not be elided or reordered by the compiler across other volatile
+ /// operations.
+ ///
+ /// See [`ptr::write_volatile`] for safety concerns and examples.
+ ///
+ /// [`ptr::write_volatile`]: ./ptr/fn.write_volatile.html
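+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch; plain memory is used here for illustration, though
+ /// volatile writes are really intended for memory-mapped I/O:
+ ///
+ /// ```
+ /// let mut x = 0u32;
+ /// let ptr = &mut x as *mut u32;
+ /// unsafe { ptr.write_volatile(7); }
+ /// assert_eq!(x, 7);
+ /// ```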
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn write_volatile(self, val: T)
+ where T: Sized,
+ {
+ write_volatile(self, val)
+ }
+
+ /// Overwrites a memory location with the given value without reading or
+ /// dropping the old value.
+ ///
+ /// Unlike `write`, the pointer may be unaligned.
+ ///
+ /// See [`ptr::write_unaligned`] for safety concerns and examples.
+ ///
+ /// [`ptr::write_unaligned`]: ./ptr/fn.write_unaligned.html
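+ ///
+ /// # Examples
+ ///
+ /// A sketch writing a `u32` at an odd byte offset, where the pointer may
+ /// well be misaligned:
+ ///
+ /// ```
+ /// let mut data = [0u8; 5];
+ /// // This address need not be 4-byte aligned, so `write` could be UB
+ /// // here, but `write_unaligned` is allowed.
+ /// let ptr = unsafe { data.as_mut_ptr().add(1) } as *mut u32;
+ /// unsafe { ptr.write_unaligned(0x0101_0101); }
+ /// assert_eq!(data, [0, 1, 1, 1, 1]);
+ /// ```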
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn write_unaligned(self, val: T)
+ where T: Sized,
+ {
+ write_unaligned(self, val)
+ }
+
+ /// Replaces the value at `self` with `src`, returning the old
+ /// value, without dropping either.
+ ///
+ /// See [`ptr::replace`] for safety concerns and examples.
+ ///
+ /// [`ptr::replace`]: ./ptr/fn.replace.html
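+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch:
+ ///
+ /// ```
+ /// let mut x = 1u32;
+ /// let ptr = &mut x as *mut u32;
+ /// let old = unsafe { ptr.replace(2) };
+ /// assert_eq!(old, 1);
+ /// assert_eq!(x, 2);
+ /// ```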
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn replace(self, src: T) -> T
+ where T: Sized,
+ {
+ replace(self, src)
+ }
+
+ /// Swaps the values at two mutable locations of the same type, without
+ /// deinitializing either. They may overlap, unlike `mem::swap` which is
+ /// otherwise equivalent.
+ ///
+ /// See [`ptr::swap`] for safety concerns and examples.
+ ///
+ /// [`ptr::swap`]: ./ptr/fn.swap.html
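+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch with two distinct (non-overlapping) locals:
+ ///
+ /// ```
+ /// let mut a = 1u32;
+ /// let mut b = 2u32;
+ /// unsafe { (&mut a as *mut u32).swap(&mut b) };
+ /// assert_eq!((a, b), (2, 1));
+ /// ```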
+ #[stable(feature = "pointer_methods", since = "1.26.0")]
+ #[inline]
+ pub unsafe fn swap(self, with: *mut T)
+ where T: Sized,
+ {
+ swap(self, with)
+ }
+
+ /// Computes the offset that needs to be applied to the pointer in order to make it aligned to
+ /// `align`.
+ ///
+ /// If it is not possible to align the pointer, the implementation returns
+ /// `usize::max_value()`.
+ ///
+ /// The offset is expressed in number of `T` elements, not bytes. The value returned can be
+ /// used with the `offset` or `offset_to` methods.
+ ///
+ /// There are no guarantees whatsoever that offsetting the pointer will not overflow or go
+ /// beyond the allocation that the pointer points into. It is up to the caller to ensure that
+ /// the returned offset is correct in all terms other than alignment.
+ ///
+ /// # Panics
+ ///
+ /// The function panics if `align` is not a power-of-two.
+ ///
+ /// # Examples
+ ///
+ /// Accessing adjacent `u8` as `u16`
+ ///
+ /// ```
+ /// # fn foo(n: usize) {
+ /// # use std::mem::align_of;
+ /// # unsafe {
+ /// let x = [5u8, 6u8, 7u8, 8u8, 9u8];
+ /// let ptr = &x[n] as *const u8;
+ /// let offset = ptr.align_offset(align_of::<u16>());
+ /// if offset < x.len() - n - 1 {
+ /// let u16_ptr = ptr.add(offset) as *const u16;
+ /// assert_ne!(*u16_ptr, 500);
+ /// } else {
+ /// // while the pointer can be aligned via `offset`, it would point
+ /// // outside the allocation
+ /// }
+ /// # } }
+ /// ```
+ #[stable(feature = "align_offset", since = "1.36.0")]
+ pub fn align_offset(self, align: usize) -> usize where T: Sized {
+ if !align.is_power_of_two() {
+ panic!("align_offset: align is not a power-of-two");
+ }
+ unsafe {
+ align_offset(self, align)
+ }
+ }
+}
+
+/// Align pointer `p`.
+///
+/// Calculates the offset (in elements of size `stride`) that has to be applied
+/// to pointer `p` so that it becomes aligned to `a`.
+///
+/// Note: This implementation has been carefully tailored to not panic. It is UB for this to panic.
+/// The only real change that can be made here is change of `INV_TABLE_MOD_16` and associated
+/// constants.
+///
+/// If we ever decide to make it possible to call the intrinsic with `a` that is not a
+/// power-of-two, it will probably be more prudent to just change to a naive implementation rather
+/// than trying to adapt this to accommodate that change.
+///
+/// Any questions go to @nagisa.
+#[lang="align_offset"]
+pub(crate) unsafe fn align_offset<T: Sized>(p: *const T, a: usize) -> usize {
+ /// Calculate multiplicative modular inverse of `x` modulo `m`.
+ ///
+ /// This implementation is tailored for `align_offset` and has the following preconditions:
+ ///
+ /// * `m` is a power-of-two;
+ /// * `x < m`; (if `x ≥ m`, pass in `x % m` instead)
+ ///
+ /// Implementation of this function shall not panic. Ever.
+ #[inline]
+ fn mod_inv(x: usize, m: usize) -> usize {
+ /// Multiplicative modular inverse table modulo 2⁴ = 16.
+ ///
+ /// Note that this table does not contain values where the inverse does not exist (i.e., for
+ /// `0⁻¹ mod 16`, `2⁻¹ mod 16`, etc.)
+ const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
+ /// Modulo for which the `INV_TABLE_MOD_16` is intended.
+ const INV_TABLE_MOD: usize = 16;
+ /// INV_TABLE_MOD²
+ const INV_TABLE_MOD_SQUARED: usize = INV_TABLE_MOD * INV_TABLE_MOD;
+
+ let table_inverse = INV_TABLE_MOD_16[(x & (INV_TABLE_MOD - 1)) >> 1] as usize;
+ if m <= INV_TABLE_MOD {
+ table_inverse & (m - 1)
+ } else {
+ // We iterate "up" using the following formula:
+ //
+ // $$ xy ≡ 1 (mod 2ⁿ) → xy (2 - xy) ≡ 1 (mod 2²ⁿ) $$
+ //
+ // until 2²ⁿ ≥ m. Then we can reduce to our desired `m` by taking the result `mod m`.
+ let mut inverse = table_inverse;
+ let mut going_mod = INV_TABLE_MOD_SQUARED;
+ loop {
+ // y = y * (2 - xy) mod n
+ //
+ // Note that we use wrapping operations here intentionally – the original formula
+ // uses e.g., subtraction `mod n`. It is entirely fine to do them `mod
+ // usize::max_value()` instead, because we take the result `mod n` at the end
+ // anyway.
+ inverse = inverse.wrapping_mul(
+ 2usize.wrapping_sub(x.wrapping_mul(inverse))
+ ) & (going_mod - 1);
+ if going_mod > m {
+ return inverse & (m - 1);
+ }
+ going_mod = going_mod.wrapping_mul(going_mod);
+ }
+ }
+ }
+
+ let stride = mem::size_of::<T>();
+ let a_minus_one = a.wrapping_sub(1);
+ let pmoda = p as usize & a_minus_one;
+
+ if pmoda == 0 {
+ // Already aligned. Yay!
+ return 0;
+ }
+
+ if stride <= 1 {
+ return if stride == 0 {
+ // If the pointer is not aligned, and the element is zero-sized, then no amount of
+ // elements will ever align the pointer.
+ !0
+ } else {
+ a.wrapping_sub(pmoda)
+ };
+ }
+
+ let smoda = stride & a_minus_one;
+ // a is power-of-two so cannot be 0. stride = 0 is handled above.
+ let gcdpow = intrinsics::cttz_nonzero(stride).min(intrinsics::cttz_nonzero(a));
+ let gcd = 1usize << gcdpow;
+
+ if p as usize & (gcd - 1) == 0 {
+ // This branch solves for the following linear congruence equation:
+ //
+ // $$ p + so ≡ 0 mod a $$
+ //
+ // $p$ here is the pointer value, $s$ – stride of `T`, $o$ offset in `T`s, and $a$ – the
+ // requested alignment.
+ //
+ // g = gcd(a, s)
+ // o = (a - (p mod a))/g * ((s/g)⁻¹ mod a)
+ //
+ // The first term is “the relative alignment of p to a”, the second term is “how does
+ // incrementing p by s bytes change the relative alignment of p”. Division by `g` is
+ // necessary to make this equation well formed if $a$ and $s$ are not co-prime.
+ //
+ // Furthermore, the result produced by this solution is not “minimal”, so it is necessary
+ // to take the result $o mod lcm(s, a)$; measured in elements, that modulus is simply $a / g$.
+ let j = a.wrapping_sub(pmoda) >> gcdpow;
+ let k = smoda >> gcdpow;
+ return intrinsics::unchecked_rem(j.wrapping_mul(mod_inv(k, a)), a >> gcdpow);
+ }
+
+ // Cannot be aligned at all.
+ usize::max_value()
+}
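The doubling step in `mod_inv` is a classic Hensel-lifting trick: if `x * y ≡ 1 (mod 2ⁿ)`, then `x * y * (2 - x * y) ≡ 1 (mod 2²ⁿ)`, so each iteration doubles the number of correct low bits. A standalone sketch (the helper name is ours, not part of this patch) shows the same table-plus-lifting scheme:

```rust
/// Hypothetical standalone sketch of the lifting used by `mod_inv`:
/// start from an inverse modulo 16 read out of a small table, then square
/// the working modulus until it covers `m` (which must be a power of two,
/// and `x` must be odd so that an inverse exists).
fn mod_inv_sketch(x: usize, m: usize) -> usize {
    const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
    let mut inverse = INV_TABLE_MOD_16[(x & 15) >> 1] as usize;
    let mut going_mod = 256; // 16², the modulus after one lifting step
    loop {
        // y' = y * (2 - x*y): correct mod n becomes correct mod n².
        inverse = inverse.wrapping_mul(2usize.wrapping_sub(x.wrapping_mul(inverse)))
            & (going_mod - 1);
        if going_mod > m {
            return inverse & (m - 1);
        }
        going_mod = going_mod.wrapping_mul(going_mod);
    }
}

fn main() {
    // 3 * 171 = 513 ≡ 1 (mod 256), so 3⁻¹ mod 256 is 171.
    assert_eq!(mod_inv_sketch(3, 256), 171);
    // Every odd x has an inverse modulo a power of two.
    for x in (1..1024usize).step_by(2) {
        assert_eq!(x.wrapping_mul(mod_inv_sketch(x, 1024)) & 1023, 1);
    }
}
```

As in the real implementation, all arithmetic is wrapping so the function cannot panic; taking the result `mod m` at the end makes the intermediate wrap-around harmless.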
+
+
+
+// Equality for pointers
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> PartialEq for *const T {
+ #[inline]
+ fn eq(&self, other: &*const T) -> bool { *self == *other }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> Eq for *const T {}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> PartialEq for *mut T {
+ #[inline]
+ fn eq(&self, other: &*mut T) -> bool { *self == *other }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> Eq for *mut T {}
+
+/// Compares raw pointers for equality.
+///
+/// This is the same as using the `==` operator, but less generic:
+/// the arguments have to be `*const T` raw pointers,
+/// not anything that implements `PartialEq`.
+///
+/// This can be used to compare `&T` references (which coerce to `*const T` implicitly)
+/// by their address rather than comparing the values they point to
+/// (which is what the `PartialEq for &T` implementation does).
+///
+/// # Examples
+///
+/// ```
+/// use std::ptr;
+///
+/// let five = 5;
+/// let other_five = 5;
+/// let five_ref = &five;
+/// let same_five_ref = &five;
+/// let other_five_ref = &other_five;
+///
+/// assert!(five_ref == same_five_ref);
+/// assert!(ptr::eq(five_ref, same_five_ref));
+///
+/// assert!(five_ref == other_five_ref);
+/// assert!(!ptr::eq(five_ref, other_five_ref));
+/// ```
+///
+/// Slices are also compared by their length (fat pointers):
+///
+/// ```
+/// let a = [1, 2, 3];
+/// assert!(std::ptr::eq(&a[..3], &a[..3]));
+/// assert!(!std::ptr::eq(&a[..2], &a[..3]));
+/// assert!(!std::ptr::eq(&a[0..2], &a[1..3]));
+/// ```
+///
+/// Fat pointers to trait objects are compared by both address and vtable:
+///
+/// ```
+/// #[repr(transparent)]
+/// struct Wrapper { member: i32 }
+///
+/// trait Trait {}
+/// impl Trait for Wrapper {}
+/// impl Trait for i32 {}
+///
+/// fn main() {
+/// let wrapper = Wrapper { member: 10 };
+///
+/// // Pointers have equal addresses.
+/// assert!(std::ptr::eq(
+/// &wrapper as *const Wrapper as *const u8,
+/// &wrapper.member as *const i32 as *const u8
+/// ));
+///
+/// // Objects have equal addresses, but `Trait` has different implementations.
+/// assert!(!std::ptr::eq(
+/// &wrapper as &dyn Trait,
+/// &wrapper.member as &dyn Trait,
+/// ));
+/// assert!(!std::ptr::eq(
+/// &wrapper as &dyn Trait as *const dyn Trait,
+/// &wrapper.member as &dyn Trait as *const dyn Trait,
+/// ));
+///
+/// // Converting the reference to a `*const u8` compares by address.
+/// assert!(std::ptr::eq(
+/// &wrapper as &dyn Trait as *const dyn Trait as *const u8,
+/// &wrapper.member as &dyn Trait as *const dyn Trait as *const u8,
+/// ));
+/// }
+/// ```
+#[stable(feature = "ptr_eq", since = "1.17.0")]
+#[inline]
+pub fn eq<T: ?Sized>(a: *const T, b: *const T) -> bool {
+ a == b
+}
+
+/// Hash a raw pointer.
+///
+/// This can be used to hash a `&T` reference (which coerces to `*const T` implicitly)
+/// by its address rather than the value it points to
+/// (which is what the `Hash for &T` implementation does).
+///
+/// # Examples
+///
+/// ```
+/// use std::collections::hash_map::DefaultHasher;
+/// use std::hash::{Hash, Hasher};
+/// use std::ptr;
+///
+/// let five = 5;
+/// let five_ref = &five;
+///
+/// let mut hasher = DefaultHasher::new();
+/// ptr::hash(five_ref, &mut hasher);
+/// let actual = hasher.finish();
+///
+/// let mut hasher = DefaultHasher::new();
+/// (five_ref as *const i32).hash(&mut hasher);
+/// let expected = hasher.finish();
+///
+/// assert_eq!(actual, expected);
+/// ```
+#[stable(feature = "ptr_hash", since = "1.35.0")]
+pub fn hash<T: ?Sized, S: hash::Hasher>(hashee: *const T, into: &mut S) {
+ use crate::hash::Hash;
+ hashee.hash(into);
+}
+
+// Impls for function pointers
+macro_rules! fnptr_impls_safety_abi {
+ ($FnTy: ty, $($Arg: ident),*) => {
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> PartialEq for $FnTy {
+ #[inline]
+ fn eq(&self, other: &Self) -> bool {
+ *self as usize == *other as usize
+ }
+ }
+
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> Eq for $FnTy {}
+
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> PartialOrd for $FnTy {
+ #[inline]
+ fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
+ (*self as usize).partial_cmp(&(*other as usize))
+ }
+ }
+
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> Ord for $FnTy {
+ #[inline]
+ fn cmp(&self, other: &Self) -> Ordering {
+ (*self as usize).cmp(&(*other as usize))
+ }
+ }
+
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> hash::Hash for $FnTy {
+ fn hash<HH: hash::Hasher>(&self, state: &mut HH) {
+ state.write_usize(*self as usize)
+ }
+ }
+
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> fmt::Pointer for $FnTy {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Pointer::fmt(&(*self as *const ()), f)
+ }
+ }
+
+ #[stable(feature = "fnptr_impls", since = "1.4.0")]
+ impl<Ret, $($Arg),*> fmt::Debug for $FnTy {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Pointer::fmt(&(*self as *const ()), f)
+ }
+ }
+ }
+}
+
+macro_rules! fnptr_impls_args {
+ ($($Arg: ident),+) => {
+ fnptr_impls_safety_abi! { extern "Rust" fn($($Arg),*) -> Ret, $($Arg),* }
+ fnptr_impls_safety_abi! { extern "C" fn($($Arg),*) -> Ret, $($Arg),* }
+ fnptr_impls_safety_abi! { extern "C" fn($($Arg),* , ...) -> Ret, $($Arg),* }
+ fnptr_impls_safety_abi! { unsafe extern "Rust" fn($($Arg),*) -> Ret, $($Arg),* }
+ fnptr_impls_safety_abi! { unsafe extern "C" fn($($Arg),*) -> Ret, $($Arg),* }
+ fnptr_impls_safety_abi! { unsafe extern "C" fn($($Arg),* , ...) -> Ret, $($Arg),* }
+ };
+ () => {
+ // No variadic functions with 0 parameters
+ fnptr_impls_safety_abi! { extern "Rust" fn() -> Ret, }
+ fnptr_impls_safety_abi! { extern "C" fn() -> Ret, }
+ fnptr_impls_safety_abi! { unsafe extern "Rust" fn() -> Ret, }
+ fnptr_impls_safety_abi! { unsafe extern "C" fn() -> Ret, }
+ };
+}
+
+fnptr_impls_args! { }
+fnptr_impls_args! { A }
+fnptr_impls_args! { A, B }
+fnptr_impls_args! { A, B, C }
+fnptr_impls_args! { A, B, C, D }
+fnptr_impls_args! { A, B, C, D, E }
+fnptr_impls_args! { A, B, C, D, E, F }
+fnptr_impls_args! { A, B, C, D, E, F, G }
+fnptr_impls_args! { A, B, C, D, E, F, G, H }
+fnptr_impls_args! { A, B, C, D, E, F, G, H, I }
+fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J }
+fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J, K }
+fnptr_impls_args! { A, B, C, D, E, F, G, H, I, J, K, L }
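The two macros above stamp out `PartialEq`/`Eq`/`PartialOrd`/`Ord`/`Hash`/`fmt::Pointer`/`fmt::Debug` for `fn` pointers of up to twelve arguments, across the `"Rust"` and `"C"` ABIs (plus C-variadic forms). A quick sketch of what that buys callers — noting that the language does not strictly guarantee a unique address per function, though in practice two coercions of the same item compare equal:

```rust
use std::collections::HashSet;

fn double(x: i32) -> i32 { x * 2 }
fn triple(x: i32) -> i32 { x * 3 }

fn main() {
    let f: fn(i32) -> i32 = double;
    let g: fn(i32) -> i32 = double;
    let h: fn(i32) -> i32 = triple;

    // `PartialEq`/`Eq`: compared by code address.
    assert_eq!(f, g);
    assert_ne!(f, h);

    // `Hash` + `Eq` let fn pointers key collections directly.
    let mut set = HashSet::new();
    set.insert(f);
    set.insert(g); // same address as `f`, so not inserted again
    set.insert(h);
    assert_eq!(set.len(), 2);

    // `fmt::Pointer` (and `fmt::Debug`) print the code address.
    assert!(format!("{:p}", f).starts_with("0x"));
}
```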
+
+// Comparison for pointers
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> Ord for *const T {
+ #[inline]
+ fn cmp(&self, other: &*const T) -> Ordering {
+ if self < other {
+ Less
+ } else if self == other {
+ Equal
+ } else {
+ Greater
+ }
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> PartialOrd for *const T {
+ #[inline]
+ fn partial_cmp(&self, other: &*const T) -> Option<Ordering> {
+ Some(self.cmp(other))
+ }
+
+ #[inline]
+ fn lt(&self, other: &*const T) -> bool { *self < *other }
+
+ #[inline]
+ fn le(&self, other: &*const T) -> bool { *self <= *other }
+
+ #[inline]
+ fn gt(&self, other: &*const T) -> bool { *self > *other }
+
+ #[inline]
+ fn ge(&self, other: &*const T) -> bool { *self >= *other }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> Ord for *mut T {
+ #[inline]
+ fn cmp(&self, other: &*mut T) -> Ordering {
+ if self < other {
+ Less
+ } else if self == other {
+ Equal
+ } else {
+ Greater
+ }
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: ?Sized> PartialOrd for *mut T {
+ #[inline]
+ fn partial_cmp(&self, other: &*mut T) -> Option<Ordering> {
+ Some(self.cmp(other))
+ }
+
+ #[inline]
+ fn lt(&self, other: &*mut T) -> bool { *self < *other }
+
+ #[inline]
+ fn le(&self, other: &*mut T) -> bool { *self <= *other }
+
+ #[inline]
+ fn gt(&self, other: &*mut T) -> bool { *self > *other }
+
+ #[inline]
+ fn ge(&self, other: &*mut T) -> bool { *self >= *other }
+}
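The `Ord`/`PartialOrd` impls above give raw pointers a plain total order by address, so they can be sorted or binary-searched directly. A small sketch: within a single array, address order matches element order, so sorting pointers to its elements restores in-array order.

```rust
fn main() {
    let data = [30i32, 10, 20];

    // Raw pointers to the elements, deliberately out of address order.
    let mut ptrs: Vec<*const i32> = vec![
        &data[2] as *const i32,
        &data[0] as *const i32,
        &data[1] as *const i32,
    ];

    // `Ord for *const T` compares addresses, so this restores in-array order.
    ptrs.sort();

    // Safe to dereference: `data` outlives the pointers.
    let values: Vec<i32> = ptrs.iter().map(|&p| unsafe { *p }).collect();
    assert_eq!(values, vec![30, 10, 20]);
}
```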
--- /dev/null
+use crate::convert::From;
+use crate::ops::{CoerceUnsized, DispatchFromDyn};
+use crate::fmt;
+use crate::hash;
+use crate::marker::Unsize;
+use crate::mem;
+use crate::ptr::Unique;
+use crate::cmp::Ordering;
+
+/// `*mut T` but non-zero and covariant.
+///
+/// This is often the correct thing to use when building data structures using
+/// raw pointers, but is ultimately more dangerous to use because of its additional
+/// properties. If you're not sure if you should use `NonNull<T>`, just use `*mut T`!
+///
+/// Unlike `*mut T`, the pointer must always be non-null, even if the pointer
+/// is never dereferenced. This is so that enums may use this forbidden value
+/// as a discriminant -- `Option<NonNull<T>>` has the same size as `*mut T`.
+/// However the pointer may still dangle if it isn't dereferenced.
+///
+/// Unlike `*mut T`, `NonNull<T>` is covariant over `T`. If this is incorrect
+/// for your use case, you should include some [`PhantomData`] in your type to
+/// provide invariance, such as `PhantomData<Cell<T>>` or `PhantomData<&'a mut T>`.
+/// Usually this won't be necessary; covariance is correct for most safe abstractions,
+/// such as `Box`, `Rc`, `Arc`, `Vec`, and `LinkedList`. This is the case because they
+/// provide a public API that follows the normal shared XOR mutable rules of Rust.
+///
+/// Notice that `NonNull<T>` has a `From` implementation for `&T`. However, this does
+/// not change the fact that mutating through a (pointer derived from a) shared
+/// reference is undefined behavior unless the mutation happens inside an
+/// [`UnsafeCell<T>`]. The same goes for creating a mutable reference from a shared
+/// reference. When using this `From` implementation without an `UnsafeCell<T>`,
+/// it is your responsibility to ensure that `as_mut` is never called, and `as_ptr`
+/// is never used for mutation.
+///
+/// [`PhantomData`]: ../marker/struct.PhantomData.html
+/// [`UnsafeCell<T>`]: ../cell/struct.UnsafeCell.html
+#[stable(feature = "nonnull", since = "1.25.0")]
+#[repr(transparent)]
+#[rustc_layout_scalar_valid_range_start(1)]
+#[cfg_attr(not(stage0), rustc_nonnull_optimization_guaranteed)]
+pub struct NonNull<T: ?Sized> {
+ pointer: *const T,
+}
+
+/// `NonNull` pointers are not `Send` because the data they reference may be aliased.
+// N.B., this impl is unnecessary, but should provide better error messages.
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> !Send for NonNull<T> { }
+
+/// `NonNull` pointers are not `Sync` because the data they reference may be aliased.
+// N.B., this impl is unnecessary, but should provide better error messages.
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> !Sync for NonNull<T> { }
+
+impl<T: Sized> NonNull<T> {
+ /// Creates a new `NonNull` that is dangling, but well-aligned.
+ ///
+ /// This is useful for initializing types which lazily allocate, like
+ /// `Vec::new` does.
+ ///
+ /// Note that the pointer value may potentially represent a valid pointer to
+ /// a `T`, which means this must not be used as a "not yet initialized"
+ /// sentinel value. Types that lazily allocate must track initialization by
+ /// some other means.
+ #[stable(feature = "nonnull", since = "1.25.0")]
+ #[inline]
+ pub const fn dangling() -> Self {
+ unsafe {
+ let ptr = mem::align_of::<T>() as *mut T;
+ NonNull::new_unchecked(ptr)
+ }
+ }
+}
+
+impl<T: ?Sized> NonNull<T> {
+ /// Creates a new `NonNull`.
+ ///
+ /// # Safety
+ ///
+ /// `ptr` must be non-null.
+ #[stable(feature = "nonnull", since = "1.25.0")]
+ #[inline]
+ pub const unsafe fn new_unchecked(ptr: *mut T) -> Self {
+ NonNull { pointer: ptr as _ }
+ }
+
+ /// Creates a new `NonNull` if `ptr` is non-null.
+ #[stable(feature = "nonnull", since = "1.25.0")]
+ #[inline]
+ pub fn new(ptr: *mut T) -> Option<Self> {
+ if !ptr.is_null() {
+ Some(unsafe { Self::new_unchecked(ptr) })
+ } else {
+ None
+ }
+ }
+
+ /// Acquires the underlying `*mut` pointer.
+ #[stable(feature = "nonnull", since = "1.25.0")]
+ #[inline]
+ pub const fn as_ptr(self) -> *mut T {
+ self.pointer as *mut T
+ }
+
+ /// Dereferences the content.
+ ///
+ /// The resulting lifetime is bound to self so this behaves "as if"
+ /// it were actually an instance of T that is getting borrowed. If a longer
+ /// (unbound) lifetime is needed, use `&*my_ptr.as_ptr()`.
+ #[stable(feature = "nonnull", since = "1.25.0")]
+ #[inline]
+ pub unsafe fn as_ref(&self) -> &T {
+ &*self.as_ptr()
+ }
+
+ /// Mutably dereferences the content.
+ ///
+ /// The resulting lifetime is bound to self so this behaves "as if"
+ /// it were actually an instance of T that is getting borrowed. If a longer
+ /// (unbound) lifetime is needed, use `&mut *my_ptr.as_ptr()`.
+ #[stable(feature = "nonnull", since = "1.25.0")]
+ #[inline]
+ pub unsafe fn as_mut(&mut self) -> &mut T {
+ &mut *self.as_ptr()
+ }
+
+ /// Casts to a pointer of another type.
+ #[stable(feature = "nonnull_cast", since = "1.27.0")]
+ #[inline]
+ pub const fn cast<U>(self) -> NonNull<U> {
+ unsafe {
+ NonNull::new_unchecked(self.as_ptr() as *mut U)
+ }
+ }
+}
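The niche guarantee described above (`Option<NonNull<T>>` is exactly pointer-sized) and the basic constructors can all be exercised from stable code:

```rust
use std::mem::size_of;
use std::ptr::NonNull;

fn main() {
    // Null is the niche value, so the `Option` costs no extra space.
    assert_eq!(size_of::<Option<NonNull<i32>>>(), size_of::<*mut i32>());

    // `new` checks for null; `new_unchecked` is the unsafe fast path.
    let mut x = 5i32;
    let p = NonNull::new(&mut x as *mut i32).expect("non-null");
    assert_eq!(unsafe { *p.as_ref() }, 5);
    assert!(NonNull::new(std::ptr::null_mut::<i32>()).is_none());

    // `dangling` yields the alignment as an address: well-aligned and
    // non-null, but it must never be dereferenced.
    let d = NonNull::<u64>::dangling();
    assert_eq!(d.as_ptr() as usize, std::mem::align_of::<u64>());
}
```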
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> Clone for NonNull<T> {
+ #[inline]
+ fn clone(&self) -> Self {
+ *self
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> Copy for NonNull<T> { }
+
+#[unstable(feature = "coerce_unsized", issue = "27732")]
+impl<T: ?Sized, U: ?Sized> CoerceUnsized<NonNull<U>> for NonNull<T> where T: Unsize<U> { }
+
+#[unstable(feature = "dispatch_from_dyn", issue = "0")]
+impl<T: ?Sized, U: ?Sized> DispatchFromDyn<NonNull<U>> for NonNull<T> where T: Unsize<U> { }
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> fmt::Debug for NonNull<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Pointer::fmt(&self.as_ptr(), f)
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> fmt::Pointer for NonNull<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Pointer::fmt(&self.as_ptr(), f)
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> Eq for NonNull<T> {}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> PartialEq for NonNull<T> {
+ #[inline]
+ fn eq(&self, other: &Self) -> bool {
+ self.as_ptr() == other.as_ptr()
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> Ord for NonNull<T> {
+ #[inline]
+ fn cmp(&self, other: &Self) -> Ordering {
+ self.as_ptr().cmp(&other.as_ptr())
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> PartialOrd for NonNull<T> {
+ #[inline]
+ fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
+ self.as_ptr().partial_cmp(&other.as_ptr())
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> hash::Hash for NonNull<T> {
+ #[inline]
+ fn hash<H: hash::Hasher>(&self, state: &mut H) {
+ self.as_ptr().hash(state)
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> From<Unique<T>> for NonNull<T> {
+ #[inline]
+ fn from(unique: Unique<T>) -> Self {
+ unsafe { NonNull::new_unchecked(unique.as_ptr()) }
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> From<&mut T> for NonNull<T> {
+ #[inline]
+ fn from(reference: &mut T) -> Self {
+ unsafe { NonNull { pointer: reference as *mut T } }
+ }
+}
+
+#[stable(feature = "nonnull", since = "1.25.0")]
+impl<T: ?Sized> From<&T> for NonNull<T> {
+ #[inline]
+ fn from(reference: &T) -> Self {
+ unsafe { NonNull { pointer: reference as *const T } }
+ }
+}
--- /dev/null
+use crate::convert::From;
+use crate::ops::{CoerceUnsized, DispatchFromDyn};
+use crate::fmt;
+use crate::marker::{PhantomData, Unsize};
+use crate::mem;
+use crate::ptr::NonNull;
+
+/// A wrapper around a raw non-null `*mut T` that indicates that the possessor
+/// of this wrapper owns the referent. Useful for building abstractions like
+/// `Box<T>`, `Vec<T>`, `String`, and `HashMap<K, V>`.
+///
+/// Unlike `*mut T`, `Unique<T>` behaves "as if" it were an instance of `T`.
+/// It implements `Send`/`Sync` if `T` is `Send`/`Sync`. It also implies
+/// the kind of strong aliasing guarantees an instance of `T` can expect:
+/// the referent of the pointer should not be modified without a unique path to
+/// its owning `Unique`.
+///
+/// If you're uncertain of whether it's correct to use `Unique` for your purposes,
+/// consider using `NonNull`, which has weaker semantics.
+///
+/// Unlike `*mut T`, the pointer must always be non-null, even if the pointer
+/// is never dereferenced. This is so that enums may use this forbidden value
+/// as a discriminant -- `Option<Unique<T>>` has the same size as `Unique<T>`.
+/// However the pointer may still dangle if it isn't dereferenced.
+///
+/// Unlike `*mut T`, `Unique<T>` is covariant over `T`. This should always be correct
+/// for any type which upholds Unique's aliasing requirements.
+#[unstable(feature = "ptr_internals", issue = "0",
+ reason = "use NonNull instead and consider PhantomData<T> \
+ (if you also use #[may_dangle]), Send, and/or Sync")]
+#[doc(hidden)]
+#[repr(transparent)]
+#[rustc_layout_scalar_valid_range_start(1)]
+pub struct Unique<T: ?Sized> {
+ pointer: *const T,
+ // NOTE: this marker has no consequences for variance, but is necessary
+ // for dropck to understand that we logically own a `T`.
+ //
+ // For details, see:
+ // https://github.com/rust-lang/rfcs/blob/master/text/0769-sound-generic-drop.md#phantom-data
+ _marker: PhantomData<T>,
+}
+
+/// `Unique` pointers are `Send` if `T` is `Send` because the data they
+/// reference is unaliased. Note that this aliasing invariant is not
+/// enforced by the type system; the abstraction using the
+/// `Unique` must enforce it.
+#[unstable(feature = "ptr_internals", issue = "0")]
+unsafe impl<T: Send + ?Sized> Send for Unique<T> { }
+
+/// `Unique` pointers are `Sync` if `T` is `Sync` because the data they
+/// reference is unaliased. Note that this aliasing invariant is not
+/// enforced by the type system; the abstraction using the
+/// `Unique` must enforce it.
+#[unstable(feature = "ptr_internals", issue = "0")]
+unsafe impl<T: Sync + ?Sized> Sync for Unique<T> { }
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: Sized> Unique<T> {
+ /// Creates a new `Unique` that is dangling, but well-aligned.
+ ///
+ /// This is useful for initializing types which lazily allocate, like
+ /// `Vec::new` does.
+ ///
+ /// Note that the pointer value may potentially represent a valid pointer to
+ /// a `T`, which means this must not be used as a "not yet initialized"
+ /// sentinel value. Types that lazily allocate must track initialization by
+ /// some other means.
+ // FIXME: rename to dangling() to match NonNull?
+ #[inline]
+ pub const fn empty() -> Self {
+ unsafe {
+ Unique::new_unchecked(mem::align_of::<T>() as *mut T)
+ }
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> Unique<T> {
+ /// Creates a new `Unique`.
+ ///
+ /// # Safety
+ ///
+ /// `ptr` must be non-null.
+ #[inline]
+ pub const unsafe fn new_unchecked(ptr: *mut T) -> Self {
+ Unique { pointer: ptr as _, _marker: PhantomData }
+ }
+
+ /// Creates a new `Unique` if `ptr` is non-null.
+ #[inline]
+ pub fn new(ptr: *mut T) -> Option<Self> {
+ if !ptr.is_null() {
+ Some(unsafe { Unique { pointer: ptr as _, _marker: PhantomData } })
+ } else {
+ None
+ }
+ }
+
+ /// Acquires the underlying `*mut` pointer.
+ #[inline]
+ pub const fn as_ptr(self) -> *mut T {
+ self.pointer as *mut T
+ }
+
+ /// Dereferences the content.
+ ///
+ /// The resulting lifetime is bound to self so this behaves "as if"
+ /// it were actually an instance of T that is getting borrowed. If a longer
+ /// (unbound) lifetime is needed, use `&*my_ptr.as_ptr()`.
+ #[inline]
+ pub unsafe fn as_ref(&self) -> &T {
+ &*self.as_ptr()
+ }
+
+ /// Mutably dereferences the content.
+ ///
+ /// The resulting lifetime is bound to self so this behaves "as if"
+ /// it were actually an instance of T that is getting borrowed. If a longer
+ /// (unbound) lifetime is needed, use `&mut *my_ptr.as_ptr()`.
+ #[inline]
+ pub unsafe fn as_mut(&mut self) -> &mut T {
+ &mut *self.as_ptr()
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> Clone for Unique<T> {
+ #[inline]
+ fn clone(&self) -> Self {
+ *self
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> Copy for Unique<T> { }
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized, U: ?Sized> CoerceUnsized<Unique<U>> for Unique<T> where T: Unsize<U> { }
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized, U: ?Sized> DispatchFromDyn<Unique<U>> for Unique<T> where T: Unsize<U> { }
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> fmt::Debug for Unique<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Pointer::fmt(&self.as_ptr(), f)
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> fmt::Pointer for Unique<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Pointer::fmt(&self.as_ptr(), f)
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> From<&mut T> for Unique<T> {
+ #[inline]
+ fn from(reference: &mut T) -> Self {
+ unsafe { Unique { pointer: reference as *mut T, _marker: PhantomData } }
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<T: ?Sized> From<&T> for Unique<T> {
+ #[inline]
+ fn from(reference: &T) -> Self {
+ unsafe { Unique { pointer: reference as *const T, _marker: PhantomData } }
+ }
+}
+
+#[unstable(feature = "ptr_internals", issue = "0")]
+impl<'a, T: ?Sized> From<NonNull<T>> for Unique<T> {
+ #[inline]
+ fn from(p: NonNull<T>) -> Self {
+ unsafe { Unique::new_unchecked(p.as_ptr()) }
+ }
+}
for sp in prior_arms {
err.span_label(*sp, format!(
"this is found to be of type `{}`",
- self.resolve_type_vars_if_possible(&last_ty),
+ self.resolve_vars_if_possible(&last_ty),
));
}
} else if let Some(sp) = prior_arms.last() {
&self,
exp_found: &ty::error::ExpectedFound<Ty<'tcx>>,
) -> Option<(DiagnosticStyledString, DiagnosticStyledString)> {
- let exp_found = self.resolve_type_vars_if_possible(exp_found);
+ let exp_found = self.resolve_vars_if_possible(exp_found);
if exp_found.references_error() {
return None;
}
&self,
exp_found: &ty::error::ExpectedFound<T>,
) -> Option<(DiagnosticStyledString, DiagnosticStyledString)> {
- let exp_found = self.resolve_type_vars_if_possible(exp_found);
+ let exp_found = self.resolve_vars_if_possible(exp_found);
if exp_found.references_error() {
return None;
}
});
match ty_opt {
Some(ty) => {
- let ty = self.infcx.resolve_type_vars_if_possible(&ty);
+ let ty = self.infcx.resolve_vars_if_possible(&ty);
ty.walk().any(|inner_ty| {
inner_ty == self.target_ty || match (&inner_ty.sty, &self.target_ty.sty) {
(&Infer(TyVar(a_vid)), &Infer(TyVar(b_vid))) => {
span: Span,
ty: Ty<'tcx>
) -> DiagnosticBuilder<'gcx> {
- let ty = self.resolve_type_vars_if_possible(&ty);
+ let ty = self.resolve_vars_if_possible(&ty);
let name = self.extract_type_name(&ty, None);
let mut err_span = span;
span: Span,
ty: Ty<'tcx>
) -> DiagnosticBuilder<'gcx> {
- let ty = self.resolve_type_vars_if_possible(&ty);
+ let ty = self.resolve_vars_if_possible(&ty);
let name = self.extract_type_name(&ty, None);
let mut err = struct_span_err!(self.tcx.sess,
_ => (),
}
- let expected_trait_ref = self.infcx.resolve_type_vars_if_possible(&ty::TraitRef {
+ let expected_trait_ref = self.infcx.resolve_vars_if_possible(&ty::TraitRef {
def_id: trait_def_id,
substs: expected_substs,
});
- let actual_trait_ref = self.infcx.resolve_type_vars_if_possible(&ty::TraitRef {
+ let actual_trait_ref = self.infcx.resolve_vars_if_possible(&ty::TraitRef {
def_id: trait_def_id,
substs: actual_substs,
});
let (mut fudger, value) = self.probe(|snapshot| {
match f() {
Ok(value) => {
- let value = self.resolve_type_vars_if_possible(&value);
+ let value = self.resolve_vars_if_possible(&value);
// At this point, `value` could in principle refer
// to inference variables that have been created during
/// Process the region constraints and report any errors that
/// result. After this, no more unification operations should be
/// done -- or the compiler will panic -- but it is legal to use
- /// `resolve_type_vars_if_possible` as well as `fully_resolve`.
+ /// `resolve_vars_if_possible` as well as `fully_resolve`.
pub fn resolve_regions_and_report_errors(
&self,
region_context: DefId,
}
pub fn ty_to_string(&self, t: Ty<'tcx>) -> String {
- self.resolve_type_vars_if_possible(&t).to_string()
+ self.resolve_vars_if_possible(&t).to_string()
}
pub fn tys_to_string(&self, ts: &[Ty<'tcx>]) -> String {
}
pub fn trait_ref_to_string(&self, t: &ty::TraitRef<'tcx>) -> String {
- self.resolve_type_vars_if_possible(t).to_string()
+ self.resolve_vars_if_possible(t).to_string()
}
/// If `TyVar(vid)` resolves to a type, return that type. Else, return the
self.type_variables.borrow_mut().root_var(var)
}
- /// Where possible, replaces type/int/float variables in
+ /// Where possible, replaces type/const variables in
/// `value` with their final value. Note that region variables
- /// are unaffected. If a type variable has not been unified, it
+ /// are unaffected. If a type/const variable has not been unified, it
/// is left as is. This is an idempotent operation that does
/// not affect inference state in any way and so you can do it
/// at will.
- pub fn resolve_type_vars_if_possible<T>(&self, value: &T) -> T
+ pub fn resolve_vars_if_possible<T>(&self, value: &T) -> T
where
T: TypeFoldable<'tcx>,
{
if !value.needs_infer() {
return value.clone(); // avoid duplicated subst-folding
}
- let mut r = resolve::OpportunisticTypeResolver::new(self);
+ let mut r = resolve::OpportunisticVarResolver::new(self);
value.fold_with(&mut r)
}
/// process of visiting `T`, this will resolve (where possible)
/// type variables in `T`, but it never constructs the final,
/// resolved type, so it's more efficient than
- /// `resolve_type_vars_if_possible()`.
+ /// `resolve_vars_if_possible()`.
pub fn unresolved_type_vars<T>(&self, value: &T) -> Option<(Ty<'tcx>, Option<Span>)>
where
T: TypeFoldable<'tcx>,
where
M: FnOnce(String) -> DiagnosticBuilder<'tcx>,
{
- let actual_ty = self.resolve_type_vars_if_possible(&actual_ty);
+ let actual_ty = self.resolve_vars_if_possible(&actual_ty);
debug!("type_error_struct_with_diag({:?}, {:?})", sp, actual_ty);
// Don't report an error if actual type is `Error`.
ty: Ty<'tcx>,
span: Span,
) -> bool {
- let ty = self.resolve_type_vars_if_possible(&ty);
+ let ty = self.resolve_vars_if_possible(&ty);
// Even if the type may have no inference variables, during
// type-checking closure types are in local tables only.
let tcx = self.tcx;
- let concrete_ty = self.resolve_type_vars_if_possible(&opaque_defn.concrete_ty);
+ let concrete_ty = self.resolve_vars_if_possible(&opaque_defn.concrete_ty);
debug!("constrain_opaque_type: concrete_ty={:?}", concrete_ty);
debug!("add_implied_bounds()");
for &ty in fn_sig_tys {
- let ty = infcx.resolve_type_vars_if_possible(&ty);
+ let ty = infcx.resolve_vars_if_possible(&ty);
debug!("add_implied_bounds: ty = {}", ty);
let implied_bounds = infcx.implied_outlives_bounds(self.param_env, body_id, ty, span);
self.add_outlives_bounds(Some(infcx), implied_bounds)
sup_type, sub_region, origin
);
- let sup_type = self.resolve_type_vars_if_possible(&sup_type);
+ let sup_type = self.resolve_vars_if_possible(&sup_type);
if let Some(region_bound_pairs) = region_bound_pairs_map.get(&body_id) {
let outlives = &mut TypeOutlives::new(
implicit_region_bound,
param_env,
);
- let ty = self.resolve_type_vars_if_possible(&ty);
+ let ty = self.resolve_vars_if_possible(&ty);
outlives.type_must_outlive(origin, ty, region);
}
}
use super::{InferCtxt, FixupError, FixupResult, Span, type_variable::TypeVariableOrigin};
use crate::mir::interpret::ConstValue;
-use crate::ty::{self, Ty, TyCtxt, TypeFoldable, InferConst};
+use crate::ty::{self, Ty, Const, TyCtxt, TypeFoldable, InferConst, TypeFlags};
use crate::ty::fold::{TypeFolder, TypeVisitor};
///////////////////////////////////////////////////////////////////////////
-// OPPORTUNISTIC TYPE RESOLVER
+// OPPORTUNISTIC VAR RESOLVER
-/// The opportunistic type resolver can be used at any time. It simply replaces
-/// type variables that have been unified with the things they have
+/// The opportunistic resolver can be used at any time. It simply replaces
+/// type/const variables that have been unified with the things they have
/// been unified with (similar to `shallow_resolve`, but deep). This is
/// useful for printing messages etc but also required at various
/// points for correctness.
-pub struct OpportunisticTypeResolver<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
+pub struct OpportunisticVarResolver<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
}
-impl<'a, 'gcx, 'tcx> OpportunisticTypeResolver<'a, 'gcx, 'tcx> {
+impl<'a, 'gcx, 'tcx> OpportunisticVarResolver<'a, 'gcx, 'tcx> {
#[inline]
pub fn new(infcx: &'a InferCtxt<'a, 'gcx, 'tcx>) -> Self {
- OpportunisticTypeResolver { infcx }
+ OpportunisticVarResolver { infcx }
}
}
-impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for OpportunisticTypeResolver<'a, 'gcx, 'tcx> {
+impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for OpportunisticVarResolver<'a, 'gcx, 'tcx> {
fn tcx<'b>(&'b self) -> TyCtxt<'b, 'gcx, 'tcx> {
self.infcx.tcx
}
if !t.has_infer_types() {
t // micro-optimize -- if there is nothing in this type that this fold affects...
} else {
- let t0 = self.infcx.shallow_resolve(t);
- t0.super_fold_with(self)
+ let t = self.infcx.shallow_resolve(t);
+ t.super_fold_with(self)
+ }
+ }
+
+ fn fold_const(&mut self, ct: &'tcx Const<'tcx>) -> &'tcx Const<'tcx> {
+ if !ct.has_type_flags(TypeFlags::HAS_CT_INFER) {
+ ct // micro-optimize -- if there is nothing in this const that this fold affects...
+ } else {
+ let ct = self.infcx.shallow_resolve(ct);
+ ct.super_fold_with(self)
}
}
}
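The folder above resolves variables opportunistically: it shallow-resolves the head of the type/const, then recurses via `super_fold_with` so inner variables get the same treatment. As a standalone illustration (a minimal sketch with hypothetical toy types, not rustc's real `Ty`/`InferCtxt` API), the difference between shallow and deep resolution looks like this:

```rust
// Hypothetical toy type language: `Var` is an inference variable,
// `Ref` is a compound type wrapping another type.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Var(usize),
    Ref(Box<Ty>),
}

// Unification table: entry i holds what variable i was unified with, if known.
// "Shallow" resolution replaces only a top-level variable.
fn shallow_resolve(t: &Ty, table: &[Option<Ty>]) -> Ty {
    match t {
        Ty::Var(i) => match &table[*i] {
            Some(known) => known.clone(),
            None => t.clone(),
        },
        other => other.clone(),
    }
}

// "Deep" resolution: shallow-resolve, then recurse into the result --
// mirroring the `shallow_resolve` + `super_fold_with` pattern above.
fn deep_resolve(t: &Ty, table: &[Option<Ty>]) -> Ty {
    match shallow_resolve(t, table) {
        Ty::Ref(inner) => Ty::Ref(Box::new(deep_resolve(&inner, table))),
        leaf => leaf,
    }
}

fn main() {
    // var 0 was unified with Ref(var 1), and var 1 with Int.
    let table = vec![Some(Ty::Ref(Box::new(Ty::Var(1)))), Some(Ty::Int)];
    let t = Ty::Var(0);
    // Shallow stops after one step; deep reaches the fully-known type.
    assert_eq!(shallow_resolve(&t, &table), Ty::Ref(Box::new(Ty::Var(1))));
    assert_eq!(deep_resolve(&t, &table), Ty::Ref(Box::new(Ty::Int)));
    println!("ok");
}
```

The micro-optimization in the real folder (skipping values with no inference flags) avoids this recursion entirely when nothing would change.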
.unwrap_or(true)
}
- fn resolve_type_vars_if_possible<T>(&self, value: &T) -> T
+ fn resolve_vars_if_possible<T>(&self, value: &T) -> T
where T: TypeFoldable<'tcx>
{
- self.infcx.map(|infcx| infcx.resolve_type_vars_if_possible(value))
+ self.infcx.map(|infcx| infcx.resolve_vars_if_possible(value))
.unwrap_or_else(|| value.clone())
}
-> McResult<Ty<'tcx>> {
match ty {
Some(ty) => {
- let ty = self.resolve_type_vars_if_possible(&ty);
+ let ty = self.resolve_vars_if_possible(&ty);
if ty.references_error() || ty.is_ty_var() {
debug!("resolve_type_vars_or_error: error from {:?}", ty);
Err(())
where F: FnOnce() -> McResult<cmt_<'tcx>>
{
debug!("cat_expr_adjusted_with({:?}): {:?}", adjustment, expr);
- let target = self.resolve_type_vars_if_possible(&adjustment.target);
+ let target = self.resolve_vars_if_possible(&adjustment.target);
match adjustment.kind {
adjustment::Adjust::Deref(overloaded) => {
// Equivalent to *expr or something similar.
use rustc_data_structures::stable_hasher::{HashStable, StableHasher,
StableHasherResult};
use crate::ich::StableHashingContext;
-use crate::mir::{Mir, BasicBlock};
+use crate::mir::{Body, BasicBlock};
use crate::rustc_serialize as serialize;
pub fn predecessors(
&self,
- mir: &Mir<'_>
+ mir: &Body<'_>
) -> MappedReadGuard<'_, IndexVec<BasicBlock, Vec<BasicBlock>>> {
if self.predecessors.borrow().is_none() {
*self.predecessors.borrow_mut() = Some(calculate_predecessors(mir));
}
}
-fn calculate_predecessors(mir: &Mir<'_>) -> IndexVec<BasicBlock, Vec<BasicBlock>> {
+fn calculate_predecessors(mir: &Body<'_>) -> IndexVec<BasicBlock, Vec<BasicBlock>> {
let mut result = IndexVec::from_elem(vec![], mir.basic_blocks());
for (bb, data) in mir.basic_blocks().iter_enumerated() {
if let Some(ref term) = data.terminator {
use super::{
Pointer, EvalResult, AllocId, ScalarMaybeUndef, write_target_uint, read_target_uint, Scalar,
- truncate,
};
use crate::ty::layout::{Size, Align};
ScalarMaybeUndef::Undef => return self.mark_definedness(ptr, type_size, false),
};
- let bytes = match val {
- Scalar::Ptr(val) => {
- assert_eq!(type_size, cx.data_layout().pointer_size);
- val.offset.bytes() as u128
- }
-
- Scalar::Bits { bits, size } => {
- assert_eq!(size as u64, type_size.bytes());
- debug_assert_eq!(truncate(bits, Size::from_bytes(size.into())), bits,
- "Unexpected value of size {} when writing to memory", size);
- bits
- },
+ let bytes = match val.to_bits_or_ptr(type_size, cx) {
+ Err(val) => val.offset.bytes() as u128,
+ Ok(data) => data,
};
let endian = cx.data_layout().endian;
/// For a promoted global, the `Instance` of the function they belong to.
pub instance: ty::Instance<'tcx>,
- /// The index for promoted globals within their function's `Mir`.
+ /// The index for promoted globals within their function's `mir::Body`.
pub promoted: Option<mir::Promoted>,
}
/// illegal and will likely ICE.
/// This function exists to allow const eval to detect the difference between evaluation-
/// local dangling pointers and allocations in constants/statics.
+ #[inline]
pub fn get(&self, id: AllocId) -> Option<AllocKind<'tcx>> {
self.id_to_kind.get(&id).cloned()
}
// Methods to access integers in the target endianness
////////////////////////////////////////////////////////////////////////////////
+#[inline]
pub fn write_target_uint(
endianness: layout::Endian,
mut target: &mut [u8],
}
}
+#[inline]
pub fn read_target_uint(endianness: layout::Endian, mut source: &[u8]) -> Result<u128, io::Error> {
match endianness {
layout::Endian::Little => source.read_uint128::<LittleEndian>(source.len()),
// Methods to facilitate working with signed integers stored in a u128
////////////////////////////////////////////////////////////////////////////////
+/// Truncate `value` to `size` bits and then sign-extend it to 128 bits
+/// (i.e., if it is negative, fill with 1's on the left).
+#[inline]
pub fn sign_extend(value: u128, size: Size) -> u128 {
let size = size.bits();
+ if size == 0 {
+ // Truncated until nothing is left.
+ return 0;
+ }
// sign extend
let shift = 128 - size;
// shift the unsigned value to the left
(((value << shift) as i128) >> shift) as u128
}
+/// Truncate `value` to `size` bits.
+#[inline]
pub fn truncate(value: u128, size: Size) -> u128 {
let size = size.bits();
+ if size == 0 {
+ // Truncated until nothing is left.
+ return 0;
+ }
let shift = 128 - size;
// truncate (shift left to drop out leftover values, shift right to fill with zeroes)
(value << shift) >> shift
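The two helpers above can be exercised in isolation. A self-contained sketch (taking the size in bits as a plain `u64` rather than the `Size` wrapper, and including the new `size == 0` guard) behaves like this:

```rust
// Truncate `value` to its low `size_bits` bits, zero-filling above.
fn truncate(value: u128, size_bits: u64) -> u128 {
    if size_bits == 0 {
        // Truncated until nothing is left.
        return 0;
    }
    let shift = 128 - size_bits;
    // Shift left to drop the high bits, shift right to refill with zeroes.
    (value << shift) >> shift
}

// Truncate `value` to `size_bits` bits, then sign-extend to 128 bits
// (i.e., if the value is negative at that width, fill with 1's on the left).
fn sign_extend(value: u128, size_bits: u64) -> u128 {
    if size_bits == 0 {
        return 0;
    }
    let shift = 128 - size_bits;
    // The cast to i128 makes the right shift arithmetic, copying the sign bit.
    (((value << shift) as i128) >> shift) as u128
}

fn main() {
    // 0xFF at 8 bits is -1; sign extension fills the upper 120 bits with 1s.
    assert_eq!(sign_extend(0xFF, 8), u128::max_value());
    // A value whose sign bit is clear is unchanged by sign extension.
    assert_eq!(sign_extend(0x7F, 8), 0x7F);
    // Truncating 0x1FF to 8 bits drops the ninth bit.
    assert_eq!(truncate(0x1FF, 8), 0xFF);
    // The zero-size guard: nothing is left.
    assert_eq!(truncate(0xFF, 0), 0);
    println!("ok");
}
```

Without the `size == 0` guard, `128 - size` would be 128 and the shift would overflow, which is exactly why the patch adds the early return.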
self.data_layout().pointer_size
}
- //// Trunace the given value to the pointer size; also return whether there was an overflow
+ /// Helper function: truncate given value-"overflowed flag" pair to pointer size and
+ /// update "overflowed flag" if there was an overflow.
+ /// This should be called by all the other methods before returning!
#[inline]
- fn truncate_to_ptr(&self, val: u128) -> (u64, bool) {
+ fn truncate_to_ptr(&self, (val, over): (u64, bool)) -> (u64, bool) {
+ let val = val as u128;
let max_ptr_plus_1 = 1u128 << self.pointer_size().bits();
- ((val % max_ptr_plus_1) as u64, val >= max_ptr_plus_1)
- }
-
- #[inline]
- fn offset<'tcx>(&self, val: u64, i: u64) -> EvalResult<'tcx, u64> {
- let (res, over) = self.overflowing_offset(val, i);
- if over { err!(Overflow(mir::BinOp::Add)) } else { Ok(res) }
+ ((val % max_ptr_plus_1) as u64, over || val >= max_ptr_plus_1)
}
#[inline]
fn overflowing_offset(&self, val: u64, i: u64) -> (u64, bool) {
- let (res, over1) = val.overflowing_add(i);
- let (res, over2) = self.truncate_to_ptr(u128::from(res));
- (res, over1 || over2)
- }
-
- #[inline]
- fn signed_offset<'tcx>(&self, val: u64, i: i64) -> EvalResult<'tcx, u64> {
- let (res, over) = self.overflowing_signed_offset(val, i128::from(i));
- if over { err!(Overflow(mir::BinOp::Add)) } else { Ok(res) }
+ let res = val.overflowing_add(i);
+ self.truncate_to_ptr(res)
}
// Overflow checking only works properly on the range from -u64 to +u64.
fn overflowing_signed_offset(&self, val: u64, i: i128) -> (u64, bool) {
// FIXME: is it possible to over/underflow here?
if i < 0 {
- // trickery to ensure that i64::min_value() works fine
- // this formula only works for true negative values, it panics for zero!
+ // Trickery to ensure that i64::min_value() works fine: compute n = -i.
+ // This formula only works for true negative values, it overflows for zero!
let n = u64::max_value() - (i as u64) + 1;
- val.overflowing_sub(n)
+ let res = val.overflowing_sub(n);
+ self.truncate_to_ptr(res)
} else {
self.overflowing_offset(val, i as u64)
}
}
+
+ #[inline]
+ fn offset<'tcx>(&self, val: u64, i: u64) -> EvalResult<'tcx, u64> {
+ let (res, over) = self.overflowing_offset(val, i);
+ if over { err!(Overflow(mir::BinOp::Add)) } else { Ok(res) }
+ }
+
+ #[inline]
+ fn signed_offset<'tcx>(&self, val: u64, i: i64) -> EvalResult<'tcx, u64> {
+ let (res, over) = self.overflowing_signed_offset(val, i128::from(i));
+ if over { err!(Overflow(mir::BinOp::Add)) } else { Ok(res) }
+ }
}
impl<T: layout::HasDataLayout> PointerArithmetic for T {}
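The refactoring above funnels every arithmetic result through `truncate_to_ptr`, which both wraps the value to the pointer width and ORs the overflow flag in, so no caller can forget the truncation step. A minimal sketch of that flow (assuming a fixed 32-bit pointer size and free functions instead of the `PointerArithmetic` trait):

```rust
// Hypothetical fixed pointer width standing in for `self.pointer_size()`.
const PTR_BITS: u32 = 32;

// Truncate a (value, overflowed) pair to pointer size, updating the flag
// if the truncation itself wrapped -- the "call this before returning" helper.
fn truncate_to_ptr((val, over): (u64, bool)) -> (u64, bool) {
    let val = val as u128;
    let max_ptr_plus_1 = 1u128 << PTR_BITS;
    ((val % max_ptr_plus_1) as u64, over || val >= max_ptr_plus_1)
}

// Wrapping pointer offset: u64 addition, then pointer-size truncation.
fn overflowing_offset(val: u64, i: u64) -> (u64, bool) {
    truncate_to_ptr(val.overflowing_add(i))
}

fn main() {
    // In range: the sum fits in 32 bits, no overflow reported.
    assert_eq!(overflowing_offset(100, 28), (128, false));
    // One past the 32-bit pointer range: wraps to 0 and reports overflow,
    // even though the u64 addition itself did not overflow.
    assert_eq!(overflowing_offset(u32::max_value() as u64, 1), (0, true));
    println!("ok");
}
```

The second assertion shows the case the refactoring targets: the overflow happens at pointer width, not at `u64` width, and only the mandatory truncation step detects it.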
RustcEncodable, RustcDecodable, Hash, HashStable)]
pub enum Scalar<Tag=(), Id=AllocId> {
/// The raw bytes of a simple value.
- Bits {
- /// The first `size` bytes are the value.
+ Raw {
+ /// The first `size` bytes of `data` are the value.
/// Do not try to read less or more bytes than that. The remaining bytes must be 0.
+ data: u128,
size: u8,
- bits: u128,
},
/// A pointer into an `Allocation`. An `Allocation` in the `memory` module has a list of
match self {
Scalar::Ptr(ptr) =>
write!(f, "{:?}", ptr),
- &Scalar::Bits { bits, size } => {
+ &Scalar::Raw { data, size } => {
+ Scalar::check_data(data, size);
if size == 0 {
- assert_eq!(bits, 0, "ZST value must be 0");
write!(f, "<ZST>")
} else {
- assert_eq!(truncate(bits, Size::from_bytes(size as u64)), bits,
- "Scalar value {:#x} exceeds size of {} bytes", bits, size);
// Format as hex number wide enough to fit any value of the given `size`.
- // So bits=20, size=1 will be "0x14", but with size=4 it'll be "0x00000014".
- write!(f, "0x{:>0width$x}", bits, width=(size*2) as usize)
+ // So data=20, size=1 will be "0x14", but with size=4 it'll be "0x00000014".
+ write!(f, "0x{:>0width$x}", data, width=(size*2) as usize)
}
}
}
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Scalar::Ptr(_) => write!(f, "a pointer"),
- Scalar::Bits { bits, .. } => write!(f, "{}", bits),
+ Scalar::Raw { data, .. } => write!(f, "{}", data),
}
}
}
impl<'tcx> Scalar<()> {
+ #[inline(always)]
+ fn check_data(data: u128, size: u8) {
+ debug_assert_eq!(truncate(data, Size::from_bytes(size as u64)), data,
+ "Scalar value {:#x} exceeds size of {} bytes", data, size);
+ }
+
#[inline]
pub fn with_tag<Tag>(self, new_tag: Tag) -> Scalar<Tag> {
match self {
Scalar::Ptr(ptr) => Scalar::Ptr(ptr.with_tag(new_tag)),
- Scalar::Bits { bits, size } => Scalar::Bits { bits, size },
+ Scalar::Raw { data, size } => Scalar::Raw { data, size },
}
}
pub fn erase_tag(self) -> Scalar {
match self {
Scalar::Ptr(ptr) => Scalar::Ptr(ptr.erase_tag()),
- Scalar::Bits { bits, size } => Scalar::Bits { bits, size },
+ Scalar::Raw { data, size } => Scalar::Raw { data, size },
}
}
#[inline]
pub fn ptr_null(cx: &impl HasDataLayout) -> Self {
- Scalar::Bits {
- bits: 0,
+ Scalar::Raw {
+ data: 0,
size: cx.data_layout().pointer_size.bytes() as u8,
}
}
#[inline]
pub fn zst() -> Self {
- Scalar::Bits { bits: 0, size: 0 }
+ Scalar::Raw { data: 0, size: 0 }
}
#[inline]
pub fn ptr_offset(self, i: Size, cx: &impl HasDataLayout) -> EvalResult<'tcx, Self> {
let dl = cx.data_layout();
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, dl.pointer_size.bytes());
- Ok(Scalar::Bits {
- bits: dl.offset(bits as u64, i.bytes())? as u128,
+ Ok(Scalar::Raw {
+ data: dl.offset(data as u64, i.bytes())? as u128,
size,
})
}
pub fn ptr_wrapping_offset(self, i: Size, cx: &impl HasDataLayout) -> Self {
let dl = cx.data_layout();
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, dl.pointer_size.bytes());
- Scalar::Bits {
- bits: dl.overflowing_offset(bits as u64, i.bytes()).0 as u128,
+ Scalar::Raw {
+ data: dl.overflowing_offset(data as u64, i.bytes()).0 as u128,
size,
}
}
pub fn ptr_signed_offset(self, i: i64, cx: &impl HasDataLayout) -> EvalResult<'tcx, Self> {
let dl = cx.data_layout();
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, dl.pointer_size().bytes());
- Ok(Scalar::Bits {
- bits: dl.signed_offset(bits as u64, i)? as u128,
+ Ok(Scalar::Raw {
+ data: dl.signed_offset(data as u64, i)? as u128,
size,
})
}
pub fn ptr_wrapping_signed_offset(self, i: i64, cx: &impl HasDataLayout) -> Self {
let dl = cx.data_layout();
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, dl.pointer_size.bytes());
- Scalar::Bits {
- bits: dl.overflowing_signed_offset(bits as u64, i128::from(i)).0 as u128,
+ Scalar::Raw {
+ data: dl.overflowing_signed_offset(data as u64, i128::from(i)).0 as u128,
size,
}
}
}
}
- /// Returns this pointers offset from the allocation base, or from NULL (for
+ /// Returns this pointer's offset from the allocation base, or from NULL (for
/// integer pointers).
#[inline]
pub fn get_ptr_offset(self, cx: &impl HasDataLayout) -> Size {
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, cx.pointer_size().bytes());
- Size::from_bytes(bits as u64)
+ Size::from_bytes(data as u64)
}
Scalar::Ptr(ptr) => ptr.offset,
}
#[inline]
pub fn is_null_ptr(self, cx: &impl HasDataLayout) -> bool {
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, cx.data_layout().pointer_size.bytes());
- bits == 0
+ data == 0
},
Scalar::Ptr(_) => false,
}
#[inline]
pub fn from_bool(b: bool) -> Self {
- Scalar::Bits { bits: b as u128, size: 1 }
+ Scalar::Raw { data: b as u128, size: 1 }
}
#[inline]
pub fn from_char(c: char) -> Self {
- Scalar::Bits { bits: c as u128, size: 4 }
+ Scalar::Raw { data: c as u128, size: 4 }
}
#[inline]
pub fn from_uint(i: impl Into<u128>, size: Size) -> Self {
let i = i.into();
- debug_assert_eq!(truncate(i, size), i,
- "Unsigned value {} does not fit in {} bits", i, size.bits());
- Scalar::Bits { bits: i, size: size.bytes() as u8 }
+ assert_eq!(
+ truncate(i, size), i,
+ "Unsigned value {:#x} does not fit in {} bits", i, size.bits()
+ );
+ Scalar::Raw { data: i, size: size.bytes() as u8 }
}
#[inline]
let i = i.into();
// `into` performed sign extension, we have to truncate
let truncated = truncate(i as u128, size);
- debug_assert_eq!(sign_extend(truncated, size) as i128, i,
- "Signed value {} does not fit in {} bits", i, size.bits());
- Scalar::Bits { bits: truncated, size: size.bytes() as u8 }
+ assert_eq!(
+ sign_extend(truncated, size) as i128, i,
+ "Signed value {:#x} does not fit in {} bits", i, size.bits()
+ );
+ Scalar::Raw { data: truncated, size: size.bytes() as u8 }
}
#[inline]
pub fn from_f32(f: f32) -> Self {
- Scalar::Bits { bits: f.to_bits() as u128, size: 4 }
+ Scalar::Raw { data: f.to_bits() as u128, size: 4 }
}
#[inline]
pub fn from_f64(f: f64) -> Self {
- Scalar::Bits { bits: f.to_bits() as u128, size: 8 }
+ Scalar::Raw { data: f.to_bits() as u128, size: 8 }
+ }
+
+ #[inline]
+ pub fn to_bits_or_ptr(
+ self,
+ target_size: Size,
+ cx: &impl HasDataLayout,
+ ) -> Result<u128, Pointer<Tag>> {
+ match self {
+ Scalar::Raw { data, size } => {
+ assert_eq!(target_size.bytes(), size as u64);
+ assert_ne!(size, 0, "you should never look at the bits of a ZST");
+ Scalar::check_data(data, size);
+ Ok(data)
+ }
+ Scalar::Ptr(ptr) => {
+ assert_eq!(target_size, cx.data_layout().pointer_size);
+ Err(ptr)
+ }
+ }
}
#[inline]
pub fn to_bits(self, target_size: Size) -> EvalResult<'tcx, u128> {
match self {
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(target_size.bytes(), size as u64);
- assert_ne!(size, 0, "to_bits cannot be used with zsts");
- Ok(bits)
+ assert_ne!(size, 0, "you should never look at the bits of a ZST");
+ Scalar::check_data(data, size);
+ Ok(data)
}
Scalar::Ptr(_) => err!(ReadPointerAsBytes),
}
#[inline]
pub fn to_ptr(self) -> EvalResult<'tcx, Pointer<Tag>> {
match self {
- Scalar::Bits { bits: 0, .. } => err!(InvalidNullPointerUsage),
- Scalar::Bits { .. } => err!(ReadBytesAsPointer),
+ Scalar::Raw { data: 0, .. } => err!(InvalidNullPointerUsage),
+ Scalar::Raw { .. } => err!(ReadBytesAsPointer),
Scalar::Ptr(p) => Ok(p),
}
}
#[inline]
pub fn is_bits(self) -> bool {
match self {
- Scalar::Bits { .. } => true,
+ Scalar::Raw { .. } => true,
_ => false,
}
}
pub fn to_bool(self) -> EvalResult<'tcx, bool> {
match self {
- Scalar::Bits { bits: 0, size: 1 } => Ok(false),
- Scalar::Bits { bits: 1, size: 1 } => Ok(true),
+ Scalar::Raw { data: 0, size: 1 } => Ok(false),
+ Scalar::Raw { data: 1, size: 1 } => Ok(true),
_ => err!(InvalidBool),
}
}
pub fn to_u8(self) -> EvalResult<'static, u8> {
let sz = Size::from_bits(8);
let b = self.to_bits(sz)?;
- assert_eq!(b as u8 as u128, b);
Ok(b as u8)
}
pub fn to_u32(self) -> EvalResult<'static, u32> {
let sz = Size::from_bits(32);
let b = self.to_bits(sz)?;
- assert_eq!(b as u32 as u128, b);
Ok(b as u32)
}
pub fn to_u64(self) -> EvalResult<'static, u64> {
let sz = Size::from_bits(64);
let b = self.to_bits(sz)?;
- assert_eq!(b as u64 as u128, b);
Ok(b as u64)
}
pub fn to_usize(self, cx: &impl HasDataLayout) -> EvalResult<'static, u64> {
let b = self.to_bits(cx.data_layout().pointer_size)?;
- assert_eq!(b as u64 as u128, b);
Ok(b as u64)
}
let sz = Size::from_bits(8);
let b = self.to_bits(sz)?;
let b = sign_extend(b, sz) as i128;
- assert_eq!(b as i8 as i128, b);
Ok(b as i8)
}
let sz = Size::from_bits(32);
let b = self.to_bits(sz)?;
let b = sign_extend(b, sz) as i128;
- assert_eq!(b as i32 as i128, b);
Ok(b as i32)
}
let sz = Size::from_bits(64);
let b = self.to_bits(sz)?;
let b = sign_extend(b, sz) as i128;
- assert_eq!(b as i64 as i128, b);
Ok(b as i64)
}
pub fn to_isize(self, cx: &impl HasDataLayout) -> EvalResult<'static, i64> {
- let b = self.to_bits(cx.data_layout().pointer_size)?;
- let b = sign_extend(b, cx.data_layout().pointer_size) as i128;
- assert_eq!(b as i64 as i128, b);
+ let sz = cx.data_layout().pointer_size;
+ let b = self.to_bits(sz)?;
+ let b = sign_extend(b, sz) as i128;
Ok(b as i64)
}
}
}
-impl<'tcx> HasLocalDecls<'tcx> for Mir<'tcx> {
+impl<'tcx> HasLocalDecls<'tcx> for Body<'tcx> {
fn local_decls(&self) -> &LocalDecls<'tcx> {
&self.local_decls
}
/// Lowered representation of a single function.
#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
-pub struct Mir<'tcx> {
+pub struct Body<'tcx> {
    /// List of basic blocks. References to basic blocks use a newtyped index type `BasicBlock`
    /// that indexes into this vector.
basic_blocks: IndexVec<BasicBlock, BasicBlockData<'tcx>>,
pub source_scope_local_data: ClearCrossCrate<IndexVec<SourceScope, SourceScopeLocalData>>,
/// Rvalues promoted from this function, such as borrows of constants.
- /// Each of them is the Mir of a constant with the fn's type parameters
+ /// Each of them is the Body of a constant with the fn's type parameters
/// in scope, but a separate set of locals.
- pub promoted: IndexVec<Promoted, Mir<'tcx>>,
+ pub promoted: IndexVec<Promoted, Body<'tcx>>,
/// Yields type of the function, if it is a generator.
pub yield_ty: Option<Ty<'tcx>>,
/// Generator drop glue
- pub generator_drop: Option<Box<Mir<'tcx>>>,
+ pub generator_drop: Option<Box<Body<'tcx>>>,
/// The layout of a generator. Produced by the state transformation.
pub generator_layout: Option<GeneratorLayout<'tcx>>,
cache: cache::Cache,
}
-impl<'tcx> Mir<'tcx> {
+impl<'tcx> Body<'tcx> {
pub fn new(
basic_blocks: IndexVec<BasicBlock, BasicBlockData<'tcx>>,
source_scopes: IndexVec<SourceScope, SourceScopeData>,
source_scope_local_data: ClearCrossCrate<IndexVec<SourceScope, SourceScopeLocalData>>,
- promoted: IndexVec<Promoted, Mir<'tcx>>,
+ promoted: IndexVec<Promoted, Body<'tcx>>,
yield_ty: Option<Ty<'tcx>>,
local_decls: LocalDecls<'tcx>,
user_type_annotations: CanonicalUserTypeAnnotations<'tcx>,
local_decls.len()
);
- Mir {
+ Body {
phase: MirPhase::Build,
basic_blocks,
source_scopes,
ExplicitUnsafe(hir::HirId),
}
-impl_stable_hash_for!(struct Mir<'tcx> {
+impl_stable_hash_for!(struct Body<'tcx> {
phase,
basic_blocks,
source_scopes,
cache
});
-impl<'tcx> Index<BasicBlock> for Mir<'tcx> {
+impl<'tcx> Index<BasicBlock> for Body<'tcx> {
type Output = BasicBlockData<'tcx>;
#[inline]
}
}
-impl<'tcx> IndexMut<BasicBlock> for Mir<'tcx> {
+impl<'tcx> IndexMut<BasicBlock> for Body<'tcx> {
#[inline]
fn index_mut(&mut self, index: BasicBlock) -> &mut BasicBlockData<'tcx> {
&mut self.basic_blocks_mut()[index]
}
}
-/// Classifies locals into categories. See `Mir::local_kind`.
+/// Classifies locals into categories. See `Body::local_kind`.
#[derive(PartialEq, Eq, Debug, HashStable)]
pub enum LocalKind {
/// User-declared variable binding
.map(|&u| {
tcx.mk_const(ty::Const {
val: ConstValue::Scalar(
- Scalar::Bits {
- bits: u,
- size: size.bytes() as u8,
- }.into(),
+ Scalar::from_uint(u, size).into(),
),
ty: switch_ty,
}).to_string().into()
}
}
-impl<'tcx> graph::DirectedGraph for Mir<'tcx> {
+impl<'tcx> graph::DirectedGraph for Body<'tcx> {
type Node = BasicBlock;
}
-impl<'tcx> graph::WithNumNodes for Mir<'tcx> {
+impl<'tcx> graph::WithNumNodes for Body<'tcx> {
fn num_nodes(&self) -> usize {
self.basic_blocks.len()
}
}
-impl<'tcx> graph::WithStartNode for Mir<'tcx> {
+impl<'tcx> graph::WithStartNode for Body<'tcx> {
fn start_node(&self) -> Self::Node {
START_BLOCK
}
}
-impl<'tcx> graph::WithPredecessors for Mir<'tcx> {
+impl<'tcx> graph::WithPredecessors for Body<'tcx> {
fn predecessors<'graph>(
&'graph self,
node: Self::Node,
}
}
-impl<'tcx> graph::WithSuccessors for Mir<'tcx> {
+impl<'tcx> graph::WithSuccessors for Body<'tcx> {
fn successors<'graph>(
&'graph self,
node: Self::Node,
}
}
-impl<'a, 'b> graph::GraphPredecessors<'b> for Mir<'a> {
+impl<'a, 'b> graph::GraphPredecessors<'b> for Body<'a> {
type Item = BasicBlock;
type Iter = IntoIter<BasicBlock>;
}
-impl<'a, 'b> graph::GraphSuccessors<'b> for Mir<'a> {
+impl<'a, 'b> graph::GraphSuccessors<'b> for Body<'a> {
type Item = BasicBlock;
type Iter = iter::Cloned<Successors<'b>>;
}
}
/// Returns `true` if `other` is earlier in the control flow graph than `self`.
- pub fn is_predecessor_of<'tcx>(&self, other: Location, mir: &Mir<'tcx>) -> bool {
+ pub fn is_predecessor_of<'tcx>(&self, other: Location, mir: &Body<'tcx>) -> bool {
// If we are in the same block as the other location and are an earlier statement
// then we are a predecessor of `other`.
if self.block == other.block && self.statement_index < other.statement_index {
}
BraceStructTypeFoldableImpl! {
- impl<'tcx> TypeFoldable<'tcx> for Mir<'tcx> {
+ impl<'tcx> TypeFoldable<'tcx> for Body<'tcx> {
phase,
basic_blocks,
source_scopes,
/// A preorder traversal of this graph is either `A B D C` or `A C D B`
#[derive(Clone)]
pub struct Preorder<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
visited: BitSet<BasicBlock>,
worklist: Vec<BasicBlock>,
root_is_start_block: bool,
}
impl<'a, 'tcx> Preorder<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> Preorder<'a, 'tcx> {
+ pub fn new(mir: &'a Body<'tcx>, root: BasicBlock) -> Preorder<'a, 'tcx> {
let worklist = vec![root];
Preorder {
}
}
-pub fn preorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> Preorder<'a, 'tcx> {
+pub fn preorder<'a, 'tcx>(mir: &'a Body<'tcx>) -> Preorder<'a, 'tcx> {
Preorder::new(mir, START_BLOCK)
}
///
/// A Postorder traversal of this graph is `D B C A` or `D C B A`
pub struct Postorder<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
visited: BitSet<BasicBlock>,
visit_stack: Vec<(BasicBlock, Successors<'a>)>,
root_is_start_block: bool,
}
impl<'a, 'tcx> Postorder<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> Postorder<'a, 'tcx> {
+ pub fn new(mir: &'a Body<'tcx>, root: BasicBlock) -> Postorder<'a, 'tcx> {
let mut po = Postorder {
mir,
visited: BitSet::new_empty(mir.basic_blocks().len()),
}
}
-pub fn postorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> Postorder<'a, 'tcx> {
+pub fn postorder<'a, 'tcx>(mir: &'a Body<'tcx>) -> Postorder<'a, 'tcx> {
Postorder::new(mir, START_BLOCK)
}
/// to re-use the traversal
#[derive(Clone)]
pub struct ReversePostorder<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
blocks: Vec<BasicBlock>,
idx: usize
}
impl<'a, 'tcx> ReversePostorder<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> ReversePostorder<'a, 'tcx> {
+ pub fn new(mir: &'a Body<'tcx>, root: BasicBlock) -> ReversePostorder<'a, 'tcx> {
let blocks : Vec<_> = Postorder::new(mir, root).map(|(bb, _)| bb).collect();
let len = blocks.len();
}
-pub fn reverse_postorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> ReversePostorder<'a, 'tcx> {
+pub fn reverse_postorder<'a, 'tcx>(mir: &'a Body<'tcx>) -> ReversePostorder<'a, 'tcx> {
ReversePostorder::new(mir, START_BLOCK)
}
// Override these, and call `self.super_xxx` to revert back to the
// default behavior.
- fn visit_mir(&mut self, mir: & $($mutability)? Mir<'tcx>) {
- self.super_mir(mir);
+ fn visit_body(&mut self, mir: & $($mutability)? Body<'tcx>) {
+ self.super_body(mir);
}
fn visit_basic_block_data(&mut self,
// The `super_xxx` methods comprise the default behavior and are
// not meant to be overridden.
- fn super_mir(&mut self,
- mir: & $($mutability)? Mir<'tcx>) {
+ fn super_body(&mut self,
+ mir: & $($mutability)? Body<'tcx>) {
if let Some(yield_ty) = &$($mutability)? mir.yield_ty {
self.visit_ty(yield_ty, TyContext::YieldTy(SourceInfo {
span: mir.span,
}
// for best performance, we want to use an iterator rather
- // than a for-loop, to avoid calling Mir::invalidate for
+ // than a for-loop, to avoid calling `mir::Body::invalidate` for
// each basic block.
macro_rules! basic_blocks {
(mut) => (mir.basic_blocks_mut().iter_enumerated_mut());
// Convenience methods
- fn visit_location(&mut self, mir: & $($mutability)? Mir<'tcx>, location: Location) {
+ fn visit_location(&mut self, mir: & $($mutability)? Body<'tcx>, location: Location) {
let basic_block = & $($mutability)? mir[location.block];
if basic_block.statements.len() == location.statement_index {
if let Some(ref $($mutability)? terminator) = basic_block.terminator {
desc { "getting a list of all mir_keys" }
}
- /// Maps DefId's that have an associated Mir to the result
+ /// Maps DefId's that have an associated `mir::Body` to the result
/// of the MIR qualify_consts pass. The actual meaning of
/// the value isn't known except to the pass itself.
query mir_const_qualif(key: DefId) -> (u8, &'tcx BitSet<mir::Local>) {
/// Fetch the MIR for a given `DefId` right after it's built - this includes
/// unreachable code.
- query mir_built(_: DefId) -> &'tcx Steal<mir::Mir<'tcx>> {}
+ query mir_built(_: DefId) -> &'tcx Steal<mir::Body<'tcx>> {}
/// Fetch the MIR for a given `DefId` up till the point where it is
/// ready for const evaluation.
///
/// See the README for the `mir` module for details.
- query mir_const(_: DefId) -> &'tcx Steal<mir::Mir<'tcx>> {
+ query mir_const(_: DefId) -> &'tcx Steal<mir::Body<'tcx>> {
no_hash
}
- query mir_validated(_: DefId) -> &'tcx Steal<mir::Mir<'tcx>> {
+ query mir_validated(_: DefId) -> &'tcx Steal<mir::Body<'tcx>> {
no_hash
}
/// MIR after our optimization passes have run. This is MIR that is ready
/// for codegen. This is also the only query that can fetch non-local MIR, at present.
- query optimized_mir(key: DefId) -> &'tcx mir::Mir<'tcx> {
+ query optimized_mir(key: DefId) -> &'tcx mir::Body<'tcx> {
cache { key.is_local() }
load_cached(tcx, id) {
- let mir: Option<crate::mir::Mir<'tcx>> = tcx.queries.on_disk_cache
+ let mir: Option<crate::mir::Body<'tcx>> = tcx.queries.on_disk_cache
.try_load_query_result(tcx, id);
mir.map(|x| tcx.alloc_mir(x))
}
/// in the case of closures, this will be redirected to the enclosing function.
query region_scope_tree(_: DefId) -> &'tcx region::ScopeTree {}
- query mir_shims(key: ty::InstanceDef<'tcx>) -> &'tcx mir::Mir<'tcx> {
+ query mir_shims(key: ty::InstanceDef<'tcx>) -> &'tcx mir::Body<'tcx> {
no_force
desc { |tcx| "generating MIR shim for `{}`", tcx.def_path_str(key.def_id()) }
}
}
#[derive(Clone, PartialEq, Hash)]
-pub enum PgoGenerate {
+pub enum SwitchWithOptPath {
Enabled(Option<PathBuf>),
Disabled,
}
-impl PgoGenerate {
+impl SwitchWithOptPath {
pub fn enabled(&self) -> bool {
match *self {
- PgoGenerate::Enabled(_) => true,
- PgoGenerate::Disabled => false,
+ SwitchWithOptPath::Enabled(_) => true,
+ SwitchWithOptPath::Disabled => false,
}
}
}
pub const parse_linker_plugin_lto: Option<&str> =
Some("either a boolean (`yes`, `no`, `on`, `off`, etc), \
or the path to the linker plugin");
- pub const parse_pgo_generate: Option<&str> =
+ pub const parse_switch_with_opt_path: Option<&str> =
Some("an optional path to the profiling data output directory");
pub const parse_merge_functions: Option<&str> =
Some("one of: `disabled`, `trampolines`, or `aliases`");
#[allow(dead_code)]
mod $mod_set {
- use super::{$struct_name, Passes, Sanitizer, LtoCli, LinkerPluginLto, PgoGenerate};
+ use super::{$struct_name, Passes, Sanitizer, LtoCli, LinkerPluginLto, SwitchWithOptPath};
use rustc_target::spec::{LinkerFlavor, MergeFunctions, PanicStrategy, RelroLevel};
use std::path::PathBuf;
use std::str::FromStr;
true
}
- fn parse_pgo_generate(slot: &mut PgoGenerate, v: Option<&str>) -> bool {
+ fn parse_switch_with_opt_path(slot: &mut SwitchWithOptPath, v: Option<&str>) -> bool {
*slot = match v {
- None => PgoGenerate::Enabled(None),
- Some(path) => PgoGenerate::Enabled(Some(PathBuf::from(path))),
+ None => SwitchWithOptPath::Enabled(None),
+ Some(path) => SwitchWithOptPath::Enabled(Some(PathBuf::from(path))),
};
true
}
"extra arguments to prepend to the linker invocation (space separated)"),
profile: bool = (false, parse_bool, [TRACKED],
"insert profiling code"),
- pgo_gen: PgoGenerate = (PgoGenerate::Disabled, parse_pgo_generate, [TRACKED],
+ pgo_gen: SwitchWithOptPath = (SwitchWithOptPath::Disabled,
+ parse_switch_with_opt_path, [TRACKED],
"Generate PGO profile data, to a given file, or to the default location if it's empty."),
pgo_use: Option<PathBuf> = (None, parse_opt_pathbuf, [TRACKED],
"Use PGO profile data from the given profile file."),
"don't interleave execution of lints; allows benchmarking individual lints"),
crate_attr: Vec<String> = (Vec::new(), parse_string_push, [TRACKED],
"inject the given attribute in the crate"),
- self_profile: bool = (false, parse_bool, [UNTRACKED],
+ self_profile: SwitchWithOptPath = (SwitchWithOptPath::Disabled,
+ parse_switch_with_opt_path, [UNTRACKED],
"run the self profiler and output the raw event data"),
self_profile_events: Option<Vec<String>> = (None, parse_opt_comma_list, [UNTRACKED],
"specifies which kinds of events get recorded by the self profiler"),
use std::path::PathBuf;
use std::collections::hash_map::DefaultHasher;
use super::{CrateType, DebugInfo, ErrorOutputType, OptLevel, OutputTypes,
- Passes, Sanitizer, LtoCli, LinkerPluginLto, PgoGenerate};
+ Passes, Sanitizer, LtoCli, LinkerPluginLto, SwitchWithOptPath};
use syntax::feature_gate::UnstableFeatures;
use rustc_target::spec::{MergeFunctions, PanicStrategy, RelroLevel, TargetTriple};
use syntax::edition::Edition;
impl_dep_tracking_hash_via_hash!(TargetTriple);
impl_dep_tracking_hash_via_hash!(Edition);
impl_dep_tracking_hash_via_hash!(LinkerPluginLto);
- impl_dep_tracking_hash_via_hash!(PgoGenerate);
+ impl_dep_tracking_hash_via_hash!(SwitchWithOptPath);
impl_dep_tracking_hash_for_sortable_vec_of!(String);
impl_dep_tracking_hash_for_sortable_vec_of!(PathBuf);
build_session_options_and_crate_config,
to_crate_config
};
- use crate::session::config::{LtoCli, LinkerPluginLto, PgoGenerate, ExternEntry};
+ use crate::session::config::{LtoCli, LinkerPluginLto, SwitchWithOptPath, ExternEntry};
use crate::session::build_session;
use crate::session::search_paths::SearchPath;
use std::collections::{BTreeMap, BTreeSet};
assert!(reference.dep_tracking_hash() != opts.dep_tracking_hash());
opts = reference.clone();
- opts.debugging_opts.pgo_gen = PgoGenerate::Enabled(None);
+ opts.debugging_opts.pgo_gen = SwitchWithOptPath::Enabled(None);
assert_ne!(reference.dep_tracking_hash(), opts.dep_tracking_hash());
opts = reference.clone();
use crate::lint::builtin::BuiltinLintDiagnostics;
use crate::middle::allocator::AllocatorKind;
use crate::middle::dependency_format;
-use crate::session::config::OutputType;
+use crate::session::config::{OutputType, SwitchWithOptPath};
use crate::session::search_paths::{PathKind, SearchPath};
use crate::util::nodemap::{FxHashMap, FxHashSet};
use crate::util::common::{duration_to_secs_str, ErrorReported};
driver_lint_caps: FxHashMap<lint::LintId, lint::Level>,
) -> Session {
let self_profiler =
- if sopts.debugging_opts.self_profile {
- let profiler = SelfProfiler::new(&sopts.debugging_opts.self_profile_events);
+ if let SwitchWithOptPath::Enabled(ref d) = sopts.debugging_opts.self_profile {
+ let directory = if let Some(ref directory) = d {
+ directory
+ } else {
+ std::path::Path::new(".")
+ };
+
+ let profiler = SelfProfiler::new(
+ directory,
+ sopts.crate_name.as_ref().map(|s| &s[..]),
+ &sopts.debugging_opts.self_profile_events
+ );
match profiler {
Ok(profiler) => {
crate::ty::query::QueryName::register_with_profiler(&profiler);
continue;
}
- // Call infcx.resolve_type_vars_if_possible to see if we can
+ // Call infcx.resolve_vars_if_possible to see if we can
// get rid of any inference variables.
- let obligation = infcx.resolve_type_vars_if_possible(
+ let obligation = infcx.resolve_vars_if_possible(
&Obligation::new(dummy_cause.clone(), new_env, pred)
);
let result = select.select(&obligation);
fresh_preds.insert(self.clean_pred(select.infcx(), predicate));
// Resolve any inference variables that we can, to help selection succeed
- predicate = select.infcx().resolve_type_vars_if_possible(&predicate);
+ predicate = select.infcx().resolve_vars_if_possible(&predicate);
// We only add a predicate as a user-displayable bound if
// it involves a generic parameter, and doesn't contain
obligation: PredicateObligation<'tcx>
) -> InEnvironment<'tcx, PredicateObligation<'tcx>> {
assert!(!infcx.is_in_snapshot());
- let obligation = infcx.resolve_type_vars_if_possible(&obligation);
+ let obligation = infcx.resolve_vars_if_possible(&obligation);
let environment = match obligation.param_env.def_id {
Some(def_id) => infcx.tcx.environment(def_id),
bug!("Encountered errors `{:?}` resolving bounds after type-checking", errors);
}
- let result = self.resolve_type_vars_if_possible(result);
+ let result = self.resolve_vars_if_possible(result);
let result = self.tcx.erase_regions(&result);
self.tcx.lift_to_global(&result).unwrap_or_else(||
- bug!("Uninferred types/regions in `{:?}`", result)
+ bug!("Uninferred types/regions/consts in `{:?}`", result)
)
}
}
a_impl_header.predicates
.iter()
.chain(&b_impl_header.predicates)
- .map(|p| infcx.resolve_type_vars_if_possible(p))
+ .map(|p| infcx.resolve_vars_if_possible(p))
.map(|p| Obligation { cause: ObligationCause::dummy(),
param_env,
recursion_depth: 0,
return None
}
- let impl_header = selcx.infcx().resolve_type_vars_if_possible(&a_impl_header);
+ let impl_header = selcx.infcx().resolve_vars_if_possible(&a_impl_header);
let intercrate_ambiguity_causes = selcx.take_intercrate_ambiguity_causes();
debug!("overlap: intercrate_ambiguity_causes={:#?}", intercrate_ambiguity_causes);
error: &MismatchedProjectionTypes<'tcx>)
{
let predicate =
- self.resolve_type_vars_if_possible(&obligation.predicate);
+ self.resolve_vars_if_possible(&obligation.predicate);
if predicate.references_error() {
return
where T: fmt::Display + TypeFoldable<'tcx>
{
let predicate =
- self.resolve_type_vars_if_possible(&obligation.predicate);
+ self.resolve_vars_if_possible(&obligation.predicate);
let mut err = struct_span_err!(self.tcx.sess, obligation.cause.span, E0275,
"overflow evaluating the requirement `{}`",
predicate);
/// we do not suggest increasing the overflow limit, which is not
/// going to help).
pub fn report_overflow_error_cycle(&self, cycle: &[PredicateObligation<'tcx>]) -> ! {
- let cycle = self.resolve_type_vars_if_possible(&cycle.to_owned());
+ let cycle = self.resolve_vars_if_possible(&cycle.to_owned());
assert!(cycle.len() > 0);
debug!("report_overflow_error_cycle: cycle={:?}", cycle);
fn get_parent_trait_ref(&self, code: &ObligationCauseCode<'tcx>) -> Option<String> {
match code {
&ObligationCauseCode::BuiltinDerivedObligation(ref data) => {
- let parent_trait_ref = self.resolve_type_vars_if_possible(
+ let parent_trait_ref = self.resolve_vars_if_possible(
&data.parent_trait_ref);
match self.get_parent_trait_ref(&data.parent_code) {
Some(t) => Some(t),
match obligation.predicate {
ty::Predicate::Trait(ref trait_predicate) => {
let trait_predicate =
- self.resolve_type_vars_if_possible(trait_predicate);
+ self.resolve_vars_if_possible(trait_predicate);
if self.tcx.sess.has_errors() && trait_predicate.references_error() {
return;
}
ty::Predicate::RegionOutlives(ref predicate) => {
- let predicate = self.resolve_type_vars_if_possible(predicate);
+ let predicate = self.resolve_vars_if_possible(predicate);
let err = self.region_outlives_predicate(&obligation.cause,
&predicate).err().unwrap();
struct_span_err!(
ty::Predicate::Projection(..) | ty::Predicate::TypeOutlives(..) => {
let predicate =
- self.resolve_type_vars_if_possible(&obligation.predicate);
+ self.resolve_vars_if_possible(&obligation.predicate);
struct_span_err!(self.tcx.sess, span, E0280,
"the requirement `{}` is not satisfied",
predicate)
}
OutputTypeParameterMismatch(ref found_trait_ref, ref expected_trait_ref, _) => {
- let found_trait_ref = self.resolve_type_vars_if_possible(&*found_trait_ref);
- let expected_trait_ref = self.resolve_type_vars_if_possible(&*expected_trait_ref);
+ let found_trait_ref = self.resolve_vars_if_possible(&*found_trait_ref);
+ let expected_trait_ref = self.resolve_vars_if_possible(&*expected_trait_ref);
if expected_trait_ref.self_ty().references_error() {
return;
// ambiguous impls. The latter *ought* to be a
// coherence violation, so we don't report it here.
- let predicate = self.resolve_type_vars_if_possible(&obligation.predicate);
+ let predicate = self.resolve_vars_if_possible(&obligation.predicate);
let span = obligation.cause.span;
debug!("maybe_report_ambiguity(predicate={:?}, obligation={:?})",
err.note("shared static variables must have a type that implements `Sync`");
}
ObligationCauseCode::BuiltinDerivedObligation(ref data) => {
- let parent_trait_ref = self.resolve_type_vars_if_possible(&data.parent_trait_ref);
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
let ty = parent_trait_ref.skip_binder().self_ty();
err.note(&format!("required because it appears within the type `{}`", ty));
obligated_types.push(ty);
}
}
ObligationCauseCode::ImplDerivedObligation(ref data) => {
- let parent_trait_ref = self.resolve_type_vars_if_possible(&data.parent_trait_ref);
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
err.note(
&format!("required because of the requirements on the impl of `{}` for `{}`",
parent_trait_ref,
obligated_types: &mut Vec<&ty::TyS<'tcx>>,
cause_code: &ObligationCauseCode<'tcx>) -> bool {
if let ObligationCauseCode::BuiltinDerivedObligation(ref data) = cause_code {
- let parent_trait_ref = self.resolve_type_vars_if_possible(&data.parent_trait_ref);
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
if obligated_types.iter().any(|ot| ot == &parent_trait_ref.skip_binder().self_ty()) {
return true;
{
// this helps to reduce duplicate errors, as well as making
// debug output much nicer to read and so on.
- let obligation = infcx.resolve_type_vars_if_possible(&obligation);
+ let obligation = infcx.resolve_vars_if_possible(&obligation);
debug!("register_predicate_obligation(obligation={:?})", obligation);
}) {
debug!("process_predicate: pending obligation {:?} still stalled on {:?}",
self.selcx.infcx()
- .resolve_type_vars_if_possible(&pending_obligation.obligation),
+ .resolve_vars_if_possible(&pending_obligation.obligation),
pending_obligation.stalled_on);
return ProcessResult::Unchanged;
}
if obligation.predicate.has_infer_types() {
obligation.predicate =
- self.selcx.infcx().resolve_type_vars_if_possible(&obligation.predicate);
+ self.selcx.infcx().resolve_vars_if_possible(&obligation.predicate);
}
debug!("process_obligation: obligation = {:?}", obligation);
trait_ref_type_vars(self.selcx, data.to_poly_trait_ref());
debug!("process_predicate: pending obligation {:?} now stalled on {:?}",
- self.selcx.infcx().resolve_type_vars_if_possible(obligation),
+ self.selcx.infcx().resolve_vars_if_possible(obligation),
pending_obligation.stalled_on);
ProcessResult::Unchanged
{
t.skip_binder() // ok b/c this check doesn't care about regions
.input_types()
- .map(|t| selcx.infcx().resolve_type_vars_if_possible(&t))
+ .map(|t| selcx.infcx().resolve_vars_if_possible(&t))
.filter(|t| t.has_infer_types())
.flat_map(|t| t.walk())
.filter(|t| match t.sty { ty::Infer(_) => true, _ => false })
debug!("fully_normalize: select_all_or_error start");
fulfill_cx.select_all_or_error(infcx)?;
debug!("fully_normalize: select_all_or_error complete");
- let resolved_value = infcx.resolve_type_vars_if_possible(&normalized_value);
+ let resolved_value = infcx.resolve_vars_if_possible(&normalized_value);
debug!("fully_normalize: resolved_value={:?}", resolved_value);
Ok(resolved_value)
}
}
fn fold<T:TypeFoldable<'tcx>>(&mut self, value: &T) -> T {
- let value = self.selcx.infcx().resolve_type_vars_if_possible(value);
+ let value = self.selcx.infcx().resolve_vars_if_possible(value);
if !value.has_projections() {
value
{
let infcx = selcx.infcx();
- let projection_ty = infcx.resolve_type_vars_if_possible(&projection_ty);
+ let projection_ty = infcx.resolve_vars_if_possible(&projection_ty);
let cache_key = ProjectionCacheKey { ty: projection_ty };
debug!("opt_normalize_projection_type(\
// from a specific call to `opt_normalize_projection_type` - if
// there's no precise match, the original cache entry is "stranded"
// anyway.
- ty: infcx.resolve_type_vars_if_possible(&predicate.projection_ty)
+ ty: infcx.resolve_vars_if_possible(&predicate.projection_ty)
})
}
}
&orig_values,
result)
{
- let ty = self.infcx.resolve_type_vars_if_possible(&ty);
+ let ty = self.infcx.resolve_vars_if_possible(&ty);
let kinds = value.into_kinds_reporting_overflows(tcx, span, ty);
return InferOk {
value: kinds,
region_obligations
.iter()
.map(|(_, r_o)| (r_o.sup_type, r_o.sub_region))
- .map(|(ty, r)| (infcx.resolve_type_vars_if_possible(&ty), r)),
+ .map(|(ty, r)| (infcx.resolve_vars_if_possible(&ty), r)),
&region_constraint_data,
);
let obligation = &stack.obligation;
let predicate = self.infcx()
- .resolve_type_vars_if_possible(&obligation.predicate);
+ .resolve_vars_if_possible(&obligation.predicate);
// OK to skip binder because of the nature of the
// trait-ref-is-knowable check, which does not care about
cause: obligation.cause.clone(),
recursion_depth: obligation.recursion_depth,
predicate: self.infcx()
- .resolve_type_vars_if_possible(&obligation.predicate),
+ .resolve_vars_if_possible(&obligation.predicate),
};
if obligation.predicate.skip_binder().self_ty().is_ty_var() {
snapshot: &CombinedSnapshot<'_, 'tcx>,
) -> bool {
let poly_trait_predicate = self.infcx()
- .resolve_type_vars_if_possible(&obligation.predicate);
+ .resolve_vars_if_possible(&obligation.predicate);
let (placeholder_trait_predicate, placeholder_map) = self.infcx()
.replace_bound_vars_with_placeholders(&poly_trait_predicate);
debug!(
// Now resolve the *substitution* we built for the target earlier, replacing
// the inference variables inside with whatever we got from fulfillment.
- Ok(infcx.resolve_type_vars_if_possible(&target_substs))
+ Ok(infcx.resolve_vars_if_possible(&target_substs))
}
}
})
use crate::middle::lang_items;
use crate::middle::resolve_lifetime::{self, ObjectLifetimeDefault};
use crate::middle::stability;
-use crate::mir::{self, Mir, interpret, ProjectionKind};
+use crate::mir::{self, Body, interpret, ProjectionKind};
use crate::mir::interpret::{ConstValue, Allocation, Scalar};
use crate::ty::subst::{Kind, InternalSubsts, SubstsRef, Subst};
use crate::ty::ReprOptions;
generics: TypedArena<ty::Generics>,
trait_def: TypedArena<ty::TraitDef>,
adt_def: TypedArena<ty::AdtDef>,
- steal_mir: TypedArena<Steal<Mir<'tcx>>>,
- mir: TypedArena<Mir<'tcx>>,
+ steal_mir: TypedArena<Steal<Body<'tcx>>>,
+ mir: TypedArena<Body<'tcx>>,
tables: TypedArena<ty::TypeckTables<'tcx>>,
/// miri allocations
const_allocs: TypedArena<interpret::Allocation>,
CommonConsts {
err: mk_const(ty::Const {
- val: ConstValue::Scalar(Scalar::Bits { bits: 0, size: 0 }),
+ val: ConstValue::Scalar(Scalar::zst()),
ty: types.err,
}),
}
self.global_arenas.generics.alloc(generics)
}
- pub fn alloc_steal_mir(self, mir: Mir<'gcx>) -> &'gcx Steal<Mir<'gcx>> {
+ pub fn alloc_steal_mir(self, mir: Body<'gcx>) -> &'gcx Steal<Body<'gcx>> {
self.global_arenas.steal_mir.alloc(Steal::new(mir))
}
- pub fn alloc_mir(self, mir: Mir<'gcx>) -> &'gcx Mir<'gcx> {
+ pub fn alloc_mir(self, mir: Body<'gcx>) -> &'gcx Body<'gcx> {
self.global_arenas.mir.alloc(mir)
}
}
};
+ macro_rules! pluralise {
+ ($x:expr) => {
+ if $x != 1 { "s" } else { "" }
+ };
+ }
+
match *self {
CyclicTy(_) => write!(f, "cyclic type of infinite size"),
Mismatch => write!(f, "types differ"),
values.found)
}
Mutability => write!(f, "types differ in mutability"),
- FixedArraySize(values) => {
- write!(f, "expected an array with a fixed size of {} elements, \
- found one with {} elements",
+ TupleSize(values) => {
+ write!(f, "expected a tuple with {} element{}, \
+ found one with {} element{}",
values.expected,
- values.found)
+ pluralise!(values.expected),
+ values.found,
+ pluralise!(values.found))
}
- TupleSize(values) => {
- write!(f, "expected a tuple with {} elements, \
- found one with {} elements",
+ FixedArraySize(values) => {
+ write!(f, "expected an array with a fixed size of {} element{}, \
+ found one with {} element{}",
values.expected,
- values.found)
+ pluralise!(values.expected),
+ values.found,
+ pluralise!(values.found))
}
ArgCount => {
write!(f, "incorrect number of function parameters")
tcx.def_path_str(values.found))
}),
ProjectionBoundsLength(ref values) => {
- write!(f, "expected {} associated type bindings, found {}",
+ write!(f, "expected {} associated type binding{}, found {}",
values.expected,
+ pluralise!(values.expected),
values.found)
},
ExistentialMismatch(ref values) => {
&format!("trait `{}`", values.found))
}
ConstMismatch(ref values) => {
- write!(f, "expected `{:?}`, found `{:?}`", values.expected, values.found)
+ write!(f, "expected `{}`, found `{}`", values.expected, values.found)
}
}
}
use crate::infer::canonical::Canonical;
use crate::middle::lang_items::{FnTraitLangItem, FnMutTraitLangItem, FnOnceTraitLangItem};
use crate::middle::resolve_lifetime::ObjectLifetimeDefault;
-use crate::mir::Mir;
+use crate::mir::Body;
use crate::mir::interpret::{GlobalId, ErrorHandled};
use crate::mir::GeneratorLayout;
use crate::session::CrateDisambiguator;
/// Returns the possibly-auto-generated MIR of a `(DefId, Subst)` pair.
pub fn instance_mir(self, instance: ty::InstanceDef<'gcx>)
- -> &'gcx Mir<'gcx>
+ -> &'gcx Body<'gcx>
{
match instance {
ty::InstanceDef::Item(did) => {
p!(write("{}", name));
return Ok(self);
}
- if let ConstValue::Scalar(Scalar::Bits { bits, .. }) = ct.val {
+ if let ConstValue::Scalar(Scalar::Raw { data, .. }) = ct.val {
match ct.ty.sty {
ty::Bool => {
- p!(write("{}", if bits == 0 { "false" } else { "true" }));
+ p!(write("{}", if data == 0 { "false" } else { "true" }));
return Ok(self);
},
ty::Float(ast::FloatTy::F32) => {
- p!(write("{}f32", Single::from_bits(bits)));
+ p!(write("{}f32", Single::from_bits(data)));
return Ok(self);
},
ty::Float(ast::FloatTy::F64) => {
- p!(write("{}f64", Double::from_bits(bits)));
+ p!(write("{}f64", Double::from_bits(data)));
return Ok(self);
},
ty::Uint(ui) => {
- p!(write("{}{}", bits, ui));
+ p!(write("{}{}", data, ui));
return Ok(self);
},
ty::Int(i) =>{
let size = self.tcx().layout_of(ty::ParamEnv::empty().and(ty))
.unwrap()
.size;
- p!(write("{}{}", sign_extend(bits, size) as i128, i));
+ p!(write("{}{}", sign_extend(data, size) as i128, i));
return Ok(self);
},
ty::Char => {
- p!(write("{:?}", ::std::char::from_u32(bits as u32).unwrap()));
+ p!(write("{:?}", ::std::char::from_u32(data as u32).unwrap()));
return Ok(self);
}
_ => {},
use crate::ty::subst::{Kind, UnpackedKind, SubstsRef};
use crate::ty::{self, Ty, TyCtxt, TypeFoldable};
use crate::ty::error::{ExpectedFound, TypeError};
-use crate::mir::interpret::{GlobalId, ConstValue, Scalar};
-use crate::util::common::ErrorReported;
-use syntax_pos::DUMMY_SP;
+use crate::mir::interpret::{ConstValue, Scalar, GlobalId};
use std::rc::Rc;
use std::iter;
use rustc_target::spec::abi;
(&ty::Array(a_t, sz_a), &ty::Array(b_t, sz_b)) =>
{
let t = relation.relate(&a_t, &b_t)?;
- let to_u64 = |x: ty::Const<'tcx>| -> Result<u64, ErrorReported> {
- match x.val {
- // FIXME(const_generics): this doesn't work right now,
- // because it tries to relate an `Infer` to a `Param`.
- ConstValue::Unevaluated(def_id, substs) => {
- // FIXME(eddyb) get the right param_env.
- let param_env = ty::ParamEnv::empty();
- if let Some(substs) = tcx.lift_to_global(&substs) {
- let instance = ty::Instance::resolve(
- tcx.global_tcx(),
- param_env,
- def_id,
- substs,
- );
- if let Some(instance) = instance {
- let cid = GlobalId {
- instance,
- promoted: None,
- };
- if let Some(s) = tcx.const_eval(param_env.and(cid))
- .ok()
- .map(|c| c.unwrap_usize(tcx)) {
- return Ok(s)
- }
- }
+ match relation.relate(&sz_a, &sz_b) {
+ Ok(sz) => Ok(tcx.mk_ty(ty::Array(t, sz))),
+ Err(err) => {
+ // Check whether the lengths are both concrete/known values,
+ // but are unequal, for better diagnostics.
+ match (sz_a.assert_usize(tcx), sz_b.assert_usize(tcx)) {
+ (Some(sz_a_val), Some(sz_b_val)) => {
+ Err(TypeError::FixedArraySize(
+ expected_found(relation, &sz_a_val, &sz_b_val)
+ ))
}
- tcx.sess.delay_span_bug(tcx.def_span(def_id),
- "array length could not be evaluated");
- Err(ErrorReported)
+ _ => return Err(err),
}
- _ => x.assert_usize(tcx).ok_or_else(|| {
- tcx.sess.delay_span_bug(DUMMY_SP,
- "array length could not be evaluated");
- ErrorReported
- })
- }
- };
- match (to_u64(*sz_a), to_u64(*sz_b)) {
- (Ok(sz_a_u64), Ok(sz_b_u64)) => {
- if sz_a_u64 == sz_b_u64 {
- Ok(tcx.mk_ty(ty::Array(t, sz_a)))
- } else {
- Err(TypeError::FixedArraySize(
- expected_found(relation, &sz_a_u64, &sz_b_u64)))
- }
- }
- // We reported an error or will ICE, so we can return Error.
- (Err(ErrorReported), _) | (_, Err(ErrorReported)) => {
- Ok(tcx.types.err)
}
}
}
{
let tcx = relation.tcx();
+ let eagerly_eval = |x: &'tcx ty::Const<'tcx>| {
+ if let ConstValue::Unevaluated(def_id, substs) = x.val {
+ // FIXME(eddyb) get the right param_env.
+ let param_env = ty::ParamEnv::empty();
+ if let Some(substs) = tcx.lift_to_global(&substs) {
+ let instance = ty::Instance::resolve(
+ tcx.global_tcx(),
+ param_env,
+ def_id,
+ substs,
+ );
+ if let Some(instance) = instance {
+ let cid = GlobalId {
+ instance,
+ promoted: None,
+ };
+ if let Ok(ct) = tcx.const_eval(param_env.and(cid)) {
+ return ct.val;
+ }
+ }
+ }
+ }
+ x.val
+ };
+
// Currently, the values that can be unified are those that
// implement both `PartialEq` and `Eq`, corresponding to
// `structural_match` types.
// FIXME(const_generics): check for `structural_match` synthetic attribute.
- match (a.val, b.val) {
+ match (eagerly_eval(a), eagerly_eval(b)) {
(ConstValue::Infer(_), _) | (_, ConstValue::Infer(_)) => {
// The caller should handle these cases!
bug!("var types encountered in super_relate_consts: {:?} {:?}", a, b)
(ConstValue::Placeholder(p1), ConstValue::Placeholder(p2)) if p1 == p2 => {
Ok(a)
}
- (ConstValue::Scalar(Scalar::Bits { .. }), _) if a == b => {
- Ok(a)
+ (a_val @ ConstValue::Scalar(Scalar::Raw { .. }), b_val @ _)
+ if a.ty == b.ty && a_val == b_val =>
+ {
+ Ok(tcx.mk_const(ty::Const {
+ val: a_val,
+ ty: a.ty,
+ }))
}
(ConstValue::ByRef(..), _) => {
bug!(
}))
}
- _ => {
- Err(TypeError::ConstMismatch(expected_found(relation, &a, &b)))
- }
+ _ => Err(TypeError::ConstMismatch(expected_found(relation, &a, &b))),
}
}
/// optimization, but that'd be expensive. And yet we don't just want
/// to mutate it in place, because that would spoil the idea that
/// queries are these pure functions that produce an immutable value
-/// (since if you did the query twice, you could observe the
-/// mutations). So instead we have the query produce a `&'tcx
-/// Steal<Mir<'tcx>>` (to be very specific). Now we can read from this
+/// (since if you did the query twice, you could observe the mutations).
+/// So instead we have the query produce a `&'tcx Steal<mir::Body<'tcx>>`
+/// (to be very specific). Now we can read from this
/// as much as we want (using `borrow()`), but you can also
/// `steal()`. Once you steal, any further attempt to read will panic.
/// Therefore, we know that -- assuming no ICE -- nobody is observing
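The doc comment above describes the `Steal` wrapper that lets a query hand out an "immutable" result which one later pass may nonetheless take by value. A minimal standalone sketch of that idea, using `RefCell<Option<T>>` (an assumed simplification: the real rustc type lives in `rustc_data_structures` and is lock-based):

```rust
use std::cell::{Ref, RefCell};

// Sketch of the `Steal` idea: readers may `borrow()` freely until some
// pass calls `steal()`, after which any further read panics.
struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    // Read access; panics if the value was already stolen.
    fn borrow(&self) -> Ref<'_, T> {
        Ref::map(self.value.borrow(), |opt| {
            opt.as_ref().expect("attempted to read from stolen value")
        })
    }

    // Take ownership, leaving the cell empty so later reads panic.
    fn steal(&self) -> T {
        self.value.borrow_mut().take().expect("value already stolen")
    }
}

fn main() {
    let s = Steal::new(vec![1, 2, 3]);
    assert_eq!(s.borrow().len(), 3); // reads are fine before stealing
    let v = s.steal();
    assert_eq!(v, vec![1, 2, 3]);
    // `s.borrow()` here would panic: the value has been stolen.
}
```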
use crate::hir;
use crate::hir::def_id::DefId;
use crate::infer::canonical::Canonical;
-use crate::mir::interpret::{ConstValue, truncate};
+use crate::mir::interpret::ConstValue;
use crate::middle::region;
use polonius_engine::Atom;
use rustc_data_structures::indexed_vec::Idx;
let size = tcx.layout_of(ty).unwrap_or_else(|e| {
panic!("could not compute layout for {:?}: {:?}", ty, e)
}).size;
- let truncated = truncate(bits, size);
- assert_eq!(truncated, bits, "from_bits called with untruncated value");
- Self::from_scalar(tcx, Scalar::Bits { bits, size: size.bytes() as u8 }, ty.value)
+ Self::from_scalar(tcx, Scalar::from_uint(bits, size), ty.value)
}
#[inline]
pub fn zero_sized(tcx: TyCtxt<'_, '_, 'tcx>, ty: Ty<'tcx>) -> &'tcx Self {
- Self::from_scalar(tcx, Scalar::Bits { bits: 0, size: 0 }, ty)
+ Self::from_scalar(tcx, Scalar::zst(), ty)
}
#[inline]
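The hunks above replace hand-rolled truncation plus `Scalar::Bits { .. }` construction with `Scalar::from_uint` and `Scalar::zst()`. A minimal sketch of that representation, with the `Raw { data, size }` field names taken from the diff (the single-variant enum and this exact truncation check are simplifying assumptions; the real rustc type also has a pointer variant):

```rust
// `data` holds the bits, `size` is the width in bytes; `size == 0`
// encodes a zero-sized value.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Scalar {
    Raw { data: u128, size: u8 },
}

impl Scalar {
    // Zero-sized value: no bits, size 0.
    fn zst() -> Self {
        Scalar::Raw { data: 0, size: 0 }
    }

    // Build from an unsigned value, asserting it fits in `size` bytes,
    // mirroring the check the old call sites did by hand.
    fn from_uint(data: u128, size: u8) -> Self {
        let bits = u128::from(size) * 8;
        let truncated = if bits == 128 { data } else { data & ((1u128 << bits) - 1) };
        assert_eq!(truncated, data, "from_uint called with untruncated value");
        Scalar::Raw { data, size }
    }
}

fn main() {
    assert_eq!(Scalar::zst(), Scalar::Raw { data: 0, size: 0 });
    assert_eq!(Scalar::from_uint(0xff, 1), Scalar::Raw { data: 0xff, size: 1 });
}
```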
use std::borrow::Cow;
use std::error::Error;
+use std::fs;
use std::mem::{self, Discriminant};
+use std::path::Path;
use std::process;
use std::thread::ThreadId;
use std::u32;
}
impl SelfProfiler {
- pub fn new(event_filters: &Option<Vec<String>>) -> Result<SelfProfiler, Box<dyn Error>> {
- let filename = format!("pid-{}.rustc_profile", process::id());
- let path = std::path::Path::new(&filename);
- let profiler = Profiler::new(path)?;
+ pub fn new(
+ output_directory: &Path,
+ crate_name: Option<&str>,
+ event_filters: &Option<Vec<String>>
+ ) -> Result<SelfProfiler, Box<dyn Error>> {
+ fs::create_dir_all(output_directory)?;
+
+ let crate_name = crate_name.unwrap_or("unknown-crate");
+ let filename = format!("{}-{}.rustc_profile", crate_name, process::id());
+ let path = output_directory.join(&filename);
+ let profiler = Profiler::new(&path)?;
let query_event_kind = profiler.alloc_string("Query");
let generic_activity_event_kind = profiler.alloc_string("GenericActivity");
use rustc::hir::def_id::LOCAL_CRATE;
use rustc_codegen_ssa::back::write::{CodegenContext, ModuleConfig, run_assembler};
use rustc_codegen_ssa::traits::*;
-use rustc::session::config::{self, OutputType, Passes, Lto, PgoGenerate};
+use rustc::session::config::{self, OutputType, Passes, Lto, SwitchWithOptPath};
use rustc::session::Session;
use rustc::ty::TyCtxt;
use rustc_codegen_ssa::{RLIB_BYTECODE_EXTENSION, ModuleCodegen, CompiledModule};
let inline_threshold = config.inline_threshold;
let pgo_gen_path = match config.pgo_gen {
- PgoGenerate::Enabled(ref opt_dir_path) => {
+ SwitchWithOptPath::Enabled(ref opt_dir_path) => {
let path = if let Some(dir_path) = opt_dir_path {
dir_path.join("default_%m.profraw")
} else {
Some(CString::new(format!("{}", path.display())).unwrap())
}
- PgoGenerate::Disabled => {
+ SwitchWithOptPath::Disabled => {
None
}
};
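The rename from `PgoGenerate` to `SwitchWithOptPath` reflects that the same shape now serves several flags (PGO generation, self-profiling): a switch that is either off, or on with an optional path. A sketch under that reading, with variant names from the diff (the derives and the `enabled` helper are assumptions for illustration):

```rust
use std::path::PathBuf;

// A flag that is either disabled, or enabled with an optional path.
#[derive(Clone, Debug, PartialEq)]
enum SwitchWithOptPath {
    Enabled(Option<PathBuf>),
    Disabled,
}

impl SwitchWithOptPath {
    fn enabled(&self) -> bool {
        matches!(self, SwitchWithOptPath::Enabled(_))
    }
}

fn main() {
    let pgo_gen = SwitchWithOptPath::Enabled(Some(PathBuf::from("/tmp/pgo")));
    assert!(pgo_gen.enabled());
    // As in the codegen hunk above: use the given directory when present,
    // otherwise fall back to a default location.
    let path = match &pgo_gen {
        SwitchWithOptPath::Enabled(Some(dir)) => dir.join("default_%m.profraw"),
        SwitchWithOptPath::Enabled(None) => PathBuf::from("default_%m.profraw"),
        SwitchWithOptPath::Disabled => unreachable!(),
    };
    assert!(path.ends_with("default_%m.profraw"));
}
```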
) -> &'ll Value {
let bitsize = if layout.is_bool() { 1 } else { layout.value.size(self).bits() };
match cv {
- Scalar::Bits { size: 0, .. } => {
+ Scalar::Raw { size: 0, .. } => {
assert_eq!(0, layout.value.size(self).bytes());
self.const_undef(self.type_ix(0))
},
- Scalar::Bits { bits, size } => {
+ Scalar::Raw { data, size } => {
assert_eq!(size as u64, layout.value.size(self).bytes());
- let llval = self.const_uint_big(self.type_ix(bitsize), bits);
+ let llval = self.const_uint_big(self.type_ix(bitsize), data);
if layout.value == layout::Pointer {
unsafe { llvm::LLVMConstIntToPtr(llval, llty) }
} else {
use crate::llvm;
use crate::llvm::debuginfo::{DIScope, DISubprogram};
use crate::common::CodegenCx;
-use rustc::mir::{Mir, SourceScope};
+use rustc::mir::{Body, SourceScope};
use libc::c_uint;
/// If debuginfo is disabled, the returned vector is empty.
pub fn create_mir_scopes(
cx: &CodegenCx<'ll, '_>,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
debug_context: &FunctionDebugContext<&'ll DISubprogram>,
) -> IndexVec<SourceScope, MirDebugScope<&'ll DIScope>> {
let null_scope = MirDebugScope {
}
fn make_mir_scope(cx: &CodegenCx<'ll, '_>,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
has_variables: &BitSet<SourceScope>,
debug_context: &FunctionDebugContextData<&'ll DISubprogram>,
scope: SourceScope,
instance: Instance<'tcx>,
sig: ty::FnSig<'tcx>,
llfn: &'ll Value,
- mir: &mir::Mir<'_>,
+ mir: &mir::Body<'_>,
) -> FunctionDebugContext<&'ll DISubprogram> {
if self.sess().opts.debuginfo == DebugInfo::None {
return FunctionDebugContext::DebugInfoDisabled;
fn create_mir_scopes(
&self,
- mir: &mir::Mir<'_>,
+ mir: &mir::Body<'_>,
debug_context: &mut FunctionDebugContext<&'ll DISubprogram>,
) -> IndexVec<mir::SourceScope, MirDebugScope<&'ll DIScope>> {
create_scope_map::create_mir_scopes(self, mir, debug_context)
use rustc::dep_graph::cgu_reuse_tracker::CguReuseTracker;
use rustc::middle::cstore::EncodedMetadata;
use rustc::session::config::{self, OutputFilenames, OutputType, Passes, Lto,
- Sanitizer, PgoGenerate};
+ Sanitizer, SwitchWithOptPath};
use rustc::session::Session;
use rustc::util::nodemap::FxHashMap;
use rustc::hir::def_id::{CrateNum, LOCAL_CRATE};
/// Some(level) to optimize binary size, or None to not affect program size.
pub opt_size: Option<config::OptLevel>,
- pub pgo_gen: PgoGenerate,
+ pub pgo_gen: SwitchWithOptPath,
pub pgo_use: Option<PathBuf>,
// Flags indicating which outputs to produce.
opt_level: None,
opt_size: None,
- pgo_gen: PgoGenerate::Disabled,
+ pgo_gen: SwitchWithOptPath::Disabled,
pgo_use: None,
emit_no_opt_bc: false,
let mir = fx.mir;
let mut analyzer = LocalAnalyzer::new(fx);
- analyzer.visit_mir(mir);
+ analyzer.visit_body(mir);
for (index, ty) in mir.local_decls.iter().map(|l| l.ty).enumerate() {
let ty = fx.monomorphize(&ty);
}
}
-pub fn cleanup_kinds<'a, 'tcx>(mir: &mir::Mir<'tcx>) -> IndexVec<mir::BasicBlock, CleanupKind> {
+pub fn cleanup_kinds<'a, 'tcx>(mir: &mir::Body<'tcx>) -> IndexVec<mir::BasicBlock, CleanupKind> {
fn discover_masters<'tcx>(result: &mut IndexVec<mir::BasicBlock, CleanupKind>,
- mir: &mir::Mir<'tcx>) {
+ mir: &mir::Body<'tcx>) {
for (bb, data) in mir.basic_blocks().iter_enumerated() {
match data.terminator().kind {
TerminatorKind::Goto { .. } |
}
fn propagate<'tcx>(result: &mut IndexVec<mir::BasicBlock, CleanupKind>,
- mir: &mir::Mir<'tcx>) {
+ mir: &mir::Body<'tcx>) {
let mut funclet_succs = IndexVec::from_elem(None, mir.basic_blocks());
let mut set_successor = |funclet: mir::BasicBlock, succ| {
use rustc::ty::{self, Ty, TypeFoldable, UpvarSubsts};
use rustc::ty::layout::{TyLayout, HasTyCtxt, FnTypeExt};
-use rustc::mir::{self, Mir};
+use rustc::mir::{self, Body};
use rustc::session::config::DebugInfo;
use rustc_mir::monomorphize::Instance;
use rustc_target::abi::call::{FnType, PassMode, IgnoreMode};
pub struct FunctionCx<'a, 'tcx: 'a, Bx: BuilderMethods<'a, 'tcx>> {
instance: Instance<'tcx>,
- mir: &'a mir::Mir<'tcx>,
+ mir: &'a mir::Body<'tcx>,
debug_context: FunctionDebugContext<Bx::DIScope>,
pub fn codegen_mir<'a, 'tcx: 'a, Bx: BuilderMethods<'a, 'tcx>>(
cx: &'a Bx::CodegenCx,
llfn: Bx::Value,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
instance: Instance<'tcx>,
sig: ty::FnSig<'tcx>,
) {
}
fn create_funclets<'a, 'tcx: 'a, Bx: BuilderMethods<'a, 'tcx>>(
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
bx: &mut Bx,
cleanup_kinds: &IndexVec<mir::BasicBlock, CleanupKind>,
block_bxs: &IndexVec<mir::BasicBlock, Bx::BasicBlock>)
instance: Instance<'tcx>,
sig: ty::FnSig<'tcx>,
llfn: Self::Value,
- mir: &mir::Mir<'_>,
+ mir: &mir::Body<'_>,
) -> FunctionDebugContext<Self::DIScope>;
fn create_mir_scopes(
&self,
- mir: &mir::Mir<'_>,
+ mir: &mir::Body<'_>,
debug_context: &mut FunctionDebugContext<Self::DIScope>,
) -> IndexVec<mir::SourceScope, MirDebugScope<Self::DIScope>>;
fn extend_scope_to_file(
ct: &'tcx ty::Const<'tcx>,
) -> Result<Self::Const, Self::Error> {
// only print integers
- if let ConstValue::Scalar(Scalar::Bits { .. }) = ct.val {
+ if let ConstValue::Scalar(Scalar::Raw { .. }) = ct.val {
if ct.ty.is_integral() {
return self.pretty_print_const(ct);
}
// for ':' and '-'
'-' | ':' => self.path.temp_buf.push('.'),
+ // Avoid segmentation fault on some platforms, see #60925.
+ 'm' if self.path.temp_buf.ends_with(".llv") => self.path.temp_buf.push_str("$6d$"),
+
// These are legal symbols
'a'..='z' | 'A'..='Z' | '0'..='9' | '_' | '.' | '$' => self.path.temp_buf.push(c),
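The new mangling rule above escapes an `m` that would complete the substring "llvm", since symbols containing "llvm" can trigger a segmentation fault on some platforms (#60925). A toy standalone version of that sanitising step (the `sanitize` helper and its fallback arm are hypothetical; only the match arms shown in the diff come from the source):

```rust
// Replace '-' and ':' with '.', escape an 'm' that would complete
// ".llvm" as "$6d$" (hex for 'm'), and pass other legal symbol
// characters through unchanged.
fn sanitize(input: &str) -> String {
    let mut out = String::new();
    for c in input.chars() {
        match c {
            '-' | ':' => out.push('.'),
            'm' if out.ends_with(".llv") => out.push_str("$6d$"),
            'a'..='z' | 'A'..='Z' | '0'..='9' | '_' | '.' | '$' => out.push(c),
            _ => out.push('_'), // assumption: fallback for other characters
        }
    }
    out
}

fn main() {
    // "-llvm" becomes ".llv" + escaped 'm', so "llvm" never appears.
    assert_eq!(sanitize("my-llvm"), "my.llv$6d$");
    assert_eq!(sanitize("foo:bar"), "foo.bar");
}
```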
use rustc::session::Session;
use rustc::ty::{self, Ty, TyCtxt};
use rustc::ty::codec::TyDecoder;
-use rustc::mir::Mir;
+use rustc::mir::Body;
use rustc::util::captures::Captures;
use std::io;
pub fn maybe_get_optimized_mir(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
id: DefIndex)
- -> Option<Mir<'tcx>> {
+ -> Option<Body<'tcx>> {
match self.is_proc_macro(id) {
true => None,
false => self.entry(id).mir.map(|mir| mir.decode((self, tcx))),
self.lazy_seq(param_names.iter().map(|ident| ident.name))
}
- fn encode_optimized_mir(&mut self, def_id: DefId) -> Option<Lazy<mir::Mir<'tcx>>> {
+ fn encode_optimized_mir(&mut self, def_id: DefId) -> Option<Lazy<mir::Body<'tcx>>> {
debug!("EntryBuilder::encode_mir({:?})", def_id);
if self.tcx.mir_keys(LOCAL_CRATE).contains(&def_id) {
let mir = self.tcx.optimized_mir(def_id);
pub predicates: Option<Lazy<ty::GenericPredicates<'tcx>>>,
pub predicates_defined_on: Option<Lazy<ty::GenericPredicates<'tcx>>>,
- pub mir: Option<Lazy<mir::Mir<'tcx>>>,
+ pub mir: Option<Lazy<mir::Body<'tcx>>>,
}
#[derive(Copy, Clone, RustcEncodable, RustcDecodable)]
use crate::dataflow::move_paths::MoveData;
use rustc::mir::traversal;
use rustc::mir::visit::{PlaceContext, Visitor, NonUseContext, MutatingUseContext};
-use rustc::mir::{self, Location, Mir, Local};
+use rustc::mir::{self, Location, Body, Local};
use rustc::ty::{RegionVid, TyCtxt};
use rustc::util::nodemap::{FxHashMap, FxHashSet};
use rustc_data_structures::indexed_vec::IndexVec;
impl LocalsStateAtExit {
fn build(
locals_are_invalidated_at_exit: bool,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
move_data: &MoveData<'tcx>
) -> Self {
struct HasStorageDead(BitSet<Local>);
LocalsStateAtExit::AllAreInvalidated
} else {
let mut has_storage_dead = HasStorageDead(BitSet::new_empty(mir.local_decls.len()));
- has_storage_dead.visit_mir(mir);
+ has_storage_dead.visit_body(mir);
let mut has_storage_dead_or_moved = has_storage_dead.0;
for move_out in &move_data.moves {
if let Some(index) = move_data.base_local(move_out.path) {
impl<'tcx> BorrowSet<'tcx> {
pub fn build(
tcx: TyCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
locals_are_invalidated_at_exit: bool,
move_data: &MoveData<'tcx>
) -> Self {
struct GatherBorrows<'a, 'gcx: 'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
idx_vec: IndexVec<BorrowIndex, BorrowData<'tcx>>,
location_map: FxHashMap<Location, BorrowIndex>,
activation_map: FxHashMap<Location, Vec<BorrowIndex>>,
-use rustc::mir::{BasicBlock, Location, Mir};
+use rustc::mir::{BasicBlock, Location, Body};
use rustc_data_structures::indexed_vec::{Idx, IndexVec};
/// Maps between a MIR Location, which identifies a particular
}
impl LocationTable {
- crate fn new(mir: &Mir<'_>) -> Self {
+ crate fn new(mir: &Body<'_>) -> Self {
let mut num_points = 0;
let statements_before_block = mir.basic_blocks()
.iter()
use rustc::middle::borrowck::SignalledError;
use rustc::mir::{AggregateKind, BasicBlock, BorrowCheckResult, BorrowKind};
use rustc::mir::{
- ClearCrossCrate, Local, Location, Mir, Mutability, Operand, Place, PlaceBase, Static, StaticKind
+ ClearCrossCrate, Local, Location, Body, Mutability, Operand, Place, PlaceBase, Static,
+ StaticKind
};
use rustc::mir::{Field, Projection, ProjectionElem, Rvalue, Statement, StatementKind};
use rustc::mir::{Terminator, TerminatorKind};
}
let opt_closure_req = tcx.infer_ctxt().enter(|infcx| {
- let input_mir: &Mir<'_> = &input_mir.borrow();
+ let input_mir: &Body<'_> = &input_mir.borrow();
do_mir_borrowck(&infcx, input_mir, def_id)
});
debug!("mir_borrowck done");
fn do_mir_borrowck<'a, 'gcx, 'tcx>(
infcx: &InferCtxt<'a, 'gcx, 'tcx>,
- input_mir: &Mir<'gcx>,
+ input_mir: &Body<'gcx>,
def_id: DefId,
) -> BorrowCheckResult<'gcx> {
debug!("do_mir_borrowck(def_id = {:?})", def_id);
// requires first making our own copy of the MIR. This copy will
// be modified (in place) to contain non-lexical lifetimes. It
// will have a lifetime tied to the inference context.
- let mut mir: Mir<'tcx> = input_mir.clone();
+ let mut mir: Body<'tcx> = input_mir.clone();
let free_regions = nll::replace_regions_in_mir(infcx, def_id, param_env, &mut mir);
let mir = &mir; // no further changes
let location_table = &LocationTable::new(mir);
pub struct MirBorrowckCtxt<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
infcx: &'cx InferCtxt<'cx, 'gcx, 'tcx>,
- mir: &'cx Mir<'tcx>,
+ mir: &'cx Body<'tcx>,
mir_def_id: DefId,
move_data: &'cx MoveData<'tcx>,
impl<'cx, 'gcx, 'tcx> DataflowResultsConsumer<'cx, 'tcx> for MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
type FlowState = Flows<'cx, 'gcx, 'tcx>;
- fn mir(&self) -> &'cx Mir<'tcx> {
+ fn mir(&self) -> &'cx Body<'tcx> {
self.mir
}
use rustc::hir;
use rustc::hir::Node;
-use rustc::mir::{self, BindingForm, Constant, ClearCrossCrate, Local, Location, Mir};
+use rustc::mir::{self, BindingForm, Constant, ClearCrossCrate, Local, Location, Body};
use rustc::mir::{
Mutability, Operand, Place, PlaceBase, Projection, ProjectionElem, Static, StaticKind,
};
// by trying (3.), then (2.) and finally falling back on (1.).
fn suggest_ampmut<'cx, 'gcx, 'tcx>(
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
local: Local,
local_decl: &mir::LocalDecl<'tcx>,
opt_ty_info: Option<Span>,
use rustc::infer::InferCtxt;
use rustc::mir::visit::TyContext;
use rustc::mir::visit::Visitor;
-use rustc::mir::{BasicBlock, BasicBlockData, Location, Mir, Place, PlaceBase, Rvalue};
+use rustc::mir::{BasicBlock, BasicBlockData, Location, Body, Place, PlaceBase, Rvalue};
use rustc::mir::{SourceInfo, Statement, Terminator};
use rustc::mir::UserTypeProjection;
use rustc::ty::fold::TypeFoldable;
liveness_constraints: &mut LivenessValues<RegionVid>,
all_facts: &mut Option<AllFacts>,
location_table: &LocationTable,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
borrow_set: &BorrowSet<'tcx>,
) {
let mut cg = ConstraintGeneration {
use crate::borrow_check::nll::ToRegionVid;
use crate::util::liveness::{self, DefUse};
use rustc::mir::visit::{MirVisitable, PlaceContext, Visitor};
-use rustc::mir::{Local, Location, Mir};
+use rustc::mir::{Local, Location, Body};
use rustc::ty::{RegionVid, TyCtxt};
use rustc_data_structures::fx::FxHashSet;
crate fn find<'tcx>(
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
regioncx: &Rc<RegionInferenceContext<'tcx>>,
tcx: TyCtxt<'_, '_, 'tcx>,
region_vid: RegionVid,
}
struct UseFinder<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
- mir: &'cx Mir<'tcx>,
+ mir: &'cx Body<'tcx>,
regioncx: &'cx Rc<RegionInferenceContext<'tcx>>,
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
region_vid: RegionVid,
}
struct DefUseVisitor<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
- mir: &'cx Mir<'tcx>,
+ mir: &'cx Body<'tcx>,
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
region_vid: RegionVid,
def_use_result: Option<DefUseResult>,
use crate::borrow_check::nll::ConstraintDescription;
use crate::borrow_check::{MirBorrowckCtxt, WriteKind};
use rustc::mir::{
- CastKind, ConstraintCategory, FakeReadCause, Local, Location, Mir, Operand, Place, PlaceBase,
+ CastKind, ConstraintCategory, FakeReadCause, Local, Location, Body, Operand, Place, PlaceBase,
Projection, ProjectionElem, Rvalue, Statement, StatementKind, TerminatorKind,
};
use rustc::ty::{self, TyCtxt};
pub(in crate::borrow_check) fn add_explanation_to_diagnostic<'cx, 'gcx, 'tcx>(
&self,
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
err: &mut DiagnosticBuilder<'_>,
borrow_desc: &str,
borrow_span: Option<Span>,
use crate::dataflow::indexes::BorrowIndex;
use rustc::ty::TyCtxt;
use rustc::mir::visit::Visitor;
-use rustc::mir::{BasicBlock, Location, Mir, Place, PlaceBase, Rvalue};
+use rustc::mir::{BasicBlock, Location, Body, Place, PlaceBase, Rvalue};
use rustc::mir::{Statement, StatementKind};
use rustc::mir::TerminatorKind;
use rustc::mir::{Operand, BorrowKind};
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
all_facts: &mut Option<AllFacts>,
location_table: &LocationTable,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
borrow_set: &BorrowSet<'tcx>,
) {
if all_facts.is_none() {
mir,
dominators,
};
- ig.visit_mir(mir);
+ ig.visit_body(mir);
}
}
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
all_facts: &'cx mut AllFacts,
location_table: &'cx LocationTable,
- mir: &'cx Mir<'tcx>,
+ mir: &'cx Body<'tcx>,
dominators: Dominators<BasicBlock>,
borrow_set: &'cx BorrowSet<'tcx>,
}
use crate::borrow_check::Upvar;
use rustc::hir::def_id::DefId;
use rustc::infer::InferCtxt;
-use rustc::mir::{ClosureOutlivesSubject, ClosureRegionRequirements, Mir};
+use rustc::mir::{ClosureOutlivesSubject, ClosureRegionRequirements, Body};
use rustc::ty::{self, RegionKind, RegionVid};
use rustc_errors::Diagnostic;
use std::fmt::Debug;
infcx: &InferCtxt<'cx, 'gcx, 'tcx>,
def_id: DefId,
param_env: ty::ParamEnv<'tcx>,
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
) -> UniversalRegions<'tcx> {
debug!("replace_regions_in_mir(def_id={:?})", def_id);
infcx: &InferCtxt<'cx, 'gcx, 'tcx>,
def_id: DefId,
universal_regions: UniversalRegions<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
location_table: &LocationTable,
param_env: ty::ParamEnv<'gcx>,
fn dump_mir_results<'a, 'gcx, 'tcx>(
infcx: &InferCtxt<'a, 'gcx, 'tcx>,
source: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
regioncx: &RegionInferenceContext<'_>,
closure_region_requirements: &Option<ClosureRegionRequirements<'_>>,
) {
fn dump_annotation<'a, 'gcx, 'tcx>(
infcx: &InferCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
regioncx: &RegionInferenceContext<'tcx>,
closure_region_requirements: &Option<ClosureRegionRequirements<'_>>,
use rustc::infer::error_reporting::nice_region_error::NiceRegionError;
use rustc::infer::InferCtxt;
use rustc::infer::NLLRegionVariableOrigin;
-use rustc::mir::{ConstraintCategory, Location, Mir};
+use rustc::mir::{ConstraintCategory, Location, Body};
use rustc::ty::{self, RegionVid};
use rustc_data_structures::indexed_vec::IndexVec;
use rustc_errors::{Diagnostic, DiagnosticBuilder};
/// path to blame.
fn best_blame_constraint(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
from_region: RegionVid,
target_test: impl Fn(RegionVid) -> bool,
) -> (ConstraintCategory, bool, Span) {
/// Here we would be invoked with `fr = 'a` and `outlived_fr = `'b`.
pub(super) fn report_error(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
infcx: &InferCtxt<'_, '_, 'tcx>,
mir_def_id: DefId,
/// ```
fn report_fnmut_error(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
infcx: &InferCtxt<'_, '_, 'tcx>,
mir_def_id: DefId,
/// ```
fn report_escaping_data_error(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
infcx: &InferCtxt<'_, '_, 'tcx>,
mir_def_id: DefId,
/// ```
fn report_general_error(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
infcx: &InferCtxt<'_, '_, 'tcx>,
mir_def_id: DefId,
crate fn free_region_constraint_info(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
infcx: &InferCtxt<'_, '_, 'tcx>,
// Finds a good span to blame for the fact that `fr1` outlives `fr2`.
crate fn find_outlives_blame_span(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
fr1: RegionVid,
fr2: RegionVid,
) -> (ConstraintCategory, Span) {
fn retrieve_closure_constraint_info(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
constraint: &OutlivesConstraint,
) -> (ConstraintCategory, bool, Span) {
let loc = match constraint.locations {
use rustc::hir::def::{Res, DefKind};
use rustc::hir::def_id::DefId;
use rustc::infer::InferCtxt;
-use rustc::mir::Mir;
+use rustc::mir::Body;
use rustc::ty::subst::{SubstsRef, UnpackedKind};
use rustc::ty::{self, RegionKind, RegionVid, Ty, TyCtxt};
use rustc::ty::print::RegionHighlightMode;
crate fn give_region_a_name(
&self,
infcx: &InferCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
fr: RegionVid,
fn give_name_if_anonymous_region_appears_in_arguments(
&self,
infcx: &InferCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
fr: RegionVid,
counter: &mut usize,
fn give_name_if_we_can_match_hir_ty_from_argument(
&self,
infcx: &InferCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
needle_fr: RegionVid,
argument_ty: Ty<'tcx>,
fn give_name_if_we_cannot_match_hir_ty(
&self,
infcx: &InferCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
needle_fr: RegionVid,
argument_ty: Ty<'tcx>,
counter: &mut usize,
fn give_name_if_anonymous_region_appears_in_output(
&self,
infcx: &InferCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
fr: RegionVid,
counter: &mut usize,
fn give_name_if_anonymous_region_appears_in_yield_ty(
&self,
infcx: &InferCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
fr: RegionVid,
counter: &mut usize,
use crate::borrow_check::nll::region_infer::RegionInferenceContext;
use crate::borrow_check::nll::ToRegionVid;
use crate::borrow_check::Upvar;
-use rustc::mir::{Local, Mir};
+use rustc::mir::{Local, Body};
use rustc::ty::{RegionVid, TyCtxt};
use rustc_data_structures::indexed_vec::Idx;
use syntax::source_map::Span;
crate fn get_var_name_and_span_for_region(
&self,
tcx: TyCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
fr: RegionVid,
) -> Option<(Option<Symbol>, Span)> {
/// declared.
crate fn get_argument_name_and_span_for_region(
&self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
argument_index: usize,
) -> (Option<Symbol>, Span) {
let implicit_inputs = self.universal_regions.defining_ty.implicit_inputs();
use rustc::infer::{InferCtxt, NLLRegionVariableOrigin, RegionVariableOrigin};
use rustc::mir::{
ClosureOutlivesRequirement, ClosureOutlivesSubject, ClosureRegionRequirements,
- ConstraintCategory, Local, Location, Mir,
+ ConstraintCategory, Local, Location, Body,
};
use rustc::ty::{self, subst::SubstsRef, RegionVid, Ty, TyCtxt, TypeFoldable};
use rustc::util::common::{self, ErrorReported};
universal_regions: Rc<UniversalRegions<'tcx>>,
placeholder_indices: Rc<PlaceholderIndices>,
universal_region_relations: Rc<UniversalRegionRelations<'tcx>>,
- _mir: &Mir<'tcx>,
+ _mir: &Body<'tcx>,
outlives_constraints: ConstraintSet,
closure_bounds_mapping: FxHashMap<
Location,
pub(super) fn solve<'gcx>(
&mut self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
errors_buffer: &mut Vec<Diagnostic>,
fn solve_inner<'gcx>(
&mut self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
errors_buffer: &mut Vec<Diagnostic>,
/// for each region variable until all the constraints are
/// satisfied. Note that some values may grow **too** large to be
/// feasible, but we check this later.
- fn propagate_constraints(&mut self, _mir: &Mir<'tcx>) {
+ fn propagate_constraints(&mut self, _mir: &Body<'tcx>) {
debug!("propagate_constraints()");
debug!("propagate_constraints: constraints={:#?}", {
fn check_type_tests<'gcx>(
&self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
mut propagated_outlives_requirements: Option<&mut Vec<ClosureOutlivesRequirement<'gcx>>>,
errors_buffer: &mut Vec<Diagnostic>,
fn try_promote_type_test<'gcx>(
&self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
type_test: &TypeTest<'tcx>,
propagated_outlives_requirements: &mut Vec<ClosureOutlivesRequirement<'gcx>>,
) -> bool {
fn eval_verify_bound(
&self,
tcx: TyCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
generic_ty: Ty<'tcx>,
lower_bound: RegionVid,
verify_bound: &VerifyBound<'tcx>,
fn eval_if_eq(
&self,
tcx: TyCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
generic_ty: Ty<'tcx>,
lower_bound: RegionVid,
test_ty: Ty<'tcx>,
// Evaluate whether `sup_region: sub_region @ point`.
fn eval_outlives(
&self,
- _mir: &Mir<'tcx>,
+ _mir: &Body<'tcx>,
sup_region: RegionVid,
sub_region: RegionVid,
) -> bool {
fn check_universal_regions<'gcx>(
&self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
mut propagated_outlives_requirements: Option<&mut Vec<ClosureOutlivesRequirement<'gcx>>>,
fn check_universal_region<'gcx>(
&self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
longer_fr: RegionVid,
longer_fr: RegionVid,
shorter_fr: RegionVid,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
upvars: &[Upvar],
mir_def_id: DefId,
propagated_outlives_requirements: &mut Option<&mut Vec<ClosureOutlivesRequirement<'gcx>>>,
fn check_bound_universal_region<'gcx>(
&self,
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
_mir_def_id: DefId,
longer_fr: RegionVid,
placeholder: ty::PlaceholderRegion,
-use rustc::mir::{BasicBlock, Location, Mir};
+use rustc::mir::{BasicBlock, Location, Body};
use rustc::ty::{self, RegionVid};
use rustc_data_structures::bit_set::{HybridBitSet, SparseBitMatrix};
use rustc_data_structures::fx::FxHashMap;
}
impl RegionValueElements {
- crate fn new(mir: &Mir<'_>) -> Self {
+ crate fn new(mir: &Body<'_>) -> Self {
let mut num_points = 0;
let statements_before_block: IndexVec<BasicBlock, usize> = mir.basic_blocks()
.iter()
/// Pushes all predecessors of `index` onto `stack`.
crate fn push_predecessors(
&self,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
index: PointIndex,
stack: &mut Vec<PointIndex>,
) {
use rustc::ty::subst::SubstsRef;
use rustc::ty::{self, ClosureSubsts, GeneratorSubsts, Ty, TypeFoldable};
-use rustc::mir::{Location, Mir};
+use rustc::mir::{Location, Body};
use rustc::mir::visit::{MutVisitor, TyContext};
use rustc::infer::{InferCtxt, NLLRegionVariableOrigin};
/// Replaces all free regions appearing in the MIR with fresh
/// inference variables, returning the number of variables created.
-pub fn renumber_mir<'tcx>(infcx: &InferCtxt<'_, '_, 'tcx>, mir: &mut Mir<'tcx>) {
+pub fn renumber_mir<'tcx>(infcx: &InferCtxt<'_, '_, 'tcx>, mir: &mut Body<'tcx>) {
debug!("renumber_mir()");
debug!("renumber_mir: mir.arg_count={:?}", mir.arg_count);
let mut visitor = NLLVisitor { infcx };
- visitor.visit_mir(mir);
+ visitor.visit_body(mir);
}
/// Replaces all regions appearing in `value` with fresh inference
}
impl<'a, 'gcx, 'tcx> MutVisitor<'tcx> for NLLVisitor<'a, 'gcx, 'tcx> {
- fn visit_mir(&mut self, mir: &mut Mir<'tcx>) {
+ fn visit_body(&mut self, mir: &mut Body<'tcx>) {
for promoted in mir.promoted.iter_mut() {
- self.visit_mir(promoted);
+ self.visit_body(promoted);
}
- self.super_mir(mir);
+ self.super_body(mir);
}
fn visit_ty(&mut self, ty: &mut Ty<'tcx>, ty_context: TyContext) {
impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> {
pub(super) fn equate_inputs_and_outputs(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
universal_regions: &UniversalRegions<'tcx>,
normalized_inputs_and_output: &[Ty<'tcx>],
) {
use crate::borrow_check::nll::region_infer::values::{PointIndex, RegionValueElements};
use crate::util::liveness::{categorize, DefUse};
use rustc::mir::visit::{PlaceContext, Visitor};
-use rustc::mir::{Local, Location, Mir};
+use rustc::mir::{Local, Location, Body};
use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use rustc_data_structures::vec_linked_list as vll;
crate fn build(
live_locals: &Vec<Local>,
elements: &RegionValueElements,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
) -> Self {
let nones = IndexVec::from_elem_n(None, mir.local_decls.len());
let mut local_use_map = LocalUseMap {
elements,
locals_with_use_data,
}
- .visit_mir(mir);
+ .visit_body(mir);
local_use_map
}
use crate::dataflow::move_paths::MoveData;
use crate::dataflow::FlowAtLocation;
use crate::dataflow::MaybeInitializedPlaces;
-use rustc::mir::{Local, Mir};
+use rustc::mir::{Local, Body};
use rustc::ty::{RegionVid, TyCtxt};
use rustc_data_structures::fx::FxHashSet;
use std::rc::Rc;
/// performed before
pub(super) fn generate<'gcx, 'tcx>(
typeck: &mut TypeChecker<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
elements: &Rc<RegionValueElements>,
flow_inits: &mut FlowAtLocation<'tcx, MaybeInitializedPlaces<'_, 'gcx, 'tcx>>,
move_data: &MoveData<'tcx>,
fn compute_live_locals(
tcx: TyCtxt<'_, '_, 'tcx>,
free_regions: &FxHashSet<RegionVid>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
) -> Vec<Local> {
let live_locals: Vec<Local> = mir
.local_decls
use crate::dataflow::move_paths::MoveData;
use crate::dataflow::{FlowAtLocation, FlowsAtLocation, MaybeInitializedPlaces};
use rustc::infer::canonical::QueryRegionConstraint;
-use rustc::mir::{BasicBlock, ConstraintCategory, Local, Location, Mir};
+use rustc::mir::{BasicBlock, ConstraintCategory, Local, Location, Body};
use rustc::traits::query::dropck_outlives::DropckOutlivesResult;
use rustc::traits::query::type_op::outlives::DropckOutlives;
use rustc::traits::query::type_op::TypeOp;
/// this respects `#[may_dangle]` annotations).
pub(super) fn trace(
typeck: &mut TypeChecker<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
elements: &Rc<RegionValueElements>,
flow_inits: &mut FlowAtLocation<'tcx, MaybeInitializedPlaces<'_, 'gcx, 'tcx>>,
move_data: &MoveData<'tcx>,
elements: &'me RegionValueElements,
/// MIR we are analyzing.
- mir: &'me Mir<'tcx>,
+ mir: &'me Body<'tcx>,
/// Mapping to/from the various indices used for initialization tracking.
move_data: &'me MoveData<'tcx>,
pub(crate) fn type_check<'gcx, 'tcx>(
infcx: &InferCtxt<'_, 'gcx, 'tcx>,
param_env: ty::ParamEnv<'gcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mir_def_id: DefId,
universal_regions: &Rc<UniversalRegions<'tcx>>,
location_table: &LocationTable,
infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
mir_def_id: DefId,
param_env: ty::ParamEnv<'gcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
region_bound_pairs: &'a RegionBoundPairs<'tcx>,
implicit_region_bound: ty::Region<'tcx>,
borrowck_context: &'a mut BorrowCheckContext<'a, 'tcx>,
);
let errors_reported = {
let mut verifier = TypeVerifier::new(&mut checker, mir);
- verifier.visit_mir(mir);
+ verifier.visit_body(mir);
verifier.errors_reported
};
/// is a problem.
struct TypeVerifier<'a, 'b: 'a, 'gcx: 'tcx, 'tcx: 'b> {
cx: &'a mut TypeChecker<'b, 'gcx, 'tcx>,
- mir: &'b Mir<'tcx>,
+ mir: &'b Body<'tcx>,
last_span: Span,
mir_def_id: DefId,
errors_reported: bool,
}
}
- fn visit_mir(&mut self, mir: &Mir<'tcx>) {
+ fn visit_body(&mut self, mir: &Body<'tcx>) {
self.sanitize_type(&"return type", mir.return_ty());
for local_decl in &mir.local_decls {
self.sanitize_type(local_decl, local_decl.ty);
if self.errors_reported {
return;
}
- self.super_mir(mir);
+ self.super_body(mir);
}
}
impl<'a, 'b, 'gcx, 'tcx> TypeVerifier<'a, 'b, 'gcx, 'tcx> {
- fn new(cx: &'a mut TypeChecker<'b, 'gcx, 'tcx>, mir: &'b Mir<'tcx>) -> Self {
+ fn new(cx: &'a mut TypeChecker<'b, 'gcx, 'tcx>, mir: &'b Body<'tcx>) -> Self {
TypeVerifier {
mir,
mir_def_id: cx.mir_def_id,
})
}
- fn sanitize_promoted(&mut self, promoted_mir: &'b Mir<'tcx>, location: Location) {
+ fn sanitize_promoted(&mut self, promoted_mir: &'b Body<'tcx>, location: Location) {
// Determine the constraints from the promoted MIR by running the type
// checker on the promoted MIR, then transfer the constraints back to
// the main MIR, changing the locations to the provided location.
&mut closure_bounds
);
- self.visit_mir(promoted_mir);
+ self.visit_body(promoted_mir);
if !self.errors_reported {
// if verifier failed, don't do further checks to avoid ICEs
}
/// Gets a span representing the location.
- pub fn span(&self, mir: &Mir<'_>) -> Span {
+ pub fn span(&self, mir: &Body<'_>) -> Span {
match self {
Locations::All(span) => *span,
Locations::Single(l) => mir.source_info(*l).span,
impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> {
fn new(
infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mir_def_id: DefId,
param_env: ty::ParamEnv<'gcx>,
region_bound_pairs: &'a RegionBoundPairs<'tcx>,
debug!(
"eq_opaque_type_and_type: concrete_ty={:?}={:?} opaque_defn_ty={:?}",
opaque_decl.concrete_ty,
- infcx.resolve_type_vars_if_possible(&opaque_decl.concrete_ty),
+ infcx.resolve_vars_if_possible(&opaque_decl.concrete_ty),
opaque_defn_ty
);
obligations.add(infcx
self.infcx.tcx
}
- fn check_stmt(&mut self, mir: &Mir<'tcx>, stmt: &Statement<'tcx>, location: Location) {
+ fn check_stmt(&mut self, mir: &Body<'tcx>, stmt: &Statement<'tcx>, location: Location) {
debug!("check_stmt: {:?}", stmt);
let tcx = self.tcx();
match stmt.kind {
fn check_terminator(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
term: &Terminator<'tcx>,
term_location: Location,
) {
fn check_call_dest(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
term: &Terminator<'tcx>,
sig: &ty::FnSig<'tcx>,
destination: &Option<(Place<'tcx>, BasicBlock)>,
fn check_call_inputs(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
term: &Terminator<'tcx>,
sig: &ty::FnSig<'tcx>,
args: &[Operand<'tcx>],
}
}
- fn check_iscleanup(&mut self, mir: &Mir<'tcx>, block_data: &BasicBlockData<'tcx>) {
+ fn check_iscleanup(&mut self, mir: &Body<'tcx>, block_data: &BasicBlockData<'tcx>) {
let is_cleanup = block_data.is_cleanup;
self.last_span = block_data.terminator().source_info.span;
match block_data.terminator().kind {
fn assert_iscleanup(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
ctxt: &dyn fmt::Debug,
bb: BasicBlock,
iscleanuppad: bool,
}
}
- fn check_local(&mut self, mir: &Mir<'tcx>, local: Local, local_decl: &LocalDecl<'tcx>) {
+ fn check_local(&mut self, mir: &Body<'tcx>, local: Local, local_decl: &LocalDecl<'tcx>) {
match mir.local_kind(local) {
LocalKind::ReturnPointer | LocalKind::Arg => {
// return values of normal functions are required to be
}
}
- fn check_rvalue(&mut self, mir: &Mir<'tcx>, rvalue: &Rvalue<'tcx>, location: Location) {
+ fn check_rvalue(&mut self, mir: &Body<'tcx>, rvalue: &Rvalue<'tcx>, location: Location) {
let tcx = self.tcx();
match rvalue {
fn check_aggregate_rvalue(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
rvalue: &Rvalue<'tcx>,
aggregate_kind: &AggregateKind<'tcx>,
operands: &[Operand<'tcx>],
/// - `borrowed_place`: the place `P` being borrowed
fn add_reborrow_constraint(
&mut self,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
location: Location,
borrow_region: ty::Region<'tcx>,
borrowed_place: &Place<'tcx>,
})
}
- fn typeck_mir(&mut self, mir: &Mir<'tcx>) {
+ fn typeck_mir(&mut self, mir: &Body<'tcx>) {
self.last_span = mir.span;
debug!("run_on_mir: {:?}", mir.span);
use crate::borrow_check::places_conflict;
use crate::borrow_check::AccessDepth;
use crate::dataflow::indexes::BorrowIndex;
-use rustc::mir::{BasicBlock, Location, Mir, Place, PlaceBase};
+use rustc::mir::{BasicBlock, Location, Body, Place, PlaceBase};
use rustc::mir::{ProjectionElem, BorrowKind};
use rustc::ty::TyCtxt;
use rustc_data_structures::graph::dominators::Dominators;
pub(super) fn each_borrow_involving_path<'a, 'tcx, 'gcx: 'tcx, F, I, S> (
s: &mut S,
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
_location: Location,
access_place: (AccessDepth, &Place<'tcx>),
borrow_set: &BorrowSet<'tcx>,
use rustc::hir;
use rustc::mir::ProjectionElem;
-use rustc::mir::{Mir, Place, PlaceBase, Mutability, Static, StaticKind};
+use rustc::mir::{Body, Place, PlaceBase, Mutability, Static, StaticKind};
use rustc::ty::{self, TyCtxt};
use crate::borrow_check::borrow_set::LocalsStateAtExit;
fn ignore_borrow(
&self,
tcx: TyCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
locals_state_at_exit: &LocalsStateAtExit,
) -> bool;
}
fn ignore_borrow(
&self,
tcx: TyCtxt<'_, '_, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
locals_state_at_exit: &LocalsStateAtExit,
) -> bool {
self.iterate(|place_base, place_projection| {
use crate::borrow_check::{Deep, Shallow, AccessDepth};
use rustc::hir;
use rustc::mir::{
- BorrowKind, Mir, Place, PlaceBase, Projection, ProjectionElem, ProjectionsIter,
+ BorrowKind, Body, Place, PlaceBase, Projection, ProjectionElem, ProjectionsIter,
StaticKind
};
use rustc::ty::{self, TyCtxt};
/// dataflow).
crate fn places_conflict<'gcx, 'tcx>(
tcx: TyCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
borrow_place: &Place<'tcx>,
access_place: &Place<'tcx>,
bias: PlaceConflictBias,
/// order to make the conservative choice and preserve soundness.
pub(super) fn borrow_conflicts_with_place<'gcx, 'tcx>(
tcx: TyCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
borrow_place: &Place<'tcx>,
borrow_kind: BorrowKind,
access_place: &Place<'tcx>,
fn place_components_conflict<'gcx, 'tcx>(
tcx: TyCtxt<'_, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
borrow_projections: (&PlaceBase<'tcx>, ProjectionsIter<'_, 'tcx>),
borrow_kind: BorrowKind,
access_projections: (&PlaceBase<'tcx>, ProjectionsIter<'_, 'tcx>),
// between `elem1` and `elem2`.
fn place_projection_conflict<'a, 'gcx: 'tcx, 'tcx>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
pi1: &Projection<'tcx>,
pi2: &Projection<'tcx>,
bias: PlaceConflictBias,
use rustc::hir;
use rustc::ty::{self, TyCtxt};
-use rustc::mir::{Mir, Place, PlaceBase, ProjectionElem};
+use rustc::mir::{Body, Place, PlaceBase, ProjectionElem};
pub trait IsPrefixOf<'tcx> {
fn is_prefix_of(&self, other: &Place<'tcx>) -> bool;
pub(super) struct Prefixes<'cx, 'gcx: 'tcx, 'tcx: 'cx> {
- mir: &'cx Mir<'tcx>,
+ mir: &'cx Body<'tcx>,
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
kind: PrefixSet,
next: Option<&'cx Place<'tcx>>,
never_initialized_mut_locals: &mut never_initialized_mut_locals,
mbcx: self,
};
- visitor.visit_mir(visitor.mbcx.mir);
+ visitor.visit_body(visitor.mbcx.mir);
}
// Take the union of the existed `used_mut` set with those variables we've found were
use super::lints;
/// Construct the MIR for a given `DefId`.
-pub fn mir_build<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> Mir<'tcx> {
+pub fn mir_build<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> Body<'tcx> {
let id = tcx.hir().as_local_hir_id(def_id).unwrap();
// Figure out what primary body this item has.
build::construct_const(cx, body_id, return_ty, return_ty_span)
};
- // Convert the Mir to global types.
+ // Convert the `mir::Body` to global types.
let mut globalizer = GlobalizeMir {
tcx,
span: mir.span
};
- globalizer.visit_mir(&mut mir);
+ globalizer.visit_body(&mut mir);
let mir = unsafe {
- mem::transmute::<Mir<'_>, Mir<'tcx>>(mir)
+ mem::transmute::<Body<'_>, Body<'tcx>>(mir)
};
mir_util::dump_mir(tcx, None, "mir_map", &0,
fn create_constructor_shim<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
ctor_id: hir::HirId,
v: &'tcx hir::VariantData)
- -> Mir<'tcx>
+ -> Body<'tcx>
{
let span = tcx.hir().span_by_hir_id(ctor_id);
if let hir::VariantData::Tuple(ref fields, ctor_id) = *v {
tcx.infer_ctxt().enter(|infcx| {
let mut mir = shim::build_adt_ctor(&infcx, ctor_id, fields, span);
- // Convert the Mir to global types.
+ // Convert the `mir::Body` to global types.
let tcx = infcx.tcx.global_tcx();
let mut globalizer = GlobalizeMir {
tcx,
span: mir.span
};
- globalizer.visit_mir(&mut mir);
+ globalizer.visit_body(&mut mir);
let mir = unsafe {
- mem::transmute::<Mir<'_>, Mir<'tcx>>(mir)
+ mem::transmute::<Body<'_>, Body<'tcx>>(mir)
};
mir_util::dump_mir(tcx, None, "mir_map", &0,
yield_ty: Option<Ty<'gcx>>,
return_ty_span: Span,
body: &'gcx hir::Body)
- -> Mir<'tcx>
+ -> Body<'tcx>
where A: Iterator<Item=ArgInfo<'gcx>>
{
let arguments: Vec<_> = arguments.collect();
body_id: hir::BodyId,
const_ty: Ty<'tcx>,
const_ty_span: Span,
-) -> Mir<'tcx> {
+) -> Body<'tcx> {
let tcx = hir.tcx();
let owner_id = tcx.hir().body_owner(body_id);
let span = tcx.hir().span(owner_id);
fn construct_error<'a, 'gcx, 'tcx>(hir: Cx<'a, 'gcx, 'tcx>,
body_id: hir::BodyId)
- -> Mir<'tcx> {
+ -> Body<'tcx> {
let owner_id = hir.tcx().hir().body_owner(body_id);
let span = hir.tcx().hir().span(owner_id);
let ty = hir.tcx().types.err;
fn finish(self,
yield_ty: Option<Ty<'tcx>>)
- -> Mir<'tcx> {
+ -> Body<'tcx> {
for (index, block) in self.cfg.basic_blocks.iter().enumerate() {
if block.terminator.is_none() {
span_bug!(self.fn_span, "no terminator on block {:?}", index);
}
}
- Mir::new(
+ Body::new(
self.cfg.basic_blocks,
self.source_scopes,
ClearCrossCrate::Set(self.source_scope_local_data),
pub(crate) fn eval_promoted<'a, 'mir, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
cid: GlobalId<'tcx>,
- mir: &'mir mir::Mir<'tcx>,
+ mir: &'mir mir::Body<'tcx>,
param_env: ty::ParamEnv<'tcx>,
) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
let span = tcx.def_span(cid.instance.def_id());
ecx.tcx.alloc_map.lock().unwrap_memory(ptr.alloc_id),
ptr.offset.bytes(),
),
- Scalar::Bits { .. } => (
+ Scalar::Raw { .. } => (
ecx.tcx.intern_const_alloc(Allocation::from_byte_aligned_bytes(b"", ())),
0,
),
fn eval_body_using_ecx<'mir, 'tcx>(
ecx: &mut CompileTimeEvalContext<'_, 'mir, 'tcx>,
cid: GlobalId<'tcx>,
- mir: &'mir mir::Mir<'tcx>,
+ mir: &'mir mir::Body<'tcx>,
param_env: ty::ParamEnv<'tcx>,
) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
debug!("eval_body_using_ecx: {:?}, {:?}", cid, param_env);
args: &[OpTy<'tcx>],
dest: Option<PlaceTy<'tcx>>,
ret: Option<mir::BasicBlock>,
- ) -> EvalResult<'tcx, Option<&'mir mir::Mir<'tcx>>> {
+ ) -> EvalResult<'tcx, Option<&'mir mir::Body<'tcx>>> {
debug!("eval_fn_call: {:?}", instance);
// Only check non-glue functions
if let ty::InstanceDef::Item(def_id) = instance.def {
-use rustc::mir::{self, Mir, Location};
+use rustc::mir::{self, Body, Location};
use rustc::ty::{self, TyCtxt};
use crate::util::elaborate_drops::DropFlagState;
//
// FIXME: we have to do something for moving slice patterns.
fn place_contents_drop_state_cannot_differ<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
place: &mir::Place<'tcx>) -> bool {
let ty = place.ty(mir, tcx).ty;
match ty.sty {
pub(crate) fn on_lookup_result_bits<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
move_data: &MoveData<'tcx>,
lookup_result: LookupResult,
each_child: F)
pub(crate) fn on_all_children_bits<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
move_data: &MoveData<'tcx>,
move_path_index: MovePathIndex,
mut each_child: F)
{
fn is_terminal_path<'a, 'gcx, 'tcx>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
move_data: &MoveData<'tcx>,
path: MovePathIndex) -> bool
{
fn on_all_children_bits<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
move_data: &MoveData<'tcx>,
move_path_index: MovePathIndex,
each_child: &mut F)
pub(crate) fn on_all_drop_children_bits<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
ctxt: &MoveDataParamEnv<'gcx, 'tcx>,
path: MovePathIndex,
mut each_child: F)
pub(crate) fn drop_flag_effects_for_function_entry<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
ctxt: &MoveDataParamEnv<'gcx, 'tcx>,
mut callback: F)
where F: FnMut(MovePathIndex, DropFlagState)
pub(crate) fn drop_flag_effects_for_location<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
ctxt: &MoveDataParamEnv<'gcx, 'tcx>,
loc: Location,
mut callback: F)
pub(crate) fn for_location_inits<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
move_data: &MoveData<'tcx>,
loc: Location,
mut callback: F)
//! Hook into libgraphviz for rendering dataflow graphs for MIR.
use rustc::hir::def_id::DefId;
-use rustc::mir::{BasicBlock, Mir};
+use rustc::mir::{BasicBlock, Body};
use std::fs;
use std::io;
pub trait MirWithFlowState<'tcx> {
type BD: BitDenotation<'tcx>;
fn def_id(&self) -> DefId;
- fn mir(&self) -> &Mir<'tcx>;
+ fn mir(&self) -> &Body<'tcx>;
fn flow_state(&self) -> &DataflowState<'tcx, Self::BD>;
}
{
type BD = BD;
fn def_id(&self) -> DefId { self.def_id }
- fn mir(&self) -> &Mir<'tcx> { self.flow_state.mir() }
+ fn mir(&self) -> &Body<'tcx> { self.flow_state.mir() }
fn flow_state(&self) -> &DataflowState<'tcx, Self::BD> { &self.flow_state.flow_state }
}
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub struct Edge { source: BasicBlock, index: usize }
-fn outgoing(mir: &Mir<'_>, bb: BasicBlock) -> Vec<Edge> {
+fn outgoing(mir: &Body<'_>, bb: BasicBlock) -> Vec<Edge> {
(0..mir[bb].terminator().successors().count())
.map(|index| Edge { source: bb, index: index}).collect()
}
n: &Node,
w: &mut W,
block: BasicBlock,
- mir: &Mir<'_>) -> io::Result<()> {
+ mir: &Body<'_>) -> io::Result<()> {
// Header rows
const HDRS: [&str; 4] = ["ENTRY", "MIR", "BLOCK GENS", "BLOCK KILLS"];
const HDR_FMT: &str = "bgcolor=\"grey\"";
n: &Node,
w: &mut W,
block: BasicBlock,
- mir: &Mir<'_>)
+ mir: &Body<'_>)
-> io::Result<()> {
let i = n.index();
n: &Node,
w: &mut W,
block: BasicBlock,
- mir: &Mir<'_>)
+ mir: &Body<'_>)
-> io::Result<()> {
let i = n.index();
/// immovable generators.
#[derive(Copy, Clone)]
pub struct HaveBeenBorrowedLocals<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
}
impl<'a, 'tcx: 'a> HaveBeenBorrowedLocals<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>)
+ pub fn new(mir: &'a Body<'tcx>)
-> Self {
HaveBeenBorrowedLocals { mir }
}
- pub fn mir(&self) -> &Mir<'tcx> {
+ pub fn mir(&self) -> &Body<'tcx> {
self.mir
}
}
use crate::borrow_check::borrow_set::{BorrowSet, BorrowData};
use crate::borrow_check::place_ext::PlaceExt;
-use rustc::mir::{self, Location, Place, PlaceBase, Mir};
+use rustc::mir::{self, Location, Place, PlaceBase, Body};
use rustc::ty::TyCtxt;
use rustc::ty::RegionVid;
/// borrows in compact bitvectors.
pub struct Borrows<'a, 'gcx: 'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
borrow_set: Rc<BorrowSet<'tcx>>,
borrows_out_of_scope_at_location: FxHashMap<Location, Vec<BorrowIndex>>,
}
fn precompute_borrows_out_of_scope<'tcx>(
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
regioncx: &Rc<RegionInferenceContext<'tcx>>,
borrows_out_of_scope_at_location: &mut FxHashMap<Location, Vec<BorrowIndex>>,
borrow_index: BorrowIndex,
impl<'a, 'gcx, 'tcx> Borrows<'a, 'gcx, 'tcx> {
crate fn new(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
nonlexical_regioncx: Rc<RegionInferenceContext<'tcx>>,
borrow_set: &Rc<BorrowSet<'tcx>>,
) -> Self {
//! zero-sized structure.
use rustc::ty::TyCtxt;
-use rustc::mir::{self, Mir, Location};
+use rustc::mir::{self, Body, Location};
use rustc_data_structures::bit_set::{BitSet, BitSetOperator};
use rustc_data_structures::indexed_vec::Idx;
/// places that would require a dynamic drop-flag at that statement.
pub struct MaybeInitializedPlaces<'a, 'gcx: 'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>,
}
impl<'a, 'gcx: 'tcx, 'tcx> MaybeInitializedPlaces<'a, 'gcx, 'tcx> {
pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>)
-> Self
{
/// places that would require a dynamic drop-flag at that statement.
pub struct MaybeUninitializedPlaces<'a, 'gcx: 'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>,
}
impl<'a, 'gcx, 'tcx> MaybeUninitializedPlaces<'a, 'gcx, 'tcx> {
pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>)
-> Self
{
/// that would require a dynamic drop-flag at that statement.
pub struct DefinitelyInitializedPlaces<'a, 'gcx: 'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>,
}
impl<'a, 'gcx, 'tcx: 'a> DefinitelyInitializedPlaces<'a, 'gcx, 'tcx> {
pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>)
-> Self
{
/// ```
pub struct EverInitializedPlaces<'a, 'gcx: 'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>,
}
impl<'a, 'gcx: 'tcx, 'tcx: 'a> EverInitializedPlaces<'a, 'gcx, 'tcx> {
pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mdpe: &'a MoveDataParamEnv<'gcx, 'tcx>)
-> Self
{
#[derive(Copy, Clone)]
pub struct MaybeStorageLive<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
}
impl<'a, 'tcx: 'a> MaybeStorageLive<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>)
+ pub fn new(mir: &'a Body<'tcx>)
-> Self {
MaybeStorageLive { mir }
}
- pub fn mir(&self) -> &Mir<'tcx> {
+ pub fn mir(&self) -> &Body<'tcx> {
self.mir
}
}
use rustc::hir::def_id::DefId;
use rustc::ty::{self, TyCtxt};
-use rustc::mir::{self, Mir, BasicBlock, BasicBlockData, Location, Statement, Terminator};
+use rustc::mir::{self, Body, BasicBlock, BasicBlockData, Location, Statement, Terminator};
use rustc::mir::traversal;
use rustc::session::Session;
}
pub(crate) fn do_dataflow<'a, 'gcx, 'tcx, BD, P>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
def_id: DefId,
attributes: &[ast::Attribute],
dead_unwinds: &BitSet<BasicBlock>,
// Delegated Hooks: Provide access to the MIR and process the flow state.
- fn mir(&self) -> &'a Mir<'tcx>;
+ fn mir(&self) -> &'a Body<'tcx>;
}
pub fn state_for_location<'tcx, T: BitDenotation<'tcx>>(loc: Location,
analysis: &T,
result: &DataflowResults<'tcx, T>,
- mir: &Mir<'tcx>)
+ mir: &Body<'tcx>)
-> BitSet<T::Idx> {
let mut on_entry = result.sets().on_entry_set_for(loc.block.index()).to_owned();
let mut kill_set = on_entry.to_hybrid();
{
flow_state: DataflowState<'tcx, O>,
dead_unwinds: &'a BitSet<mir::BasicBlock>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
}
impl<'a, 'tcx: 'a, O> DataflowAnalysis<'a, 'tcx, O> where O: BitDenotation<'tcx>
DataflowResults(self.flow_state)
}
- pub fn mir(&self) -> &'a Mir<'tcx> { self.mir }
+ pub fn mir(&self) -> &'a Body<'tcx> { self.mir }
}
pub struct DataflowResults<'tcx, O>(pub(crate) DataflowState<'tcx, O>) where O: BitDenotation<'tcx>;
impl<'a, 'tcx, D> DataflowAnalysis<'a, 'tcx, D> where D: BitDenotation<'tcx>
{
- pub fn new(mir: &'a Mir<'tcx>,
+ pub fn new(mir: &'a Body<'tcx>,
dead_unwinds: &'a BitSet<mir::BasicBlock>,
denotation: D) -> Self where D: InitialFlow {
let bits_per_block = denotation.bits_per_block();
use super::IllegalMoveOriginKind::*;
struct MoveDataBuilder<'a, 'gcx: 'tcx, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
tcx: TyCtxt<'a, 'gcx, 'tcx>,
data: MoveData<'tcx>,
errors: Vec<(Place<'tcx>, MoveError<'tcx>)>,
}
impl<'a, 'gcx, 'tcx> MoveDataBuilder<'a, 'gcx, 'tcx> {
- fn new(mir: &'a Mir<'tcx>, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Self {
+ fn new(mir: &'a Body<'tcx>, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Self {
let mut move_paths = IndexVec::new();
let mut path_map = IndexVec::new();
let mut init_path_map = IndexVec::new();
}
pub(super) fn gather_moves<'a, 'gcx, 'tcx>(
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
tcx: TyCtxt<'a, 'gcx, 'tcx>
) -> Result<MoveData<'tcx>, (MoveData<'tcx>, Vec<(Place<'tcx>, MoveError<'tcx>)>)> {
let mut builder = MoveDataBuilder::new(mir, tcx);
}
impl<T> LocationMap<T> where T: Default + Clone {
- fn new(mir: &Mir<'_>) -> Self {
+ fn new(mir: &Body<'_>) -> Self {
LocationMap {
map: mir.basic_blocks().iter().map(|block| {
vec![T::default(); block.statements.len()+1]
}
impl Init {
- crate fn span<'gcx>(&self, mir: &Mir<'gcx>) -> Span {
+ crate fn span<'gcx>(&self, mir: &Body<'gcx>) -> Span {
match self.location {
InitLocation::Argument(local) => mir.local_decls[local].source_info.span,
InitLocation::Statement(location) => mir.source_info(location).span,
}
impl<'a, 'gcx, 'tcx> MoveData<'tcx> {
- pub fn gather_moves(mir: &Mir<'tcx>, tcx: TyCtxt<'a, 'gcx, 'tcx>)
+ pub fn gather_moves(mir: &Body<'tcx>, tcx: TyCtxt<'a, 'gcx, 'tcx>)
-> Result<Self, (Self, Vec<(Place<'tcx>, MoveError<'tcx>)>)> {
builder::gather_moves(mir, tcx)
}
use syntax::ast;
-use rustc::ty::{self, Ty, TyCtxt, ParamEnv};
+use rustc::ty::{self, Ty, TyCtxt, ParamEnv, layout::Size};
use syntax_pos::symbol::Symbol;
use rustc::mir::interpret::{ConstValue, Scalar};
trace!("trunc {} with size {} and shift {}", n, width.bits(), 128 - width.bits());
let result = truncate(n, width);
trace!("trunc result: {}", result);
- Ok(ConstValue::Scalar(Scalar::Bits {
- bits: result,
- size: width.bytes() as u8,
- }))
+ Ok(ConstValue::Scalar(Scalar::from_uint(result, width)))
};
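For context on the hunk above: it replaces the hand-built `Scalar::Bits { bits, size }` with `Scalar::from_uint(result, width)`, and the `truncate(n, width)` call it keeps just zeroes every bit above the target width. A standalone sketch of that masking, using a hypothetical free function rather than rustc's actual `truncate(value, Size)` helper:

```rust
/// Keep only the low `bits` bits of `value`. Hypothetical stand-in for
/// rustc's `truncate` helper, which takes a `Size` instead of a bit count.
fn truncate(value: u128, bits: u32) -> u128 {
    if bits == 128 {
        return value;
    }
    let shift = 128 - bits;
    // Shift the high bits out the top, then shift back down:
    // everything above `bits` is discarded, the low bits survive.
    (value << shift) >> shift
}

fn main() {
    // 0x1FF truncated to 8 bits keeps only the low byte.
    assert_eq!(truncate(0x1FF, 8), 0xFF);
    // A value that already fits is unchanged.
    assert_eq!(truncate(42, 16), 42);
    assert_eq!(truncate(u128::MAX, 1), 1);
}
```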
use rustc::mir::interpret::*;
let id = tcx.allocate_bytes(data);
ConstValue::Scalar(Scalar::Ptr(id.into()))
},
- LitKind::Byte(n) => ConstValue::Scalar(Scalar::Bits {
- bits: n as u128,
- size: 1,
- }),
+ LitKind::Byte(n) => ConstValue::Scalar(Scalar::from_uint(n, Size::from_bytes(1))),
LitKind::Int(n, _) if neg => {
let n = n as i128;
let n = n.overflowing_neg().0;
let num = num.as_str();
use rustc_apfloat::ieee::{Single, Double};
use rustc_apfloat::Float;
- let (bits, size) = match fty {
+ let (data, size) = match fty {
ast::FloatTy::F32 => {
num.parse::<f32>().map_err(|_| ())?;
let mut f = num.parse::<Single>().unwrap_or_else(|e| {
}
};
- Ok(ConstValue::Scalar(Scalar::Bits { bits, size }))
+ Ok(ConstValue::Scalar(Scalar::from_uint(data, Size::from_bytes(size))))
}
/// Whether this constant/function needs overflow checks.
check_overflow: bool,
- /// See field with the same name on `Mir`.
+ /// See field with the same name on `mir::Body`.
control_flow_destroyed: Vec<(Span, String)>,
}
use rustc_apfloat::ieee::{Single, Double};
use rustc::mir::interpret::{
- Scalar, EvalResult, Pointer, PointerArithmetic, InterpError, truncate
+ Scalar, EvalResult, Pointer, PointerArithmetic, InterpError,
};
use rustc::mir::CastKind;
use rustc_apfloat::Float;
use rustc::ty::TyKind::*;
trace!("Casting {:?}: {:?} to {:?}", val, src_layout.ty, dest_layout.ty);
- match val {
- Scalar::Ptr(ptr) => self.cast_from_ptr(ptr, dest_layout.ty),
- Scalar::Bits { bits, size } => {
- debug_assert_eq!(size as u64, src_layout.size.bytes());
- debug_assert_eq!(truncate(bits, Size::from_bytes(size.into())), bits,
- "Unexpected value of size {} before casting", size);
-
- let res = match src_layout.ty.sty {
- Float(fty) => self.cast_from_float(bits, fty, dest_layout.ty)?,
- _ => self.cast_from_int(bits, src_layout, dest_layout)?,
- };
-
- // Sanity check
- match res {
- Scalar::Ptr(_) => bug!("Fabricated a ptr value from an int...?"),
- Scalar::Bits { bits, size } => {
- debug_assert_eq!(size as u64, dest_layout.size.bytes());
- debug_assert_eq!(truncate(bits, Size::from_bytes(size.into())), bits,
- "Unexpected value of size {} after casting", size);
- }
+ match val.to_bits_or_ptr(src_layout.size, self) {
+ Err(ptr) => self.cast_from_ptr(ptr, dest_layout.ty),
+ Ok(data) => {
+ match src_layout.ty.sty {
+ Float(fty) => self.cast_from_float(data, fty, dest_layout.ty),
+ _ => self.cast_from_int(data, src_layout, dest_layout),
}
- // Done
- Ok(res)
}
}
}
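The rewritten cast above dispatches through `to_bits_or_ptr`, matching on a `Result` instead of on the `Scalar` variants directly, which lets the size assertion live in one place. A minimal sketch of that shape, using simplified stand-in types (the real method also takes a `Size` and a `HasDataLayout` context):

```rust
// Simplified stand-ins for rustc's Scalar: either raw data of a known
// size in bytes, or a pointer (here just an address).
#[derive(Debug, PartialEq)]
enum Scalar {
    Raw { data: u128, size: u8 },
    Ptr(usize),
}

impl Scalar {
    /// `Ok(data)` for raw bits of the expected size, `Err(ptr)` for pointers,
    /// so callers write `match scalar.to_bits_or_ptr(..) { Ok(..) | Err(..) }`.
    fn to_bits_or_ptr(self, expected_size: u8) -> Result<u128, usize> {
        match self {
            Scalar::Raw { data, size } => {
                // The size check happens once, here, instead of at every use site.
                assert_eq!(size, expected_size, "scalar size mismatch");
                Ok(data)
            }
            Scalar::Ptr(p) => Err(p),
        }
    }
}

fn main() {
    assert_eq!(Scalar::Raw { data: 7, size: 8 }.to_bits_or_ptr(8), Ok(7));
    assert_eq!(Scalar::Ptr(0x1000).to_bits_or_ptr(8), Err(0x1000));
}
```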
trace!("cast_from_int: {}, {}, {}", v, src_layout.ty, dest_layout.ty);
use rustc::ty::TyKind::*;
match dest_layout.ty.sty {
- Int(_) | Uint(_) => {
+ Int(_) | Uint(_) | RawPtr(_) => {
let v = self.truncate(v, dest_layout);
Ok(Scalar::from_uint(v, dest_layout.size))
}
Ok(Scalar::from_uint(v, Size::from_bytes(4)))
},
- // No alignment check needed for raw pointers.
- // But we have to truncate to target ptr size.
- RawPtr(_) => {
- Ok(Scalar::from_uint(
- self.truncate_to_ptr(v).0,
- self.pointer_size(),
- ))
- },
-
            // Casts to bool are not permitted by rustc, so there is no need to handle them here.

_ => err!(Unimplemented(format!("int to {:?} cast", dest_layout.ty))),
}
// Function and callsite information
////////////////////////////////////////////////////////////////////////////////
/// The MIR for the function called on this frame.
- pub mir: &'mir mir::Mir<'tcx>,
+ pub mir: &'mir mir::Body<'tcx>,
/// The def_id and substs of the current function.
pub instance: ty::Instance<'tcx>,
}
#[inline(always)]
- pub(super) fn mir(&self) -> &'mir mir::Mir<'tcx> {
+ pub(super) fn mir(&self) -> &'mir mir::Body<'tcx> {
self.frame().mir
}
pub fn load_mir(
&self,
instance: ty::InstanceDef<'tcx>,
- ) -> EvalResult<'tcx, &'tcx mir::Mir<'tcx>> {
+ ) -> EvalResult<'tcx, &'tcx mir::Body<'tcx>> {
// do not continue if typeck errors occurred (can only occur in local crate)
let did = instance.def_id();
if did.is_local()
&mut self,
instance: ty::Instance<'tcx>,
span: source_map::Span,
- mir: &'mir mir::Mir<'tcx>,
+ mir: &'mir mir::Body<'tcx>,
return_place: Option<PlaceTy<'tcx, M::PointerTag>>,
return_to_block: StackPopCleanup,
) -> EvalResult<'tcx> {
args: &[OpTy<'tcx, Self::PointerTag>],
dest: Option<PlaceTy<'tcx, Self::PointerTag>>,
ret: Option<mir::BasicBlock>,
- ) -> EvalResult<'tcx, Option<&'mir mir::Mir<'tcx>>>;
+ ) -> EvalResult<'tcx, Option<&'mir mir::Body<'tcx>>>;
/// Directly process an intrinsic without pushing a stack frame.
/// If this returns successfully, the engine will take care of jumping to the next block.
use rustc::ty::{self, Instance, ParamEnv, query::TyCtxtAt};
use rustc::ty::layout::{Align, TargetDataLayout, Size, HasDataLayout};
-pub use rustc::mir::interpret::{truncate, write_target_uint, read_target_uint};
use rustc_data_structures::fx::{FxHashSet, FxHashMap};
use syntax::ast::Mutability;
required_align: Align
) -> EvalResult<'tcx> {
// Check non-NULL/Undef, extract offset
- let (offset, alloc_align) = match ptr {
- Scalar::Ptr(ptr) => {
+ let (offset, alloc_align) = match ptr.to_bits_or_ptr(self.pointer_size(), self) {
+ Err(ptr) => {
// check this is not NULL -- which we can ensure only if this is in-bounds
// of some (potentially dead) allocation.
let align = self.check_bounds_ptr(ptr, InboundsCheck::MaybeDead,
CheckInAllocMsg::NullPointerTest)?;
(ptr.offset.bytes(), align)
}
- Scalar::Bits { bits, size } => {
- assert_eq!(size as u64, self.pointer_size().bytes());
- assert!(bits < (1u128 << self.pointer_size().bits()));
+ Ok(data) => {
// check this is not NULL
- if bits == 0 {
+ if data == 0 {
return err!(InvalidNullPointerUsage);
}
// the "base address" is 0 and hence always aligned
- (bits as u64, required_align)
+ (data as u64, required_align)
}
};
// Check alignment
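After the match above, both arms have produced a byte offset and an alignment, and the integer-address arm relies on the base address being 0. A simplified sketch of the divisibility check that follows (the real code also compares against the allocation's own alignment, which this sketch omits):

```rust
/// Check that a byte offset satisfies a power-of-two alignment.
/// Hypothetical simplification of the check performed after the match:
/// an integer "pointer" has base address 0, so only the offset matters.
fn check_align(offset: u64, required_align: u64) -> Result<(), String> {
    assert!(required_align.is_power_of_two());
    if offset % required_align == 0 {
        Ok(())
    } else {
        Err(format!("offset {} is not {}-byte aligned", offset, required_align))
    }
}

fn main() {
    assert!(check_align(16, 8).is_ok());
    assert!(check_align(12, 8).is_err());
    // Address 0 would be caught earlier by the NULL check, but it is
    // trivially aligned for any alignment.
    assert!(check_align(0, 16).is_ok());
}
```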
} => {
let variants_start = niche_variants.start().as_u32() as u128;
let variants_end = niche_variants.end().as_u32() as u128;
- match raw_discr {
- ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) => {
+ let raw_discr = raw_discr.not_undef()
+ .map_err(|_| InterpError::InvalidDiscriminant(ScalarMaybeUndef::Undef))?;
+ match raw_discr.to_bits_or_ptr(discr_val.layout.size, self) {
+ Err(ptr) => {
// The niche must be just 0 (which an inbounds pointer value never is)
let ptr_valid = niche_start == 0 && variants_start == variants_end &&
self.memory.check_bounds_ptr(ptr, InboundsCheck::MaybeDead,
CheckInAllocMsg::NullPointerTest).is_ok();
if !ptr_valid {
- return err!(InvalidDiscriminant(raw_discr.erase_tag()));
+ return err!(InvalidDiscriminant(raw_discr.erase_tag().into()));
}
(dataful_variant.as_u32() as u128, dataful_variant)
},
- ScalarMaybeUndef::Scalar(Scalar::Bits { bits: raw_discr, size }) => {
- assert_eq!(size as u64, discr_val.layout.size.bytes());
+ Ok(raw_discr) => {
let adjusted_discr = raw_discr.wrapping_sub(niche_start)
.wrapping_add(variants_start);
if variants_start <= adjusted_discr && adjusted_discr <= variants_end {
(dataful_variant.as_u32() as u128, dataful_variant)
}
},
- ScalarMaybeUndef::Undef =>
- return err!(InvalidDiscriminant(ScalarMaybeUndef::Undef)),
}
}
})
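The niche decoding above maps the raw value back into the variant range with `raw_discr.wrapping_sub(niche_start).wrapping_add(variants_start)`. A worked sketch of just that arithmetic with made-up numbers (the surrounding range check and dataful-variant fallback are elided):

```rust
/// Mirror of the wrapping arithmetic in the niche-discriminant hunk.
fn decode_niche(raw_discr: u128, niche_start: u128, variants_start: u128) -> u128 {
    raw_discr.wrapping_sub(niche_start).wrapping_add(variants_start)
}

fn main() {
    // Suppose stored niche values 3..=5 encode variant indices 1..=3.
    assert_eq!(decode_niche(3, 3, 1), 1);
    assert_eq!(decode_niche(5, 3, 1), 3);
    // A raw value outside the niche decodes to something outside
    // variants_start..=variants_end; the real code then falls back to
    // the dataful variant.
    assert_eq!(decode_niche(9, 3, 1), 7);
}
```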
Immediate::Scalar(ScalarMaybeUndef::Scalar(Scalar::Ptr(_))) =>
assert_eq!(self.pointer_size(), dest.layout.size,
"Size mismatch when writing pointer"),
- Immediate::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits { size, .. })) =>
+ Immediate::Scalar(ScalarMaybeUndef::Scalar(Scalar::Raw { size, .. })) =>
assert_eq!(Size::from_bytes(size.into()), dest.layout.size,
"Size mismatch when writing bits"),
Immediate::Scalar(ScalarMaybeUndef::Undef) => {}, // undef can have any size
fn snapshot(&self, ctx: &'a Ctx) -> Self::Item {
match self {
Scalar::Ptr(p) => Scalar::Ptr(p.snapshot(ctx)),
- Scalar::Bits{ size, bits } => Scalar::Bits {
+ Scalar::Raw{ size, data } => Scalar::Raw {
+ data: *data,
size: *size,
- bits: *bits,
},
}
}
wrapping_range_format(&layout.valid_range, max_hi),
)
);
- let bits = match value {
- Scalar::Ptr(ptr) => {
+ let bits = match value.to_bits_or_ptr(op.layout.size, self.ecx) {
+ Err(ptr) => {
if lo == 1 && hi == max_hi {
// only NULL is not allowed.
// We can call `check_align` to check non-NULL-ness, but have to also look
);
}
}
- Scalar::Bits { bits, size } => {
- assert_eq!(size as u64, op.layout.size.bytes());
- bits
- }
+ Ok(data) =>
+ data
};
// Now compare. This is slightly subtle because this is a special "wrap-around" range.
if wrapping_range_contains(&layout.valid_range, bits) {
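The comment above notes the subtlety: a layout's valid range `lo..=hi` may wrap around, i.e. `lo > hi`, in which case membership means "at or above `lo`, or at or below `hi`". A sketch of that test with an assumed free-function signature (rustc's version operates on the layout's `valid_range`):

```rust
/// Membership in a possibly wrapping inclusive range `lo..=hi`.
fn wrapping_range_contains(lo: u128, hi: u128, value: u128) -> bool {
    if lo <= hi {
        // Ordinary range.
        lo <= value && value <= hi
    } else {
        // Wrapped range: valid values sit at either end, with the
        // excluded gap hi+1..=lo-1 in the middle.
        lo <= value || value <= hi
    }
}

fn main() {
    // Plain range 0..=10.
    assert!(wrapping_range_contains(0, 10, 5));
    assert!(!wrapping_range_contains(0, 10, 11));
    // Wrapped range 254..=1 (u8-sized value): contains 254, 255, 0, 1
    // but nothing in between.
    assert!(wrapping_range_contains(254, 1, 255));
    assert!(wrapping_range_contains(254, 1, 0));
    assert!(!wrapping_range_contains(254, 1, 100));
}
```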
use rustc::hir::intravisit::FnKind;
use rustc::hir::map::blocks::FnLikeNode;
use rustc::lint::builtin::UNCONDITIONAL_RECURSION;
-use rustc::mir::{self, Mir, TerminatorKind};
+use rustc::mir::{self, Body, TerminatorKind};
use rustc::ty::{self, AssocItem, AssocItemContainer, Instance, TyCtxt};
use rustc::ty::subst::InternalSubsts;
pub fn check(tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
def_id: DefId) {
let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
fn check_fn_for_unconditional_recursion(tcx: TyCtxt<'a, 'tcx, 'tcx>,
fn_kind: FnKind<'_>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
def_id: DefId) {
if let FnKind::Closure(_) = fn_kind {
// closures can't recur, so they don't matter.
struct MirNeighborCollector<'a, 'tcx: 'a> {
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &'a mir::Mir<'tcx>,
+ mir: &'a mir::Body<'tcx>,
output: &'a mut Vec<MonoItem<'tcx>>,
param_substs: SubstsRef<'tcx>,
}
mir: &mir,
output,
param_substs: instance.substs,
- }.visit_mir(&mir);
+ }.visit_body(&mir);
let param_env = ty::ParamEnv::reveal_all();
for i in 0..mir.promoted.len() {
use rustc_data_structures::indexed_vec::Idx;
fn make_shim<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
instance: ty::InstanceDef<'tcx>)
- -> &'tcx Mir<'tcx>
+ -> &'tcx Body<'tcx>
{
debug!("make_shim({:?})", instance);
fn build_drop_shim<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
def_id: DefId,
ty: Option<Ty<'tcx>>)
- -> Mir<'tcx>
+ -> Body<'tcx>
{
debug!("build_drop_shim(def_id={:?}, ty={:?})", def_id, ty);
block(&mut blocks, TerminatorKind::Goto { target: return_block });
block(&mut blocks, TerminatorKind::Return);
- let mut mir = Mir::new(
+ let mut mir = Body::new(
blocks,
IndexVec::from_elem_n(
SourceScopeData { span: span, parent_scope: None }, 1
}
pub struct DropShimElaborator<'a, 'tcx: 'a> {
- pub mir: &'a Mir<'tcx>,
+ pub mir: &'a Body<'tcx>,
pub patch: MirPatch<'tcx>,
pub tcx: TyCtxt<'a, 'tcx, 'tcx>,
pub param_env: ty::ParamEnv<'tcx>,
type Path = ();
fn patch(&mut self) -> &mut MirPatch<'tcx> { &mut self.patch }
- fn mir(&self) -> &'a Mir<'tcx> { self.mir }
+ fn mir(&self) -> &'a Body<'tcx> { self.mir }
fn tcx(&self) -> TyCtxt<'a, 'tcx, 'tcx> { self.tcx }
fn param_env(&self) -> ty::ParamEnv<'tcx> { self.param_env }
fn build_clone_shim<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
def_id: DefId,
self_ty: Ty<'tcx>)
- -> Mir<'tcx>
+ -> Body<'tcx>
{
debug!("build_clone_shim(def_id={:?})", def_id);
}
}
- fn into_mir(self) -> Mir<'tcx> {
- Mir::new(
+ fn into_mir(self) -> Body<'tcx> {
+ Body::new(
self.blocks,
IndexVec::from_elem_n(
SourceScopeData { span: self.span, parent_scope: None }, 1
rcvr_adjustment: Adjustment,
call_kind: CallKind,
untuple_args: Option<&[Ty<'tcx>]>)
- -> Mir<'tcx>
+ -> Body<'tcx>
{
debug!("build_call_shim(def_id={:?}, rcvr_adjustment={:?}, \
call_kind={:?}, untuple_args={:?})",
block(&mut blocks, vec![], TerminatorKind::Resume, true);
}
- let mut mir = Mir::new(
+ let mut mir = Body::new(
blocks,
IndexVec::from_elem_n(
SourceScopeData { span: span, parent_scope: None }, 1
ctor_id: hir::HirId,
fields: &[hir::StructField],
span: Span)
- -> Mir<'tcx>
+ -> Body<'tcx>
{
let tcx = infcx.tcx;
let gcx = tcx.global_tcx();
is_cleanup: false
};
- Mir::new(
+ Body::new(
IndexVec::from_elem_n(start_block, 1),
IndexVec::from_elem_n(
SourceScopeData { span: span, parent_scope: None }, 1
fn run_pass<'a, 'tcx>(&self,
_tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
self.add_call_guards(mir);
}
}
impl AddCallGuards {
- pub fn add_call_guards(&self, mir: &mut Mir<'_>) {
+ pub fn add_call_guards(&self, mir: &mut Body<'_>) {
let pred_count: IndexVec<_, _> =
mir.predecessors().iter().map(|ps| ps.len()).collect();
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>)
+ mir: &mut Body<'tcx>)
{
debug!("add_moves_for_packed_drops({:?} @ {:?})", src, mir.span);
add_moves_for_packed_drops(tcx, mir, src.def_id());
pub fn add_moves_for_packed_drops<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
def_id: DefId)
{
let patch = add_moves_for_packed_drops_patch(tcx, mir, def_id);
fn add_moves_for_packed_drops_patch<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
def_id: DefId)
-> MirPatch<'tcx>
{
fn add_move_for_packed_drop<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
patch: &mut MirPatch<'tcx>,
terminator: &Terminator<'tcx>,
loc: Location,
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>)
+ mir: &mut Body<'tcx>)
{
if !tcx.sess.opts.debugging_opts.mir_emit_retag {
return;
use crate::util;
pub struct UnsafetyChecker<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
const_context: bool,
min_const_fn: bool,
source_scope_local_data: &'a IndexVec<SourceScope, SourceScopeLocalData>,
fn new(
const_context: bool,
min_const_fn: bool,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
source_scope_local_data: &'a IndexVec<SourceScope, SourceScopeLocalData>,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
param_env: ty::ParamEnv<'tcx>,
let mut checker = UnsafetyChecker::new(
const_context, min_const_fn,
mir, source_scope_local_data, tcx, param_env);
- checker.visit_mir(mir);
+ checker.visit_body(mir);
check_unused_unsafe(tcx, def_id, &checker.used_unsafe, &mut checker.inherited_blocks);
UnsafetyCheckResult {
//! [`FakeRead`]: rustc::mir::StatementKind::FakeRead
//! [`Nop`]: rustc::mir::StatementKind::Nop
-use rustc::mir::{BorrowKind, Rvalue, Location, Mir};
+use rustc::mir::{BorrowKind, Rvalue, Location, Body};
use rustc::mir::{Statement, StatementKind};
use rustc::mir::visit::MutVisitor;
use rustc::ty::TyCtxt;
fn run_pass<'a, 'tcx>(&self,
_tcx: TyCtxt<'a, 'tcx, 'tcx>,
_source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let mut delete = DeleteNonCodegenStatements;
- delete.visit_mir(mir);
+ delete.visit_body(mir);
}
}
use rustc::hir::def::DefKind;
use rustc::mir::{
- AggregateKind, Constant, Location, Place, PlaceBase, Mir, Operand, Rvalue, Local,
+ AggregateKind, Constant, Location, Place, PlaceBase, Body, Operand, Rvalue, Local,
NullOp, UnOp, StatementKind, Statement, LocalKind, Static, StaticKind,
TerminatorKind, Terminator, ClearCrossCrate, SourceInfo, BinOp, ProjectionElem,
SourceScope, SourceScopeLocalData, LocalDecl, Promoted,
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
// will be evaluated by miri and produce its errors there
if source.promoted.is_some() {
return;
        // That would require a uniform one-def no-mutation analysis
// and RPO (or recursing when needing the value of a local).
let mut optimization_finder = ConstPropagator::new(mir, tcx, source);
- optimization_finder.visit_mir(mir);
+ optimization_finder.visit_body(mir);
// put back the data we stole from `mir`
std::mem::replace(
param_env: ParamEnv<'tcx>,
source_scope_local_data: ClearCrossCrate<IndexVec<SourceScope, SourceScopeLocalData>>,
local_decls: IndexVec<Local, LocalDecl<'tcx>>,
- promoted: IndexVec<Promoted, Mir<'tcx>>,
+ promoted: IndexVec<Promoted, Body<'tcx>>,
}
impl<'a, 'b, 'tcx> LayoutOf for ConstPropagator<'a, 'b, 'tcx> {
impl<'a, 'mir, 'tcx> ConstPropagator<'a, 'mir, 'tcx> {
fn new(
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
source: MirSource<'tcx>,
) -> ConstPropagator<'a, 'mir, 'tcx> {
can_const_prop,
places: IndexVec::from_elem(None, &mir.local_decls),
source_scope_local_data,
- //FIXME(wesleywiser) we can't steal this because `Visitor::super_visit_mir()` needs it
+ //FIXME(wesleywiser) we can't steal this because `Visitor::super_visit_body()` needs it
local_decls: mir.local_decls.clone(),
promoted,
}
type_size_of(self.tcx, self.param_env, ty).and_then(|n| Some(
ImmTy {
imm: Immediate::Scalar(
- Scalar::Bits {
- bits: n as u128,
- size: self.tcx.data_layout.pointer_size.bytes() as u8,
- }.into()
+ Scalar::from_uint(n, self.tcx.data_layout.pointer_size).into()
),
layout: self.tcx.layout_of(self.param_env.and(self.tcx.types.usize)).ok()?,
}.into()
impl CanConstProp {
    /// Returns `true` if `local` can be propagated.
- fn check(mir: &Mir<'_>) -> IndexVec<Local, bool> {
+ fn check(mir: &Body<'_>) -> IndexVec<Local, bool> {
let mut cpv = CanConstProp {
can_const_prop: IndexVec::from_elem(true, &mir.local_decls),
found_assignment: IndexVec::from_elem(false, &mir.local_decls),
trace!("local {:?} can't be propagated because it's not a temporary", local);
}
}
- cpv.visit_mir(mir);
+ cpv.visit_body(mir);
cpv.can_const_prop
}
}
.eval_operand(len, source_info)
.expect("len must be const");
let len = match self.ecx.read_scalar(len) {
- Ok(ScalarMaybeUndef::Scalar(Scalar::Bits {
- bits, ..
- })) => bits,
+ Ok(ScalarMaybeUndef::Scalar(Scalar::Raw {
+ data, ..
+ })) => data,
other => bug!("const len not primitive: {:?}", other),
};
let index = self
.eval_operand(index, source_info)
.expect("index must be const");
let index = match self.ecx.read_scalar(index) {
- Ok(ScalarMaybeUndef::Scalar(Scalar::Bits {
- bits, ..
- })) => bits,
+ Ok(ScalarMaybeUndef::Scalar(Scalar::Raw {
+ data, ..
+ })) => data,
other => bug!("const index not primitive: {:?}", other),
};
format!(
//! future.
use rustc::mir::{
- Constant, Local, LocalKind, Location, Place, PlaceBase, Mir, Operand, Rvalue, StatementKind
+ Constant, Local, LocalKind, Location, Place, PlaceBase, Body, Operand, Rvalue, StatementKind
};
use rustc::mir::visit::MutVisitor;
use rustc::ty::TyCtxt;
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
// We only run when the MIR optimization level is > 1.
        // This avoids a slow pass and keeps debug info intact.
if tcx.sess.opts.debugging_opts.mir_opt_level <= 1 {
}
fn eliminate_self_assignments(
- mir: &mut Mir<'_>,
+ mir: &mut Body<'_>,
def_use_analysis: &DefUseAnalysis,
) -> bool {
let mut changed = false;
}
impl<'tcx> Action<'tcx> {
- fn local_copy(mir: &Mir<'tcx>, def_use_analysis: &DefUseAnalysis, src_place: &Place<'tcx>)
+ fn local_copy(mir: &Body<'tcx>, def_use_analysis: &DefUseAnalysis, src_place: &Place<'tcx>)
-> Option<Action<'tcx>> {
// The source must be a local.
let src_local = if let Place::Base(PlaceBase::Local(local)) = *src_place {
}
fn perform(self,
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
def_use_analysis: &DefUseAnalysis,
dest_local: Local,
location: Location)
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let (basic_blocks, local_decls) = mir.basic_blocks_and_local_decls_mut();
let local_decls = &*local_decls;
for bb in basic_blocks {
use std::fs::File;
use std::io;
-use rustc::mir::Mir;
+use rustc::mir::Body;
use rustc::session::config::{OutputFilenames, OutputType};
use rustc::ty::TyCtxt;
use crate::transform::{MirPass, MirSource};
fn run_pass<'a, 'tcx>(&self,
_tcx: TyCtxt<'a, 'tcx, 'tcx>,
_source: MirSource<'tcx>,
- _mir: &mut Mir<'tcx>)
+ _mir: &mut Body<'tcx>)
{
}
}
pass_num: &dyn fmt::Display,
pass_name: &str,
source: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
is_after: bool) {
if mir_util::dump_enabled(tcx, pass_name, source) {
mir_util::dump_mir(tcx,
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>)
+ mir: &mut Body<'tcx>)
{
debug!("elaborate_drops({:?} @ {:?})", src, mir.span);
/// that can't drop anything.
fn find_dead_unwinds<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
def_id: hir::def_id::DefId,
env: &MoveDataParamEnv<'tcx, 'tcx>)
-> BitSet<BasicBlock>
impl InitializationData {
fn apply_location<'a,'tcx>(&mut self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
env: &MoveDataParamEnv<'tcx, 'tcx>,
loc: Location)
{
&mut self.ctxt.patch
}
- fn mir(&self) -> &'a Mir<'tcx> {
+ fn mir(&self) -> &'a Body<'tcx> {
self.ctxt.mir
}
struct ElaborateDropsCtxt<'a, 'tcx: 'a> {
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
env: &'a MoveDataParamEnv<'tcx, 'tcx>,
flow_inits: DataflowResults<'tcx, MaybeInitializedPlaces<'a, 'tcx, 'tcx>>,
flow_uninits: DataflowResults<'tcx, MaybeUninitializedPlaces<'a, 'tcx, 'tcx>>,
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
- EraseRegionsVisitor::new(tcx).visit_mir(mir);
+ mir: &mut Body<'tcx>) {
+ EraseRegionsVisitor::new(tcx).visit_body(mir);
}
}
}
// Create a statement which reads the discriminant into a temporary
- fn get_discr(&self, mir: &mut Mir<'tcx>) -> (Statement<'tcx>, Place<'tcx>) {
+ fn get_discr(&self, mir: &mut Body<'tcx>) -> (Statement<'tcx>, Place<'tcx>) {
let temp_decl = LocalDecl::new_internal(self.tcx.types.isize, mir.span);
let local_decls_len = mir.local_decls.push(temp_decl);
let temp = Place::Base(PlaceBase::Local(local_decls_len));
fn make_generator_state_argument_indirect<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
def_id: DefId,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let gen_ty = mir.local_decls.raw[1].ty;
let region = ty::ReFree(ty::FreeRegion {
mir.local_decls.raw[1].ty = ref_gen_ty;
// Add a deref to accesses of the generator state
- DerefArgVisitor.visit_mir(mir);
+ DerefArgVisitor.visit_body(mir);
}
fn make_generator_state_argument_pinned<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let ref_gen_ty = mir.local_decls.raw[1].ty;
let pin_did = tcx.lang_items().pin_type().unwrap();
mir.local_decls.raw[1].ty = pin_ref_gen_ty;
// Add the Pin field access to accesses of the generator state
- PinArgVisitor { ref_gen_ty }.visit_mir(mir);
+ PinArgVisitor { ref_gen_ty }.visit_body(mir);
}
fn replace_result_variable<'tcx>(
ret_ty: Ty<'tcx>,
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
) -> Local {
let source_info = source_info(mir);
let new_ret = LocalDecl {
RenameLocalVisitor {
from: RETURN_PLACE,
to: new_ret_local,
- }.visit_mir(mir);
+ }.visit_body(mir);
new_ret_local
}
fn locals_live_across_suspend_points(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
source: MirSource<'tcx>,
movable: bool,
) -> (
// Find the MIR locals which do not use StorageLive/StorageDead statements.
    // The storage of these locals is always live.
let mut ignored = StorageIgnored(BitSet::new_filled(mir.local_decls.len()));
- ignored.visit_mir(mir);
+ ignored.visit_body(mir);
// Calculate the MIR locals which have been previously
// borrowed (even if they are still active).
upvars: &Vec<Ty<'tcx>>,
interior: Ty<'tcx>,
movable: bool,
- mir: &mut Mir<'tcx>)
+ mir: &mut Body<'tcx>)
-> (FxHashMap<Local, (Ty<'tcx>, VariantIdx, usize)>,
GeneratorLayout<'tcx>,
FxHashMap<BasicBlock, liveness::LiveVarSet>)
(remap, layout, storage_liveness)
}
-fn insert_switch<'a, 'tcx>(mir: &mut Mir<'tcx>,
+fn insert_switch<'a, 'tcx>(mir: &mut Body<'tcx>,
cases: Vec<(usize, BasicBlock)>,
transform: &TransformVisitor<'a, 'tcx>,
default: TerminatorKind<'tcx>) {
fn elaborate_generator_drops<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
def_id: DefId,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
use crate::util::elaborate_drops::{elaborate_drop, Unwind};
use crate::util::patch::MirPatch;
use crate::shim::DropShimElaborator;
def_id: DefId,
source: MirSource<'tcx>,
gen_ty: Ty<'tcx>,
- mir: &Mir<'tcx>,
- drop_clean: BasicBlock) -> Mir<'tcx> {
+ mir: &Body<'tcx>,
+ drop_clean: BasicBlock) -> Body<'tcx> {
let mut mir = mir.clone();
let source_info = source_info(&mir);
mir
}
-fn insert_term_block<'tcx>(mir: &mut Mir<'tcx>, kind: TerminatorKind<'tcx>) -> BasicBlock {
+fn insert_term_block<'tcx>(mir: &mut Body<'tcx>, kind: TerminatorKind<'tcx>) -> BasicBlock {
let term_block = BasicBlock::new(mir.basic_blocks().len());
let source_info = source_info(mir);
mir.basic_blocks_mut().push(BasicBlockData {
}
fn insert_panic_block<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
message: AssertMessage<'tcx>) -> BasicBlock {
let assert_block = BasicBlock::new(mir.basic_blocks().len());
let term = TerminatorKind::Assert {
transform: TransformVisitor<'a, 'tcx>,
def_id: DefId,
source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
// Poison the generator when it unwinds
for block in mir.basic_blocks_mut() {
let source_info = block.terminator().source_info;
dump_mir(tcx, None, "generator_resume", &0, source, mir, |_, _| Ok(()) );
}
-fn source_info<'a, 'tcx>(mir: &Mir<'tcx>) -> SourceInfo {
+fn source_info<'a, 'tcx>(mir: &Body<'tcx>) -> SourceInfo {
SourceInfo {
span: mir.span,
scope: OUTERMOST_SOURCE_SCOPE,
}
}
-fn insert_clean_drop<'a, 'tcx>(mir: &mut Mir<'tcx>) -> BasicBlock {
+fn insert_clean_drop<'a, 'tcx>(mir: &mut Body<'tcx>) -> BasicBlock {
let return_block = insert_term_block(mir, TerminatorKind::Return);
    // Create a block to destroy an unresumed generator. This can only destroy upvars.
drop_clean
}
-fn create_cases<'a, 'tcx, F>(mir: &mut Mir<'tcx>,
+fn create_cases<'a, 'tcx, F>(mir: &mut Body<'tcx>,
transform: &TransformVisitor<'a, 'tcx>,
target: F) -> Vec<(usize, BasicBlock)>
where F: Fn(&SuspensionPoint) -> Option<BasicBlock> {
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let yield_ty = if let Some(yield_ty) = mir.yield_ty {
yield_ty
} else {
new_ret_local,
discr_ty,
};
- transform.visit_mir(mir);
+ transform.visit_body(mir);
    // Update our MIR struct to reflect the changes we've made
mir.yield_ty = None;
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
if tcx.sess.opts.debugging_opts.mir_opt_level >= 2 {
Inliner { tcx, source }.run_pass(mir);
}
}
impl<'a, 'tcx> Inliner<'a, 'tcx> {
- fn run_pass(&self, caller_mir: &mut Mir<'tcx>) {
+ fn run_pass(&self, caller_mir: &mut Body<'tcx>) {
// Keep a queue of callsites to try inlining on. We take
// advantage of the fact that queries detect cycles here to
// allow us to try and fetch the fully optimized MIR of a
fn get_valid_function_call(&self,
bb: BasicBlock,
bb_data: &BasicBlockData<'tcx>,
- caller_mir: &Mir<'tcx>,
+ caller_mir: &Body<'tcx>,
param_env: ParamEnv<'tcx>,
) -> Option<CallSite<'tcx>> {
// Don't inline calls that are in cleanup blocks.
fn consider_optimizing(&self,
callsite: CallSite<'tcx>,
- callee_mir: &Mir<'tcx>)
+ callee_mir: &Body<'tcx>)
-> bool
{
debug!("consider_optimizing({:?})", callsite);
fn should_inline(&self,
callsite: CallSite<'tcx>,
- callee_mir: &Mir<'tcx>)
+ callee_mir: &Body<'tcx>)
-> bool
{
debug!("should_inline({:?})", callsite);
fn inline_call(&self,
callsite: CallSite<'tcx>,
- caller_mir: &mut Mir<'tcx>,
- mut callee_mir: Mir<'tcx>) -> bool {
+ caller_mir: &mut Body<'tcx>,
+ mut callee_mir: Body<'tcx>) -> bool {
let terminator = caller_mir[callsite.bb].terminator.take().unwrap();
match terminator.kind {
// FIXME: Handle inlining of diverging calls
&self,
args: Vec<Operand<'tcx>>,
callsite: &CallSite<'tcx>,
- caller_mir: &mut Mir<'tcx>,
+ caller_mir: &mut Body<'tcx>,
) -> Vec<Local> {
let tcx = self.tcx;
&self,
arg: Operand<'tcx>,
callsite: &CallSite<'tcx>,
- caller_mir: &mut Mir<'tcx>,
+ caller_mir: &mut Body<'tcx>,
) -> Local {
// FIXME: Analysis of the usage of the arguments to avoid
// unnecessary temporaries.
//! Performs various peephole optimizations.
-use rustc::mir::{Constant, Location, Place, PlaceBase, Mir, Operand, ProjectionElem, Rvalue, Local};
+use rustc::mir::{Constant, Location, Place, PlaceBase, Body, Operand, ProjectionElem, Rvalue,
+ Local};
use rustc::mir::visit::{MutVisitor, Visitor};
use rustc::ty::{self, TyCtxt};
use rustc::util::nodemap::{FxHashMap, FxHashSet};
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
// We only run when optimizing MIR (at any level).
if tcx.sess.opts.debugging_opts.mir_opt_level == 0 {
return
// `Place::ty()`).
let optimizations = {
let mut optimization_finder = OptimizationFinder::new(mir, tcx);
- optimization_finder.visit_mir(mir);
+ optimization_finder.visit_body(mir);
optimization_finder.optimizations
};
// Then carry out those optimizations.
- MutVisitor::visit_mir(&mut InstCombineVisitor { optimizations }, mir);
+ MutVisitor::visit_body(&mut InstCombineVisitor { optimizations }, mir);
}
}
/// Finds optimization opportunities on the MIR.
struct OptimizationFinder<'b, 'a, 'tcx:'a+'b> {
- mir: &'b Mir<'tcx>,
+ mir: &'b Body<'tcx>,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
optimizations: OptimizationList<'tcx>,
}
impl<'b, 'a, 'tcx:'b> OptimizationFinder<'b, 'a, 'tcx> {
- fn new(mir: &'b Mir<'tcx>, tcx: TyCtxt<'a, 'tcx, 'tcx>) -> OptimizationFinder<'b, 'a, 'tcx> {
+ fn new(mir: &'b Body<'tcx>, tcx: TyCtxt<'a, 'tcx, 'tcx>) -> OptimizationFinder<'b, 'a, 'tcx> {
OptimizationFinder {
mir,
tcx,
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let debugging_override = tcx.sess.opts.debugging_opts.lower_128bit_ops;
let target_default = tcx.sess.host.options.i128_lowering;
if !debugging_override.unwrap_or(target_default) {
}
impl Lower128Bit {
- fn lower_128bit_ops<'a, 'tcx>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, mir: &mut Mir<'tcx>) {
+ fn lower_128bit_ops<'a, 'tcx>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, mir: &mut Body<'tcx>) {
let mut new_blocks = Vec::new();
let cur_len = mir.basic_blocks().len();
use crate::build;
use rustc::hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
-use rustc::mir::{Mir, MirPhase, Promoted};
+use rustc::mir::{Body, MirPhase, Promoted};
use rustc::ty::{TyCtxt, InstanceDef};
use rustc::ty::query::Providers;
use rustc::ty::steal::Steal;
tcx.arena.alloc(set)
}
-fn mir_built<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Steal<Mir<'tcx>> {
+fn mir_built<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Steal<Body<'tcx>> {
let mir = build::mir_build(tcx, def_id);
tcx.alloc_steal_mir(mir)
}
-/// Where a specific Mir comes from.
+/// Where a specific `mir::Body` comes from.
#[derive(Debug, Copy, Clone)]
pub struct MirSource<'tcx> {
pub instance: InstanceDef<'tcx>,
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
source: MirSource<'tcx>,
- mir: &mut Mir<'tcx>);
+ mir: &mut Body<'tcx>);
}
pub fn run_passes(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &mut Mir<'tcx>,
+ mir: &mut Body<'tcx>,
instance: InstanceDef<'tcx>,
mir_phase: MirPhase,
passes: &[&dyn MirPass],
) {
let phase_index = mir_phase.phase_index();
- let run_passes = |mir: &mut Mir<'tcx>, promoted| {
+ let run_passes = |mir: &mut Body<'tcx>, promoted| {
if mir.phase >= mir_phase {
return;
}
}
}
-fn mir_const<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Steal<Mir<'tcx>> {
+fn mir_const<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Steal<Body<'tcx>> {
    // The unsafety check uses the raw MIR, so make sure it is run
let _ = tcx.unsafety_check_result(def_id);
tcx.alloc_steal_mir(mir)
}
-fn mir_validated<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Steal<Mir<'tcx>> {
+fn mir_validated<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Steal<Body<'tcx>> {
let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
if let hir::BodyOwnerKind::Const = tcx.hir().body_owner_kind_by_hir_id(hir_id) {
// Ensure that we compute the `mir_const_qualif` for constants at
tcx.alloc_steal_mir(mir)
}
-fn optimized_mir<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Mir<'tcx> {
- // (Mir-)Borrowck uses `mir_validated`, so we have to force it to
+fn optimized_mir<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Body<'tcx> {
+ // `mir_borrowck` uses `mir_validated`, so we have to force it to
// execute before we can steal.
tcx.ensure().mir_borrowck(def_id);
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
no_landing_pads(tcx, mir)
}
}
-pub fn no_landing_pads<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, mir: &mut Mir<'tcx>) {
+pub fn no_landing_pads<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, mir: &mut Body<'tcx>) {
if tcx.sess.no_landing_pads() {
- NoLandingPads.visit_mir(mir);
+ NoLandingPads.visit_body(mir);
}
}
struct TempCollector<'tcx> {
temps: IndexVec<Local, TempState>,
span: Span,
- mir: &'tcx Mir<'tcx>,
+ mir: &'tcx Body<'tcx>,
}
impl<'tcx> Visitor<'tcx> for TempCollector<'tcx> {
}
}
-pub fn collect_temps(mir: &Mir<'_>,
+pub fn collect_temps(mir: &Body<'_>,
rpo: &mut ReversePostorder<'_, '_>) -> IndexVec<Local, TempState> {
let mut collector = TempCollector {
temps: IndexVec::from_elem(TempState::Undefined, &mir.local_decls),
struct Promoter<'a, 'tcx: 'a> {
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- source: &'a mut Mir<'tcx>,
- promoted: Mir<'tcx>,
+ source: &'a mut Body<'tcx>,
+ promoted: Body<'tcx>,
temps: &'a mut IndexVec<Local, TempState>,
/// If true, all nested temps are also kept in the
}
}
-pub fn promote_candidates<'a, 'tcx>(mir: &mut Mir<'tcx>,
+pub fn promote_candidates<'a, 'tcx>(mir: &mut Body<'tcx>,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
mut temps: IndexVec<Local, TempState>,
candidates: Vec<Candidate>) {
}
- // Declare return place local so that `Mir::new` doesn't complain.
+ // Declare return place local so that `mir::Body::new` doesn't complain.
let initial_locals = iter::once(
LocalDecl::new_return_place(tcx.types.never, mir.span)
).collect();
let promoter = Promoter {
- promoted: Mir::new(
+ promoted: Body::new(
IndexVec::new(),
// FIXME: maybe try to filter this to avoid blowing up
// memory usage?
tcx: TyCtxt<'a, 'tcx, 'tcx>,
param_env: ty::ParamEnv<'tcx>,
mode: Mode,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
per_local: PerQualif<BitSet<Local>>,
}
impl<'a, 'tcx> Checker<'a, 'tcx> {
fn new(tcx: TyCtxt<'a, 'tcx, 'tcx>,
def_id: DefId,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
mode: Mode)
-> Self {
assert!(def_id.is_local());
let mir = &tcx.mir_const(def_id).borrow();
if mir.return_ty().references_error() {
- tcx.sess.delay_span_bug(mir.span, "mir_const_qualif: Mir had errors");
+ tcx.sess.delay_span_bug(mir.span, "mir_const_qualif: MIR had errors");
return (1 << IsNotPromotable::IDX, tcx.arena.alloc(BitSet::new_empty(0)));
}
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
// There's not really any point in promoting errorful MIR.
if mir.return_ty().references_error() {
- tcx.sess.delay_span_bug(mir.span, "QualifyAndPromoteConstants: Mir had errors");
+ tcx.sess.delay_span_bug(mir.span, "QualifyAndPromoteConstants: MIR had errors");
return;
}
pub fn is_min_const_fn(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
def_id: DefId,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
) -> McfResult {
let mut current = def_id;
loop {
fn check_rvalue(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
rvalue: &Rvalue<'tcx>,
span: Span,
) -> McfResult {
fn check_statement(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
statement: &Statement<'tcx>,
) -> McfResult {
let span = statement.source_info.span;
}
}
-fn check_place(place: &Place<'tcx>, span: Span) -> McfResult {
+fn check_place(
+ place: &Place<'tcx>,
+ span: Span,
+) -> McfResult {
place.iterate(|place_base, place_projection| {
for proj in place_projection {
match proj.elem {
fn check_terminator(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
terminator: &Terminator<'tcx>,
) -> McfResult {
let span = terminator.source_info.span;
pub fn remove_noop_landing_pads<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &mut Mir<'tcx>)
+ mir: &mut Body<'tcx>)
{
if tcx.sess.no_landing_pads() {
return
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
remove_noop_landing_pads(tcx, mir);
}
}
fn is_nop_landing_pad(
&self,
bb: BasicBlock,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
nop_landing_pads: &BitSet<BasicBlock>,
) -> bool {
for stmt in &mir[bb].statements {
}
}
- fn remove_nop_landing_pads(&self, mir: &mut Mir<'_>) {
+ fn remove_nop_landing_pads(&self, mir: &mut Body<'_>) {
// make sure there's a single resume block
let resume_block = {
let patch = MirPatch::new(mir);
use rustc::ty::{self, TyCtxt};
use rustc::hir::def_id::DefId;
-use rustc::mir::{self, Mir, Location};
+use rustc::mir::{self, Body, Location};
use rustc_data_structures::bit_set::BitSet;
use crate::transform::{MirPass, MirSource};
impl MirPass for SanityCheck {
fn run_pass<'a, 'tcx>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>,
- src: MirSource<'tcx>, mir: &mut Mir<'tcx>) {
+ src: MirSource<'tcx>, mir: &mut Body<'tcx>) {
let def_id = src.def_id();
if !tcx.has_attr(def_id, sym::rustc_mir) {
debug!("skipping rustc_peek::SanityCheck on {}", tcx.def_path_str(def_id));
/// expression form above, then that emits an error as well, but those
/// errors are not intended to be used for unit tests.)
pub fn sanity_check_via_rustc_peek<'a, 'tcx, O>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
def_id: DefId,
_attributes: &[ast::Attribute],
results: &DataflowResults<'tcx, O>)
}
fn each_block<'a, 'tcx, O>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
results: &DataflowResults<'tcx, O>,
bb: mir::BasicBlock) where
O: BitDenotation<'tcx, Idx=MovePathIndex> + HasMoveData<'tcx>
}
}
-pub fn simplify_cfg(mir: &mut Mir<'_>) {
+pub fn simplify_cfg(mir: &mut Body<'_>) {
CfgSimplifier::new(mir).simplify();
remove_dead_blocks(mir);
fn run_pass<'a, 'tcx>(&self,
_tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
debug!("SimplifyCfg({:?}) - simplifying {:?}", self.label, mir);
simplify_cfg(mir);
}
}
impl<'a, 'tcx: 'a> CfgSimplifier<'a, 'tcx> {
- pub fn new(mir: &'a mut Mir<'tcx>) -> Self {
+ pub fn new(mir: &'a mut Body<'tcx>) -> Self {
let mut pred_count = IndexVec::from_elem(0u32, mir.basic_blocks());
// we can't use mir.predecessors() here because that counts
}
}
-pub fn remove_dead_blocks(mir: &mut Mir<'_>) {
+pub fn remove_dead_blocks(mir: &mut Body<'_>) {
let mut seen = BitSet::new_empty(mir.basic_blocks().len());
for (bb, _) in traversal::preorder(mir) {
seen.insert(bb.index());
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let mut marker = DeclMarker { locals: BitSet::new_empty(mir.local_decls.len()) };
- marker.visit_mir(mir);
+ marker.visit_body(mir);
// Return pointer and arguments are always live
marker.locals.insert(RETURN_PLACE);
for arg in mir.args_iter() {
let map = make_local_map(&mut mir.local_decls, marker.locals);
// Update references to all vars and tmps now
- LocalUpdater { map }.visit_mir(mir);
+ LocalUpdater { map }.visit_body(mir);
mir.local_decls.shrink_to_fit();
}
}
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
for block in mir.basic_blocks_mut() {
let terminator = block.terminator_mut();
terminator.kind = match terminator.kind {
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let mut patch = MirPatch::new(mir);
{
let mut visitor = UniformArrayMoveOutVisitor{mir, patch: &mut patch, tcx};
- visitor.visit_mir(mir);
+ visitor.visit_body(mir);
}
patch.apply(mir);
}
}
struct UniformArrayMoveOutVisitor<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
+ mir: &'a Body<'tcx>,
patch: &'a mut MirPatch<'tcx>,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
}
fn run_pass<'a, 'tcx>(&self,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
_src: MirSource<'tcx>,
- mir: &mut Mir<'tcx>) {
+ mir: &mut Body<'tcx>) {
let mut patch = MirPatch::new(mir);
{
let mut visitor = RestoreDataCollector {
locals_use: IndexVec::from_elem(LocalUse::new(), &mir.local_decls),
candidates: vec![],
};
- visitor.visit_mir(mir);
+ visitor.visit_body(mir);
for candidate in &visitor.candidates {
let statement = &mir[candidate.block].statements[candidate.statement_index];
}
fn try_get_item_source<'a, 'tcx>(local_use: &LocalUse,
- mir: &'a Mir<'tcx>) -> Option<(u32, &'a Place<'tcx>)> {
+ mir: &'a Body<'tcx>) -> Option<(u32, &'a Place<'tcx>)> {
if let Some(location) = local_use.first_use {
let block = &mir[location.block];
if block.statements.len() > location.statement_index {
use rustc::mir::{Local, Location};
-use rustc::mir::Mir;
+use rustc::mir::Body;
use rustc::mir::visit::PlaceContext;
use rustc::mir::visit::Visitor;
fn find_assignments(&self, local: Local) -> Vec<Location>;
}
-impl<'tcx> FindAssignments for Mir<'tcx>{
+impl<'tcx> FindAssignments for Body<'tcx>{
fn find_assignments(&self, local: Local) -> Vec<Location>{
let mut visitor = FindLocalAssignmentVisitor{ needle: local, locations: vec![]};
- visitor.visit_mir(self);
+ visitor.visit_body(self);
visitor.locations
}
}
//! Def-use analysis.
-use rustc::mir::{Local, Location, Mir};
+use rustc::mir::{Local, Location, Body};
use rustc::mir::visit::{PlaceContext, MutVisitor, Visitor};
use rustc_data_structures::indexed_vec::IndexVec;
use std::mem;
}
impl DefUseAnalysis {
- pub fn new(mir: &Mir<'_>) -> DefUseAnalysis {
+ pub fn new(mir: &Body<'_>) -> DefUseAnalysis {
DefUseAnalysis {
info: IndexVec::from_elem_n(Info::new(), mir.local_decls.len()),
}
}
- pub fn analyze(&mut self, mir: &Mir<'_>) {
+ pub fn analyze(&mut self, mir: &Body<'_>) {
self.clear();
let mut finder = DefUseFinder {
info: mem::replace(&mut self.info, IndexVec::new()),
};
- finder.visit_mir(mir);
+ finder.visit_body(mir);
self.info = finder.info
}
&self.info[local]
}
- fn mutate_defs_and_uses<F>(&self, local: Local, mir: &mut Mir<'_>, mut callback: F)
+ fn mutate_defs_and_uses<F>(&self, local: Local, mir: &mut Body<'_>, mut callback: F)
where F: for<'a> FnMut(&'a mut Local,
PlaceContext,
Location) {
// FIXME(pcwalton): this should update the def-use chains.
pub fn replace_all_defs_and_uses_with(&self,
local: Local,
- mir: &mut Mir<'_>,
+ mir: &mut Body<'_>,
new_local: Local) {
self.mutate_defs_and_uses(local, mir, |local, _, _| *local = new_local)
}
}
impl<F> MutateUseVisitor<F> {
- fn new(query: Local, callback: F, _: &Mir<'_>)
+ fn new(query: Local, callback: F, _: &Body<'_>)
-> MutateUseVisitor<F>
where F: for<'a> FnMut(&'a mut Local, PlaceContext, Location) {
MutateUseVisitor {
type Path : Copy + fmt::Debug;
fn patch(&mut self) -> &mut MirPatch<'tcx>;
- fn mir(&self) -> &'a Mir<'tcx>;
+ fn mir(&self) -> &'a Body<'tcx>;
fn tcx(&self) -> TyCtxt<'a, 'tcx, 'tcx>;
fn param_env(&self) -> ty::ParamEnv<'tcx>;
/// Write a graphviz DOT graph of the MIR.
pub fn write_mir_fn_graphviz<'tcx, W>(tcx: TyCtxt<'_, '_, 'tcx>,
def_id: DefId,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
w: &mut W) -> io::Result<()>
where W: Write
{
/// `init` and `fini` are callbacks for emitting additional rows of
/// data (using HTML enclosed with `<tr>` in the emitted text).
pub fn write_node_label<W: Write, INIT, FINI>(block: BasicBlock,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
w: &mut W,
num_cols: u32,
init: INIT,
}
/// Write a graphviz DOT node for the given basic block.
-fn write_node<W: Write>(block: BasicBlock, mir: &Mir<'_>, w: &mut W) -> io::Result<()> {
+fn write_node<W: Write>(block: BasicBlock, mir: &Body<'_>, w: &mut W) -> io::Result<()> {
// Start a new node with the label to follow, in one of DOT's pseudo-HTML tables.
write!(w, r#" {} [shape="none", label=<"#, node(block))?;
write_node_label(block, mir, w, 1, |_| Ok(()), |_| Ok(()))?;
}
/// Write graphviz DOT edges with labels between the given basic block and all of its successors.
-fn write_edges<W: Write>(source: BasicBlock, mir: &Mir<'_>, w: &mut W) -> io::Result<()> {
+fn write_edges<W: Write>(source: BasicBlock, mir: &Body<'_>, w: &mut W) -> io::Result<()> {
let terminator = mir[source].terminator();
let labels = terminator.kind.fmt_successor_labels();
/// all the variables and temporaries.
fn write_graph_label<'a, 'gcx, 'tcx, W: Write>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
def_id: DefId,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
w: &mut W)
-> io::Result<()> {
write!(w, " label=<fn {}(", dot::escape_html(&tcx.def_path_str(def_id)))?;
/// Computes which local variables are live within the given function
/// `mir`, including drops.
pub fn liveness_of_locals<'tcx>(
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
) -> LivenessResult {
let num_live_vars = mir.local_decls.len();
tcx: TyCtxt<'a, 'tcx, 'tcx>,
pass_name: &str,
source: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
result: &LivenessResult,
) {
if !dump_enabled(tcx, pass_name, source) {
pass_name: &str,
node_path: &str,
source: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
result: &LivenessResult,
) {
let mut file_path = PathBuf::new();
pub fn write_mir_fn<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
src: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
w: &mut dyn Write,
result: &LivenessResult,
) -> io::Result<()> {
}
impl<'tcx> MirPatch<'tcx> {
- pub fn new(mir: &Mir<'tcx>) -> Self {
+ pub fn new(mir: &Body<'tcx>) -> Self {
let mut result = MirPatch {
patch_map: IndexVec::from_elem(None, mir.basic_blocks()),
new_blocks: vec![],
self.patch_map[bb].is_some()
}
- pub fn terminator_loc(&self, mir: &Mir<'tcx>, bb: BasicBlock) -> Location {
+ pub fn terminator_loc(&self, mir: &Body<'tcx>, bb: BasicBlock) -> Location {
let offset = match bb.index().checked_sub(mir.basic_blocks().len()) {
Some(index) => self.new_blocks[index].statements.len(),
None => mir[bb].statements.len()
self.make_nop.push(loc);
}
- pub fn apply(self, mir: &mut Mir<'tcx>) {
+ pub fn apply(self, mir: &mut Body<'tcx>) {
debug!("MirPatch: make nops at: {:?}", self.make_nop);
for loc in self.make_nop {
mir.make_statement_nop(loc);
}
}
- pub fn source_info_for_location(&self, mir: &Mir<'_>, loc: Location) -> SourceInfo {
+ pub fn source_info_for_location(&self, mir: &Body<'_>, loc: Location) -> SourceInfo {
let data = match loc.block.index().checked_sub(mir.basic_blocks().len()) {
Some(new) => &self.new_blocks[new],
None => &mir[loc.block]
pass_name: &str,
disambiguator: &dyn Display,
source: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
extra_data: F,
) where
F: FnMut(PassWhere, &mut dyn Write) -> io::Result<()>,
node_path: &str,
disambiguator: &dyn Display,
source: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
mut extra_data: F,
) where
F: FnMut(PassWhere, &mut dyn Write) -> io::Result<()>,
pub fn write_mir_fn<'a, 'gcx, 'tcx, F>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
src: MirSource<'tcx>,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
extra_data: &mut F,
w: &mut dyn Write,
) -> io::Result<()>
pub fn write_basic_block<'cx, 'gcx, 'tcx, F>(
tcx: TyCtxt<'cx, 'gcx, 'tcx>,
block: BasicBlock,
- mir: &Mir<'tcx>,
+ mir: &Body<'tcx>,
extra_data: &mut F,
w: &mut dyn Write,
) -> io::Result<()>
/// Prints local variables in a scope tree.
fn write_scope_tree(
tcx: TyCtxt<'_, '_, '_>,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
scope_tree: &FxHashMap<SourceScope, Vec<SourceScope>>,
w: &mut dyn Write,
parent: SourceScope,
pub fn write_mir_intro<'a, 'gcx, 'tcx>(
tcx: TyCtxt<'a, 'gcx, 'tcx>,
src: MirSource<'tcx>,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
w: &mut dyn Write,
) -> io::Result<()> {
write_mir_sig(tcx, src, mir, w)?;
fn write_mir_sig(
tcx: TyCtxt<'_, '_, '_>,
src: MirSource<'tcx>,
- mir: &Mir<'_>,
+ mir: &Body<'_>,
w: &mut dyn Write,
) -> io::Result<()> {
use rustc::hir::def::DefKind;
Ok(())
}
-fn write_user_type_annotations(mir: &Mir<'_>, w: &mut dyn Write) -> io::Result<()> {
+fn write_user_type_annotations(mir: &Body<'_>, w: &mut dyn Write) -> io::Result<()> {
if !mir.user_type_annotations.is_empty() {
writeln!(w, "| User Type Annotations")?;
}
}
fn debug_ex_clause(&mut self, value: &'v ChalkExClause<'tcx>) -> Box<dyn Debug + 'v> {
- let string = format!("{:?}", self.infcx.resolve_type_vars_if_possible(value));
+ let string = format!("{:?}", self.infcx.resolve_vars_if_possible(value));
Box::new(string)
}
use rustc::traits::WhereClause::*;
use rustc::infer::canonical::OriginalQueryValues;
- let goal = self.infcx.resolve_type_vars_if_possible(goal);
+ let goal = self.infcx.resolve_vars_if_possible(goal);
debug!("program_clauses(goal = {:?})", goal);
) -> Fallible<ChalkExClause<'tcx>> {
debug!(
"apply_answer_subst(ex_clause = {:?}, selected_goal = {:?})",
- self.infcx.resolve_type_vars_if_possible(&ex_clause),
- self.infcx.resolve_type_vars_if_possible(selected_goal)
+ self.infcx.resolve_vars_if_possible(&ex_clause),
+ self.infcx.resolve_vars_if_possible(selected_goal)
);
let (answer_subst, _) = self.infcx.instantiate_canonical_with_fresh_inference_vars(
ty::Predicate::TypeOutlives(ref data) => match data.no_bound_vars() {
None => vec![],
Some(ty::OutlivesPredicate(ty_a, r_b)) => {
- let ty_a = infcx.resolve_type_vars_if_possible(&ty_a);
+ let ty_a = infcx.resolve_vars_if_possible(&ty_a);
let mut components = smallvec![];
tcx.push_outlives_components(ty_a, &mut components);
implied_bounds_from_components(r_b, components)
None,
);
- let normalized_value = infcx.resolve_type_vars_if_possible(&normalized_value);
+ let normalized_value = infcx.resolve_vars_if_possible(&normalized_value);
let normalized_value = infcx.tcx.erase_regions(&normalized_value);
tcx.lift_to_global(&normalized_value).unwrap()
}
// Now that we know the types can be unified we find the unified type and use
// it to type the entire expression.
- let common_type = self.resolve_type_vars_if_possible(&lhs_ty);
+ let common_type = self.resolve_vars_if_possible(&lhs_ty);
// subtyping doesn't matter here, as the value is some kind of scalar
self.demand_eqtype_pat(pat.span, expected, lhs_ty, discrim_span);
body_id,
param_env,
steps: vec![],
- cur_ty: infcx.resolve_type_vars_if_possible(&base_ty),
+ cur_ty: infcx.resolve_vars_if_possible(&base_ty),
obligations: vec![],
at_start: true,
include_raw_pointers: false,
ty, normalized_ty, obligations);
self.obligations.extend(obligations);
- Some(self.infcx.resolve_type_vars_if_possible(&normalized_ty))
+ Some(self.infcx.resolve_vars_if_possible(&normalized_ty))
}
/// Returns the final type, generating an error if it is an
/// Returns the final type we ended up with, which may well be an
/// inference variable (we will resolve it first, if possible).
pub fn maybe_ambiguous_final_ty(&self) -> Ty<'tcx> {
- self.infcx.resolve_type_vars_if_possible(&self.cur_ty)
+ self.infcx.resolve_vars_if_possible(&self.cur_ty)
}
pub fn step_count(&self) -> usize {
{
debug!("pointer_kind({:?}, {:?})", t, span);
- let t = self.resolve_type_vars_if_possible(&t);
+ let t = self.resolve_vars_if_possible(&t);
if t.references_error() {
return Err(ErrorReported);
let tstr = fcx.ty_to_string(self.cast_ty);
let mut err = type_error_struct!(fcx.tcx.sess, self.span, self.expr_ty, E0620,
"cast to unsized type: `{}` as `{}`",
- fcx.resolve_type_vars_if_possible(&self.expr_ty),
+ fcx.resolve_vars_if_possible(&self.expr_ty),
tstr);
match self.expr_ty.sty {
ty::Ref(_, _, mt) => {
let input_tys = if is_fn {
let arg_param_ty = trait_ref.skip_binder().substs.type_at(1);
- let arg_param_ty = self.resolve_type_vars_if_possible(&arg_param_ty);
+ let arg_param_ty = self.resolve_vars_if_possible(&arg_param_ty);
debug!("deduce_sig_from_projection: arg_param_ty={:?}", arg_param_ty);
match arg_param_ty.sty {
};
let ret_param_ty = projection.skip_binder().ty;
- let ret_param_ty = self.resolve_type_vars_if_possible(&ret_param_ty);
+ let ret_param_ty = self.resolve_vars_if_possible(&ret_param_ty);
debug!("deduce_sig_from_projection: ret_param_ty={:?}", ret_param_ty);
let sig = self.tcx.mk_fn_sig(
// Uncertain or unimplemented.
Ok(None) => {
if trait_ref.def_id() == unsize_did {
- let trait_ref = self.resolve_type_vars_if_possible(&trait_ref);
+ let trait_ref = self.resolve_vars_if_possible(&trait_ref);
let self_ty = trait_ref.skip_binder().self_ty();
let unsize_ty = trait_ref.skip_binder().input_types().nth(1).unwrap();
debug!("coerce_unsized: ambiguous unsize case for {:?}", trait_ref);
});
if let Some(yield_span) = live_across_yield {
- let ty = self.fcx.resolve_type_vars_if_possible(&ty);
+ let ty = self.fcx.resolve_vars_if_possible(&ty);
debug!("type in expr = {:?}, scope = {:?}, type = {:?}, count = {}, yield_span = {:?}",
expr, scope, ty, self.expr_count, yield_span);
let base_ty = self.tables.borrow().expr_adjustments(base_expr).last()
.map_or_else(|| self.node_ty(expr.hir_id), |adj| adj.target);
- let base_ty = self.resolve_type_vars_if_possible(&base_ty);
+ let base_ty = self.resolve_vars_if_possible(&base_ty);
// Need to deref because overloaded place ops take self by-reference.
let base_ty = base_ty.builtin_deref(false)
scope: ProbeScope)
-> probe::PickResult<'tcx> {
let mode = probe::Mode::MethodCall;
- let self_ty = self.resolve_type_vars_if_possible(&self_ty);
+ let self_ty = self.resolve_vars_if_possible(&self_ty);
self.probe_for_name(span, mode, method_name, IsSuggestion(false),
self_ty, call_expr.hir_id, scope)
}
// and point at it rather than reporting the entire
// trait-ref?
result = ProbeResult::NoMatch;
- let trait_ref = self.resolve_type_vars_if_possible(&trait_ref);
+ let trait_ref = self.resolve_vars_if_possible(&trait_ref);
possibly_unsatisfied_predicates.push(trait_ref);
}
}
// Evaluate those obligations to see if they might possibly hold.
for o in candidate_obligations.into_iter().chain(sub_obligations) {
- let o = self.resolve_type_vars_if_possible(&o);
+ let o = self.resolve_vars_if_possible(&o);
if !self.predicate_may_hold(&o) {
result = ProbeResult::NoMatch;
if let &ty::Predicate::Trait(ref pred) = &o.predicate {
if let (Some(return_ty), Some(xform_ret_ty)) =
(self.return_type, probe.xform_ret_ty)
{
- let xform_ret_ty = self.resolve_type_vars_if_possible(&xform_ret_ty);
+ let xform_ret_ty = self.resolve_vars_if_possible(&xform_ret_ty);
debug!("comparing return_ty {:?} with xform ret ty {:?}",
return_ty,
probe.xform_ret_ty);
}) => {
let tcx = self.tcx;
- let actual = self.resolve_type_vars_if_possible(&rcvr_ty);
+ let actual = self.resolve_vars_if_possible(&rcvr_ty);
let ty_str = self.ty_to_string(actual);
let is_method = mode == Mode::MethodCall;
let item_kind = if is_method {
match self {
NoExpectation => NoExpectation,
ExpectCastableToType(t) => {
- ExpectCastableToType(fcx.resolve_type_vars_if_possible(&t))
+ ExpectCastableToType(fcx.resolve_vars_if_possible(&t))
}
ExpectHasType(t) => {
- ExpectHasType(fcx.resolve_type_vars_if_possible(&t))
+ ExpectHasType(fcx.resolve_vars_if_possible(&t))
}
ExpectRvalueLikeUnsized(t) => {
- ExpectRvalueLikeUnsized(fcx.resolve_type_vars_if_possible(&t))
+ ExpectRvalueLikeUnsized(fcx.resolve_vars_if_possible(&t))
}
}
}
}
/// Resolves type variables in `ty` if possible. Unlike the infcx
- /// version (resolve_type_vars_if_possible), this version will
+ /// version (resolve_vars_if_possible), this version will
/// also select obligations if it seems useful, in an effort
/// to get more type information.
fn resolve_type_vars_with_obligations(&self, mut ty: Ty<'tcx>) -> Ty<'tcx> {
}
// If `ty` is a type variable, see whether we already know what it is.
- ty = self.resolve_type_vars_if_possible(&ty);
+ ty = self.resolve_vars_if_possible(&ty);
if !ty.has_infer_types() {
debug!("resolve_type_vars_with_obligations: ty={:?}", ty);
return ty;
// indirect dependencies that don't seem worth tracking
// precisely.
self.select_obligations_where_possible(false);
- ty = self.resolve_type_vars_if_possible(&ty);
+ ty = self.resolve_vars_if_possible(&ty);
debug!("resolve_type_vars_with_obligations: ty={:?}", ty);
ty
#[inline]
pub fn write_ty(&self, id: hir::HirId, ty: Ty<'tcx>) {
debug!("write_ty({:?}, {:?}) in fcx {}",
- id, self.resolve_type_vars_if_possible(&ty), self.tag());
+ id, self.resolve_vars_if_possible(&ty), self.tag());
self.tables.borrow_mut().node_types_mut().insert(id, ty);
if ty.references_error() {
} else {
// is the missing argument of type `()`?
let sugg_unit = if expected_arg_tys.len() == 1 && supplied_arg_count == 0 {
- self.resolve_type_vars_if_possible(&expected_arg_tys[0]).is_unit()
+ self.resolve_vars_if_possible(&expected_arg_tys[0]).is_unit()
} else if fn_inputs.len() == 1 && supplied_arg_count == 0 {
- self.resolve_type_vars_if_possible(&fn_inputs[0]).is_unit()
+ self.resolve_vars_if_possible(&fn_inputs[0]).is_unit()
} else {
false
};
}
ty::FnDef(..) => {
let ptr_ty = self.tcx.mk_fn_ptr(arg_ty.fn_sig(self.tcx));
- let ptr_ty = self.resolve_type_vars_if_possible(&ptr_ty);
+ let ptr_ty = self.resolve_vars_if_possible(&ptr_ty);
variadic_error(tcx.sess, arg.span, arg_ty, &ptr_ty.to_string());
}
_ => {}
// Record all the argument types, with the substitutions
// produced from the above subtyping unification.
Ok(formal_args.iter().map(|ty| {
- self.resolve_type_vars_if_possible(ty)
+ self.resolve_vars_if_possible(ty)
}).collect())
}).unwrap_or_default();
debug!("expected_inputs_for_expected_output(formal={:?} -> {:?}, expected={:?} -> {:?})",
// Find the type of `e`. Supply hints based on the type we are casting to,
// if appropriate.
let t_cast = self.to_ty_saving_user_provided_ty(t);
- let t_cast = self.resolve_type_vars_if_possible(&t_cast);
+ let t_cast = self.resolve_vars_if_possible(&t_cast);
let t_expr = self.check_expr_with_expectation(e, ExpectCastableToType(t_cast));
- let t_cast = self.resolve_type_vars_if_possible(&t_cast);
+ let t_cast = self.resolve_vars_if_possible(&t_cast);
// Eagerly check for some obvious errors.
if t_expr.references_error() || t_cast.references_error() {
method.sig.output()
}
Err(()) => {
- let actual = self.resolve_type_vars_if_possible(&operand_ty);
+ let actual = self.resolve_vars_if_possible(&operand_ty);
if !actual.references_error() {
let mut err = struct_span_err!(self.tcx.sess, ex.span, E0600,
"cannot apply unary operator `{}` to type `{}`",
/// of b will be `&<R0>.i32` and then `*b` will require that `<R0>` be bigger than the let and
/// the `*b` expression, so we will effectively resolve `<R0>` to be the block B.
pub fn resolve_type(&self, unresolved_ty: Ty<'tcx>) -> Ty<'tcx> {
- self.resolve_type_vars_if_possible(&unresolved_ty)
+ self.resolve_vars_if_possible(&unresolved_ty)
}
/// Try to resolve the type for the given node.
hir::ExprKind::Unary(hir::UnNeg, ref inner)
| hir::ExprKind::Unary(hir::UnNot, ref inner) => {
let inner_ty = self.fcx.node_ty(inner.hir_id);
- let inner_ty = self.fcx.resolve_type_vars_if_possible(&inner_ty);
+ let inner_ty = self.fcx.resolve_vars_if_possible(&inner_ty);
if inner_ty.is_scalar() {
let mut tables = self.fcx.tables.borrow_mut();
hir::ExprKind::Binary(ref op, ref lhs, ref rhs)
| hir::ExprKind::AssignOp(ref op, ref lhs, ref rhs) => {
let lhs_ty = self.fcx.node_ty(lhs.hir_id);
- let lhs_ty = self.fcx.resolve_type_vars_if_possible(&lhs_ty);
+ let lhs_ty = self.fcx.resolve_vars_if_possible(&lhs_ty);
let rhs_ty = self.fcx.node_ty(rhs.hir_id);
- let rhs_ty = self.fcx.resolve_type_vars_if_possible(&rhs_ty);
+ let rhs_ty = self.fcx.resolve_vars_if_possible(&rhs_ty);
if lhs_ty.is_scalar() && rhs_ty.is_scalar() {
let mut tables = self.fcx.tables.borrow_mut();
// All valid indexing looks like this; we might encounter invalid indexes at this point
if let ty::Ref(_, base_ty, _) = tables.expr_ty_adjusted(&base).sty {
let index_ty = tables.expr_ty_adjusted(&index);
- let index_ty = self.fcx.resolve_type_vars_if_possible(&index_ty);
+ let index_ty = self.fcx.resolve_vars_if_possible(&index_ty);
if base_ty.builtin_index().is_some() && index_ty == self.fcx.tcx.types.usize {
// Remove the method call record
assert_eq!(reader.get_ref().pos, expected);
}
+ #[test]
+ fn test_buffered_reader_seek_underflow_discard_buffer_between_seeks() {
+ // Gimmick reader whose `seek` succeeds once, then returns an error.
+ struct ErrAfterFirstSeekReader {
+ first_seek: bool,
+ }
+ impl Read for ErrAfterFirstSeekReader {
+ fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
+ for x in &mut *buf {
+ *x = 0;
+ }
+ Ok(buf.len())
+ }
+ }
+ impl Seek for ErrAfterFirstSeekReader {
+ fn seek(&mut self, _: SeekFrom) -> io::Result<u64> {
+ if self.first_seek {
+ self.first_seek = false;
+ Ok(0)
+ } else {
+ Err(io::Error::new(io::ErrorKind::Other, "oh no!"))
+ }
+ }
+ }
+
+ let mut reader = BufReader::with_capacity(5, ErrAfterFirstSeekReader { first_seek: true });
+ assert_eq!(reader.fill_buf().ok(), Some(&[0, 0, 0, 0, 0][..]));
+
+ // The following seek will require two underlying seeks. The first will
+ // succeed but the second will fail. This should still invalidate the
+ // buffer.
+ assert!(reader.seek(SeekFrom::Current(i64::min_value())).is_err());
+ assert_eq!(reader.buffer().len(), 0);
+ }
+
#[test]
fn test_buffered_writer() {
let inner = Vec::new();
/// let metadata = f.metadata()?;
/// let permissions = metadata.permissions();
///
- /// println!("permissions: {}", permissions.mode());
+ /// println!("permissions: {:o}", permissions.mode());
/// Ok(())
/// }
/// ```
/// let metadata = f.metadata()?;
/// let permissions = metadata.permissions();
///
- /// println!("permissions: {}", permissions.mode());
+ /// println!("permissions: {:o}", permissions.mode());
/// Ok(()) }
/// ```
#[stable(feature = "fs_ext", since = "1.1.0")]
debug!("find_width_of_character_at_span: local_begin=`{:?}`, local_end=`{:?}`",
local_begin, local_end);
+ if local_begin.sf.start_pos != local_end.sf.start_pos {
+ debug!("find_width_of_character_at_span: begin and end are in different files");
+ return 1;
+ }
+
let start_index = local_begin.pos.to_usize();
let end_index = local_end.pos.to_usize();
debug!("find_width_of_character_at_span: start_index=`{:?}`, end_index=`{:?}`",
# Checks that all the targets returned by `rustc --print target-list` are valid
# target specifications
-# TODO remove the '*ios*' case when rust-lang/rust#29812 is fixed
all:
for target in $(shell $(BARE_RUSTC) --print target-list); do \
- case $$target in \
- *ios*) \
- ;; \
- *) \
- $(BARE_RUSTC) --target $$target --print sysroot \
- ;; \
- esac \
+ $(BARE_RUSTC) --target $$target --print sysroot; \
done
use std::rc::Rc;
#[derive(Clone)]
-enum CachedMir<'mir> {
+enum Cached<'mir> {
Ref(&'mir String),
Owned(Rc<String>),
}
-impl<'mir> CachedMir<'mir> {
+impl<'mir> Cached<'mir> {
fn get_ref<'a>(&'a self) -> &'a String {
match *self {
- CachedMir::Ref(r) => r,
- CachedMir::Owned(ref rc) => &rc,
+ Cached::Ref(r) => r,
+ Cached::Owned(ref rc) => &rc,
}
}
}
fn main() {
loop {
|_: [_; break]| {} //~ ERROR: `break` outside of loop
+ //~^ ERROR mismatched types
}
loop {
|_: [_; continue]| {} //~ ERROR: `continue` outside of loop
+ //~^ ERROR mismatched types
}
}
| ^^^^^ cannot break outside of a loop
error[E0268]: `continue` outside of loop
- --> $DIR/array-break-length.rs:7:17
+ --> $DIR/array-break-length.rs:8:17
|
LL | |_: [_; continue]| {}
| ^^^^^^^^ cannot break outside of a loop
-error: aborting due to 2 previous errors
+error[E0308]: mismatched types
+ --> $DIR/array-break-length.rs:3:9
+ |
+LL | |_: [_; break]| {}
+ | ^^^^^^^^^^^^^^^^^^ expected (), found closure
+ |
+ = note: expected type `()`
+ found type `[closure@$DIR/array-break-length.rs:3:9: 3:27]`
+
+error[E0308]: mismatched types
+ --> $DIR/array-break-length.rs:8:9
+ |
+LL | |_: [_; continue]| {}
+ | ^^^^^^^^^^^^^^^^^^^^^ expected (), found closure
+ |
+ = note: expected type `()`
+ found type `[closure@$DIR/array-break-length.rs:8:9: 8:30]`
+
+error: aborting due to 4 previous errors
-For more information about this error, try `rustc --explain E0268`.
+Some errors have detailed explanations: E0268, E0308.
+For more information about an error, try `rustc --explain E0268`.
|_: [_; continue]| {}; //~ ERROR: `continue` outside of loop
while |_: [_; continue]| {} {} //~ ERROR: `continue` outside of loop
+ //~^ ERROR mismatched types
while |_: [_; break]| {} {} //~ ERROR: `break` outside of loop
+ //~^ ERROR mismatched types
}
| ^^^^^^^^ cannot break outside of a loop
error[E0268]: `break` outside of loop
- --> $DIR/closure-array-break-length.rs:6:19
+ --> $DIR/closure-array-break-length.rs:7:19
|
LL | while |_: [_; break]| {} {}
| ^^^^^ cannot break outside of a loop
-error: aborting due to 3 previous errors
+error[E0308]: mismatched types
+ --> $DIR/closure-array-break-length.rs:4:11
+ |
+LL | while |_: [_; continue]| {} {}
+ | ^^^^^^^^^^^^^^^^^^^^^ expected bool, found closure
+ |
+ = note: expected type `bool`
+ found type `[closure@$DIR/closure-array-break-length.rs:4:11: 4:32]`
+
+error[E0308]: mismatched types
+ --> $DIR/closure-array-break-length.rs:7:11
+ |
+LL | while |_: [_; break]| {} {}
+ | ^^^^^^^^^^^^^^^^^^ expected bool, found closure
+ |
+ = note: expected type `bool`
+ found type `[closure@$DIR/closure-array-break-length.rs:7:11: 7:29]`
+
+error: aborting due to 5 previous errors
-For more information about this error, try `rustc --explain E0268`.
+Some errors have detailed explanations: E0268, E0308.
+For more information about an error, try `rustc --explain E0268`.
--- /dev/null
+// run-pass
+
+#![feature(const_generics)]
+//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
+
+pub trait Foo {
+ fn foo(&self);
+}
+
+impl<T, const N: usize> Foo for [T; N] {
+ fn foo(&self) {
+ let _ = &self;
+ }
+}
+
+fn main() {}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/broken-mir-1.rs:3:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+
--- /dev/null
+#![feature(const_generics)]
+//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
+
+use std::fmt::Debug;
+
+#[derive(Debug)]
+struct S<T: Debug, const N: usize>([T; N]); //~ ERROR `[T; _]` doesn't implement `std::fmt::Debug`
+
+fn main() {}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/broken-mir-2.rs:1:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+
+error[E0277]: `[T; _]` doesn't implement `std::fmt::Debug`
+ --> $DIR/broken-mir-2.rs:7:36
+ |
+LL | struct S<T: Debug, const N: usize>([T; N]);
+ | ^^^^^^ `[T; _]` cannot be formatted using `{:?}` because it doesn't implement `std::fmt::Debug`
+ |
+ = help: the trait `std::fmt::Debug` is not implemented for `[T; _]`
+ = note: required because of the requirements on the impl of `std::fmt::Debug` for `&[T; _]`
+ = note: required for the cast to the object type `dyn std::fmt::Debug`
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0277`.
fn main() {
let _ = Foo::<3>([1, 2, 3]); //~ ERROR type annotations needed
+ //~^ ERROR mismatched types
}
LL | let _ = Foo::<3>([1, 2, 3]);
| ^ cannot infer type for `{integer}`
-error: aborting due to previous error
+error[E0308]: mismatched types
+ --> $DIR/cannot-infer-type-for-const-param.rs:10:22
+ |
+LL | let _ = Foo::<3>([1, 2, 3]);
+ | ^^^^^^^^^ expected `3`, found `3usize`
+ |
+ = note: expected type `[u8; _]`
+ found type `[u8; 3]`
+
+error: aborting due to 2 previous errors
-For more information about this error, try `rustc --explain E0282`.
+Some errors have detailed explanations: E0282, E0308.
+For more information about an error, try `rustc --explain E0282`.
--- /dev/null
+// run-pass
+
+#![feature(const_generics)]
+//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
+
+use std::fmt::Display;
+
+fn print_slice<T: Display, const N: usize>(slice: &[T; N]) {
+ for x in slice.iter() {
+ println!("{}", x);
+ }
+}
+
+fn main() {
+ print_slice(&[1, 2, 3]);
+}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/fn-taking-const-generic-array.rs:3:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+
--- /dev/null
+// run-pass
+
+#![feature(const_generics)]
+//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
+
+use std::fmt;
+
+struct Array<T, const N: usize>([T; N]);
+
+impl<T: fmt::Debug, const N: usize> fmt::Debug for Array<T, {N}> {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ f.debug_list().entries(self.0.iter()).finish()
+ }
+}
+
+fn main() {
+ assert_eq!(format!("{:?}", Array([1, 2, 3])), "[1, 2, 3]");
+}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/uninferred-consts-during-codegen-1.rs:3:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+
--- /dev/null
+// run-pass
+
+#![feature(const_generics)]
+//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
+
+use std::fmt;
+
+struct Array<T>(T);
+
+impl<T: fmt::Debug, const N: usize> fmt::Debug for Array<[T; N]> {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ f.debug_list().entries((&self.0 as &[T]).iter()).finish()
+ }
+}
+
+fn main() {
+ assert_eq!(format!("{:?}", Array([1, 2, 3])), "[1, 2, 3]");
+}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/uninferred-consts-during-codegen-2.rs:3:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+
const IDX: usize = 3;
const VAL: i32 = ARR[IDX];
const BONG: [i32; (ARR[0] - 41) as usize] = [5];
-const BLUB: [i32; (ARR[0] - 40) as usize] = [5]; //~ ERROR: mismatched types
-const BOO: [i32; (ARR[0] - 41) as usize] = [5, 99]; //~ ERROR: mismatched types
+const BLUB: [i32; (ARR[0] - 40) as usize] = [5];
+//~^ ERROR: mismatched types
+//~| expected an array with a fixed size of 2 elements, found one with 1 element
+const BOO: [i32; (ARR[0] - 41) as usize] = [5, 99];
+//~^ ERROR: mismatched types
+//~| expected an array with a fixed size of 1 element, found one with 2 elements
fn main() {
let _ = VAL;
--> $DIR/const-array-oob-arith.rs:7:45
|
LL | const BLUB: [i32; (ARR[0] - 40) as usize] = [5];
- | ^^^ expected an array with a fixed size of 2 elements, found one with 1 elements
+ | ^^^ expected an array with a fixed size of 2 elements, found one with 1 element
|
= note: expected type `[i32; 2]`
found type `[i32; 1]`
error[E0308]: mismatched types
- --> $DIR/const-array-oob-arith.rs:8:44
+ --> $DIR/const-array-oob-arith.rs:10:44
|
LL | const BOO: [i32; (ARR[0] - 41) as usize] = [5, 99];
- | ^^^^^^^ expected an array with a fixed size of 1 elements, found one with 2 elements
+ | ^^^^^^^ expected an array with a fixed size of 1 element, found one with 2 elements
|
= note: expected type `[i32; 1]`
found type `[i32; 2]`
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro_derive(Foo)]
-pub fn derive_foo(input: TokenStream) -> TokenStream {
- input
-}
-
-#[proc_macro_derive(Bar)]
-pub fn derive_bar(input: TokenStream) -> TokenStream {
- panic!("lolnope");
-}
-
-#[proc_macro_derive(WithHelper, attributes(helper))]
-pub fn with_helper(input: TokenStream) -> TokenStream {
- TokenStream::new()
-}
-
-#[proc_macro_attribute]
-pub fn helper(_: TokenStream, input: TokenStream) -> TokenStream {
- input
-}
+++ /dev/null
-// compile-pass
-// aux-build:plugin.rs
-
-extern crate plugin;
-
-mod inner {
- use plugin::WithHelper;
-
- #[derive(WithHelper)]
- struct S;
-}
-
-fn main() {}
+++ /dev/null
-// aux-build:plugin.rs
-
-#[macro_use(WithHelper)]
-extern crate plugin;
-
-use plugin::helper;
-
-#[derive(WithHelper)]
-#[helper] //~ ERROR `helper` is ambiguous
-struct S;
-
-fn main() {}
+++ /dev/null
-error[E0659]: `helper` is ambiguous (derive helper attribute vs any other name)
- --> $DIR/helper-attr-blocked-by-import-ambig.rs:9:3
- |
-LL | #[helper]
- | ^^^^^^ ambiguous name
- |
-note: `helper` could refer to the derive helper attribute defined here
- --> $DIR/helper-attr-blocked-by-import-ambig.rs:8:10
- |
-LL | #[derive(WithHelper)]
- | ^^^^^^^^^^
-note: `helper` could also refer to the attribute macro imported here
- --> $DIR/helper-attr-blocked-by-import-ambig.rs:6:5
- |
-LL | use plugin::helper;
- | ^^^^^^^^^^^^^^
- = help: use `crate::helper` to refer to this attribute macro unambiguously
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0659`.
+++ /dev/null
-// compile-pass
-// aux-build:plugin.rs
-
-#[macro_use(WithHelper)]
-extern crate plugin;
-
-use self::one::*;
-use self::two::*;
-
-mod helper {}
-
-mod one {
- use helper;
-
- #[derive(WithHelper)]
- #[helper]
- struct One;
-}
-
-mod two {
- use helper;
-
- #[derive(WithHelper)]
- #[helper]
- struct Two;
-}
-
-fn main() {}
+++ /dev/null
-// aux-build:plugin.rs
-
-
-#[macro_use] extern crate plugin;
-
-#[derive(Foo, Bar)] //~ ERROR proc-macro derive panicked
-struct Baz {
- a: i32,
- b: i32,
-}
-
-fn main() {}
+++ /dev/null
-error: proc-macro derive panicked
- --> $DIR/issue-36935.rs:6:15
- |
-LL | #[derive(Foo, Bar)]
- | ^^^
- |
- = help: message: lolnope
-
-error: aborting due to previous error
-
--- /dev/null
+// Test to show what happens if we were not careful and allowed invariant
+// lifetimes to escape through an impl trait.
+//
+// Specifically, we swap a long-lived and a short-lived reference, giving us
+// a dangling pointer.
+
+use std::cell::RefCell;
+use std::rc::Rc;
+
+trait Swap: Sized {
+ fn swap(self, other: Self);
+}
+
+impl<T> Swap for &mut T {
+ fn swap(self, other: Self) {
+ std::mem::swap(self, other);
+ }
+}
+
+impl<T> Swap for Rc<RefCell<T>> {
+ fn swap(self, other: Self) {
+ <RefCell<T>>::swap(&self, &other);
+ }
+}
+
+// Here we are hiding `'b`, making the caller believe that `&'a mut &'s T` and
+// `&'a mut &'l T` are the same type.
+fn hide_ref<'a, 'b, T: 'static>(x: &'a mut &'b T) -> impl Swap + 'a {
+ //~^ ERROR hidden type
+ x
+}
+
+fn dangle_ref() -> &'static [i32; 3] {
+ let mut res = &[4, 5, 6];
+ let x = [1, 2, 3];
+ hide_ref(&mut res).swap(hide_ref(&mut &x));
+ res
+}
+
+// Here we are hiding `'b`, making the caller believe that `Rc<RefCell<&'s T>>`
+// and `Rc<RefCell<&'l T>>` are the same type.
+//
+// This is different from the previous example because the concrete return type
+// only has a single lifetime.
+fn hide_rc_refcell<'a, 'b: 'a, T: 'static>(x: Rc<RefCell<&'b T>>) -> impl Swap + 'a {
+ //~^ ERROR hidden type
+ x
+}
+
+fn dangle_rc_refcell() -> &'static [i32; 3] {
+ let long = Rc::new(RefCell::new(&[4, 5, 6]));
+ let x = [1, 2, 3];
+ let short = Rc::new(RefCell::new(&x));
+ hide_rc_refcell(long.clone()).swap(hide_rc_refcell(short));
+ let res: &'static [i32; 3] = *long.borrow();
+ res
+}
+
+fn main() {
+ // both will print nonsense values.
+ println!("{:?}", dangle_ref());
+ println!("{:?}", dangle_rc_refcell())
+}
--- /dev/null
+error[E0700]: hidden type for `impl Trait` captures lifetime that does not appear in bounds
+ --> $DIR/hidden-lifetimes.rs:28:54
+ |
+LL | fn hide_ref<'a, 'b, T: 'static>(x: &'a mut &'b T) -> impl Swap + 'a {
+ | ^^^^^^^^^^^^^^
+ |
+note: hidden type `&'a mut &'b T` captures the lifetime 'b as defined on the function body at 28:17
+ --> $DIR/hidden-lifetimes.rs:28:17
+ |
+LL | fn hide_ref<'a, 'b, T: 'static>(x: &'a mut &'b T) -> impl Swap + 'a {
+ | ^^
+
+error[E0700]: hidden type for `impl Trait` captures lifetime that does not appear in bounds
+ --> $DIR/hidden-lifetimes.rs:45:70
+ |
+LL | fn hide_rc_refcell<'a, 'b: 'a, T: 'static>(x: Rc<RefCell<&'b T>>) -> impl Swap + 'a {
+ | ^^^^^^^^^^^^^^
+ |
+note: hidden type `std::rc::Rc<std::cell::RefCell<&'b T>>` captures the lifetime 'b as defined on the function body at 45:24
+ --> $DIR/hidden-lifetimes.rs:45:24
+ |
+LL | fn hide_rc_refcell<'a, 'b: 'a, T: 'static>(x: Rc<RefCell<&'b T>>) -> impl Swap + 'a {
+ | ^^
+
+error: aborting due to 2 previous errors
+
+For more information about this error, try `rustc --explain E0700`.
--- /dev/null
+// Test that multiple lifetimes are allowed in impl trait types.
+// compile-pass
+
+trait X<'x>: Sized {}
+
+impl<U> X<'_> for U {}
+
+fn multiple_lifetimes<'a, 'b, T: 'static>(x: &'a mut &'b T) -> impl X<'b> + 'a {
+ x
+}
+
+fn main() {}
--- /dev/null
+// compile-pass
+
+// This test contains the same code as ui/symbol-names/issue-60925.rs, but it checks that the
+// reproduction compiles successfully and doesn't segfault, whereas that test only checks that
+// the symbol-mangling fix produces the correct result.
+
+fn dummy() {}
+
+mod llvm {
+ pub(crate) struct Foo;
+}
+mod foo {
+ pub(crate) struct Foo<T>(T);
+
+ impl Foo<::llvm::Foo> {
+ pub(crate) fn foo() {
+ for _ in 0..0 {
+ for _ in &[::dummy()] {
+ ::dummy();
+ ::dummy();
+ ::dummy();
+ }
+ }
+ }
+ }
+
+ pub(crate) fn foo() {
+ Foo::foo();
+ Foo::foo();
+ }
+}
+
+pub fn foo() {
+ foo::foo();
+}
+
+fn main() {}
-// aux-build:attr_proc_macro.rs
+// aux-build:test-macros.rs
-extern crate attr_proc_macro;
-use attr_proc_macro::*;
+#[macro_use]
+extern crate test_macros;
-#[attr_proc_macro] // OK
+#[identity_attr] // OK
#[derive(Clone)]
struct Before;
#[derive(Clone)]
-#[attr_proc_macro] //~ ERROR macro attributes must be placed before `#[derive]`
+#[identity_attr] //~ ERROR macro attributes must be placed before `#[derive]`
struct After;
fn main() {}
error: macro attributes must be placed before `#[derive]`
--> $DIR/attribute-order-restricted.rs:11:1
|
-LL | #[attr_proc_macro]
- | ^^^^^^^^^^^^^^^^^^
+LL | #[identity_attr]
+ | ^^^^^^^^^^^^^^^^
error: aborting due to previous error
-// aux-build:attribute-with-error.rs
+// aux-build:test-macros.rs
#![feature(custom_inner_attributes)]
-extern crate attribute_with_error;
+#[macro_use]
+extern crate test_macros;
-use attribute_with_error::foo;
-
-#[foo]
+#[recollect_attr]
fn test1() {
let a: i32 = "foo";
//~^ ERROR: mismatched types
}
fn test2() {
- #![foo]
+ #![recollect_attr]
// FIXME: should have a type error here and assert it works but it doesn't
}
trait A {
- // FIXME: should have a #[foo] attribute here and assert that it works
+ // FIXME: should have a #[recollect_attr] attribute here and assert that it works
fn foo(&self) {
let a: i32 = "foo";
//~^ ERROR: mismatched types
struct B;
impl A for B {
- #[foo]
+ #[recollect_attr]
fn foo(&self) {
let a: i32 = "foo";
//~^ ERROR: mismatched types
}
}
-#[foo]
+#[recollect_attr]
fn main() {
}
error[E0308]: mismatched types
- --> $DIR/attribute-with-error.rs:11:18
+ --> $DIR/attribute-with-error.rs:10:18
|
LL | let a: i32 = "foo";
| ^^^^^ expected i32, found reference
found type `&'static str`
error[E0308]: mismatched types
- --> $DIR/attribute-with-error.rs:13:18
+ --> $DIR/attribute-with-error.rs:12:18
|
LL | let b: i32 = "f'oo";
| ^^^^^^ expected i32, found reference
found type `&'static str`
error[E0308]: mismatched types
- --> $DIR/attribute-with-error.rs:26:22
+ --> $DIR/attribute-with-error.rs:25:22
|
LL | let a: i32 = "foo";
| ^^^^^ expected i32, found reference
found type `&'static str`
error[E0308]: mismatched types
- --> $DIR/attribute-with-error.rs:36:22
+ --> $DIR/attribute-with-error.rs:35:22
|
LL | let a: i32 = "foo";
| ^^^^^ expected i32, found reference
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro_attribute]
-pub fn attr_proc_macro(_: TokenStream, input: TokenStream) -> TokenStream {
- input
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro_attribute]
-pub fn foo(_attr: TokenStream, input: TokenStream) -> TokenStream {
- input.into_iter().collect()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro]
-pub fn bang_proc_macro(input: TokenStream) -> TokenStream {
- input
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-use proc_macro::TokenStream;
-
-#[proc_macro_derive(A)]
-pub fn derive_a(_: TokenStream) -> TokenStream {
- "".parse().unwrap()
-}
-
-#[proc_macro_derive(B)]
-pub fn derive_b(_: TokenStream) -> TokenStream {
- "".parse().unwrap()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro_derive(A)]
-pub fn derive_a(input: TokenStream) -> TokenStream {
- "".parse().unwrap()
-}
#[macro_export]
-macro_rules! my_attr { () => () }
+macro_rules! empty_helper { () => () }
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-use proc_macro::*;
-
-#[proc_macro_derive(MyTrait, attributes(my_attr))]
-pub fn foo(_: TokenStream) -> TokenStream {
- TokenStream::new()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-use proc_macro::*;
-
-#[proc_macro_attribute]
-pub fn my_attr(_: TokenStream, input: TokenStream) -> TokenStream {
- input
-}
-
-#[proc_macro_derive(MyTrait, attributes(my_attr))]
-pub fn derive(input: TokenStream) -> TokenStream {
- TokenStream::new()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro_derive(A)]
-pub fn derive_a(_input: TokenStream) -> TokenStream {
- panic!("nope!");
-}
#[macro_export]
macro_rules! external {
() => {
- dollar_crate::m! {
+ print_bang! {
struct M($crate::S);
}
- #[dollar_crate::a]
+ #[print_attr]
struct A($crate::S);
- #[derive(dollar_crate::d)]
+ #[derive(Print)]
struct D($crate::S);
};
}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-use proc_macro::TokenStream;
-
-#[proc_macro]
-pub fn m_empty(input: TokenStream) -> TokenStream {
- println!("PROC MACRO INPUT (PRETTY-PRINTED): {}", input);
- println!("PROC MACRO INPUT: {:#?}", input);
- TokenStream::new()
-}
-
-#[proc_macro]
-pub fn m(input: TokenStream) -> TokenStream {
- println!("PROC MACRO INPUT (PRETTY-PRINTED): {}", input);
- println!("PROC MACRO INPUT: {:#?}", input);
- input.into_iter().collect()
-}
-
-#[proc_macro_attribute]
-pub fn a(_args: TokenStream, input: TokenStream) -> TokenStream {
- println!("ATTRIBUTE INPUT (PRETTY-PRINTED): {}", input);
- println!("ATTRIBUTE INPUT: {:#?}", input);
- input.into_iter().collect()
-}
-
-#[proc_macro_derive(d)]
-pub fn d(input: TokenStream) -> TokenStream {
- println!("DERIVE INPUT (PRETTY-PRINTED): {}", input);
- println!("DERIVE INPUT: {:#?}", input);
- input.into_iter().collect()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-use proc_macro::TokenStream;
-
-#[proc_macro_attribute]
-pub fn emit_unchanged(_args: TokenStream, input: TokenStream) -> TokenStream {
- input
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::*;
-
-#[proc_macro_derive(MyTrait, attributes(my_attr))]
-pub fn foo(_: TokenStream) -> TokenStream {
- TokenStream::new()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-use proc_macro::*;
-
-#[proc_macro_attribute]
-pub fn doit(_: TokenStream, input: TokenStream) -> TokenStream {
- input.into_iter().collect()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::*;
-
-#[proc_macro_attribute]
-pub fn foo(_: TokenStream, item: TokenStream) -> TokenStream {
- item.into_iter().collect()
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::*;
-
-#[proc_macro]
-pub fn m(a: TokenStream) -> TokenStream {
- a
-}
-
-#[proc_macro_attribute]
-pub fn a(_a: TokenStream, b: TokenStream) -> TokenStream {
- b
-}
+++ /dev/null
-// force-host
-// no-prefer-dynamic
-
-#![crate_type = "proc-macro"]
-
-extern crate proc_macro;
-
-use proc_macro::TokenStream;
-
-#[proc_macro_attribute]
-pub fn foo(_: TokenStream, input: TokenStream) -> TokenStream {
- input.into_iter().collect()
-}
// force-host
// no-prefer-dynamic
+// Proc macros commonly used by tests.
+// `panic`/`print` become `panic_bang`/`print_bang` to avoid conflicts with the standard macros.
+
#![crate_type = "proc-macro"]
extern crate proc_macro;
-
use proc_macro::TokenStream;
+// Macros that return an empty token stream.
+
+#[proc_macro]
+pub fn empty(_: TokenStream) -> TokenStream {
+ TokenStream::new()
+}
+
#[proc_macro_attribute]
-pub fn nop_attr(_attr: TokenStream, input: TokenStream) -> TokenStream {
- assert!(_attr.to_string().is_empty());
- input
+pub fn empty_attr(_: TokenStream, _: TokenStream) -> TokenStream {
+ TokenStream::new()
+}
+
+#[proc_macro_derive(Empty, attributes(empty_helper))]
+pub fn empty_derive(_: TokenStream) -> TokenStream {
+ TokenStream::new()
+}
+
+// Macros that panic.
+
+#[proc_macro]
+pub fn panic_bang(_: TokenStream) -> TokenStream {
+ panic!("panic-bang");
}
#[proc_macro_attribute]
-pub fn no_output(_attr: TokenStream, _input: TokenStream) -> TokenStream {
- assert!(_attr.to_string().is_empty());
- assert!(!_input.to_string().is_empty());
- "".parse().unwrap()
+pub fn panic_attr(_: TokenStream, _: TokenStream) -> TokenStream {
+ panic!("panic-attr");
+}
+
+#[proc_macro_derive(Panic, attributes(panic_helper))]
+pub fn panic_derive(_: TokenStream) -> TokenStream {
+ panic!("panic-derive");
}
+// Macros that return the input stream.
+
#[proc_macro]
-pub fn emit_input(input: TokenStream) -> TokenStream {
+pub fn identity(input: TokenStream) -> TokenStream {
input
}
+
+#[proc_macro_attribute]
+pub fn identity_attr(_: TokenStream, input: TokenStream) -> TokenStream {
+ input
+}
+
+#[proc_macro_derive(Identity, attributes(identity_helper))]
+pub fn identity_derive(input: TokenStream) -> TokenStream {
+ input
+}
+
+// Macros that iterate and re-collect the input stream.
+
+#[proc_macro]
+pub fn recollect(input: TokenStream) -> TokenStream {
+ input.into_iter().collect()
+}
+
+#[proc_macro_attribute]
+pub fn recollect_attr(_: TokenStream, input: TokenStream) -> TokenStream {
+ input.into_iter().collect()
+}
+
+#[proc_macro_derive(Recollect, attributes(recollect_helper))]
+pub fn recollect_derive(input: TokenStream) -> TokenStream {
+ input.into_iter().collect()
+}
+
+// Macros that print their input in the original and re-collected forms (if they differ).
+
+fn print_helper(input: TokenStream, kind: &str) -> TokenStream {
+ let input_display = format!("{}", input);
+ let input_debug = format!("{:#?}", input);
+ let recollected = input.into_iter().collect();
+ let recollected_display = format!("{}", recollected);
+ let recollected_debug = format!("{:#?}", recollected);
+ println!("PRINT-{} INPUT (DISPLAY): {}", kind, input_display);
+ if recollected_display != input_display {
+ println!("PRINT-{} RE-COLLECTED (DISPLAY): {}", kind, recollected_display);
+ }
+ println!("PRINT-{} INPUT (DEBUG): {}", kind, input_debug);
+ if recollected_debug != input_debug {
+ println!("PRINT-{} RE-COLLECTED (DEBUG): {}", kind, recollected_debug);
+ }
+ recollected
+}
+
+#[proc_macro]
+pub fn print_bang(input: TokenStream) -> TokenStream {
+ print_helper(input, "BANG")
+}
+
+#[proc_macro_attribute]
+pub fn print_attr(_: TokenStream, input: TokenStream) -> TokenStream {
+ print_helper(input, "ATTR")
+}
+
+#[proc_macro_derive(Print, attributes(print_helper))]
+pub fn print_derive(input: TokenStream) -> TokenStream {
+ print_helper(input, "DERIVE")
+}
// compile-pass
-// aux-build:derive-helper-shadowed.rs
+// aux-build:test-macros.rs
// aux-build:derive-helper-shadowed-2.rs
#[macro_use]
-extern crate derive_helper_shadowed;
-#[macro_use(my_attr)]
+extern crate test_macros;
+#[macro_use(empty_helper)]
extern crate derive_helper_shadowed_2;
-macro_rules! my_attr { () => () }
+macro_rules! empty_helper { () => () }
-#[derive(MyTrait)]
-#[my_attr] // OK
+#[derive(Empty)]
+#[empty_helper] // OK
struct S;
fn main() {}
-// aux-build:derive-helper-shadowing.rs
+// aux-build:test-macros.rs
-extern crate derive_helper_shadowing;
-use derive_helper_shadowing::*;
+#[macro_use]
+extern crate test_macros;
-#[my_attr] //~ ERROR `my_attr` is ambiguous
-#[derive(MyTrait)]
+use test_macros::empty_attr as empty_helper;
+
+#[empty_helper] //~ ERROR `empty_helper` is ambiguous
+#[derive(Empty)]
struct S {
// FIXME No ambiguity, attributes in non-macro positions are not resolved properly
- #[my_attr]
+ #[empty_helper]
field: [u8; {
// FIXME No ambiguity, derive helpers are not put into scope for non-attributes
- use my_attr;
+ use empty_helper;
// FIXME No ambiguity, derive helpers are not put into scope for inner items
- #[my_attr]
+ #[empty_helper]
struct U;
mod inner {
- #[my_attr] //~ ERROR attribute `my_attr` is currently unknown
+ #[empty_helper] //~ ERROR attribute `empty_helper` is currently unknown
struct V;
}
-error[E0658]: The attribute `my_attr` is currently unknown to the compiler and may have meaning added to it in the future
- --> $DIR/derive-helper-shadowing.rs:20:15
+error[E0658]: The attribute `empty_helper` is currently unknown to the compiler and may have meaning added to it in the future
+ --> $DIR/derive-helper-shadowing.rs:22:15
|
-LL | #[my_attr]
- | ^^^^^^^
+LL | #[empty_helper]
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29642
= help: add #![feature(custom_attribute)] to the crate attributes to enable
-error[E0659]: `my_attr` is ambiguous (derive helper attribute vs any other name)
- --> $DIR/derive-helper-shadowing.rs:6:3
+error[E0659]: `empty_helper` is ambiguous (derive helper attribute vs any other name)
+ --> $DIR/derive-helper-shadowing.rs:8:3
|
-LL | #[my_attr]
- | ^^^^^^^ ambiguous name
+LL | #[empty_helper]
+ | ^^^^^^^^^^^^ ambiguous name
|
-note: `my_attr` could refer to the derive helper attribute defined here
- --> $DIR/derive-helper-shadowing.rs:7:10
+note: `empty_helper` could refer to the derive helper attribute defined here
+ --> $DIR/derive-helper-shadowing.rs:9:10
|
-LL | #[derive(MyTrait)]
- | ^^^^^^^
-note: `my_attr` could also refer to the attribute macro imported here
- --> $DIR/derive-helper-shadowing.rs:4:5
+LL | #[derive(Empty)]
+ | ^^^^^
+note: `empty_helper` could also refer to the attribute macro imported here
+ --> $DIR/derive-helper-shadowing.rs:6:5
|
-LL | use derive_helper_shadowing::*;
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^
- = help: use `crate::my_attr` to refer to this attribute macro unambiguously
+LL | use test_macros::empty_attr as empty_helper;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ = help: use `crate::empty_helper` to refer to this attribute macro unambiguously
error: aborting due to 2 previous errors
--- /dev/null
+// compile-pass
+// aux-build:test-macros.rs
+
+extern crate test_macros;
+
+mod inner {
+ use test_macros::Empty;
+
+ #[derive(Empty)]
+ struct S;
+}
+
+fn main() {}
-// aux-build:derive-a.rs
-
-#![allow(warnings)]
+// aux-build:test-macros.rs
#[macro_use]
-extern crate derive_a;
+extern crate test_macros;
-#[derive_A] //~ ERROR attribute `derive_A` is currently unknown
+#[derive_Empty] //~ ERROR attribute `derive_Empty` is currently unknown
struct A;
fn main() {}
-error[E0658]: The attribute `derive_A` is currently unknown to the compiler and may have meaning added to it in the future
- --> $DIR/derive-still-gated.rs:8:3
+error[E0658]: The attribute `derive_Empty` is currently unknown to the compiler and may have meaning added to it in the future
+ --> $DIR/derive-still-gated.rs:6:3
|
-LL | #[derive_A]
- | ^^^^^^^^ help: a built-in attribute with a similar name exists: `derive`
+LL | #[derive_Empty]
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29642
= help: add #![feature(custom_attribute)] to the crate attributes to enable
// compile-pass
// edition:2018
-// aux-build:dollar-crate.rs
+// aux-build:test-macros.rs
// Anonymize unstable non-dummy spans while still showing dummy spans `0..0`.
// normalize-stdout-test "bytes\([^0]\w*\.\.(\w+)\)" -> "bytes(LO..$1)"
// normalize-stdout-test "bytes\((\w+)\.\.[^0]\w*\)" -> "bytes($1..HI)"
-extern crate dollar_crate;
+#[macro_use]
+extern crate test_macros;
type S = u8;
macro_rules! m {
() => {
- dollar_crate::m_empty! {
+ print_bang! {
struct M($crate::S);
}
- #[dollar_crate::a]
+ #[print_attr]
struct A($crate::S);
};
}
-PROC MACRO INPUT (PRETTY-PRINTED): struct M ( $crate :: S ) ;
-PROC MACRO INPUT: TokenStream [
+PRINT-BANG INPUT (DISPLAY): struct M ( $crate :: S ) ;
+PRINT-BANG INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #2 bytes(LO..HI),
span: #2 bytes(LO..HI),
},
]
-ATTRIBUTE INPUT (PRETTY-PRINTED): struct A(crate::S);
-ATTRIBUTE INPUT: TokenStream [
+PRINT-ATTR INPUT (DISPLAY): struct A(crate::S);
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A ( $crate :: S ) ;
+PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #2 bytes(LO..HI),
// edition:2018
-// aux-build:dollar-crate.rs
+// aux-build:test-macros.rs
// aux-build:dollar-crate-external.rs
// Anonymize unstable non-dummy spans while still showing dummy spans `0..0`.
// normalize-stdout-test "bytes\([^0]\w*\.\.(\w+)\)" -> "bytes(LO..$1)"
// normalize-stdout-test "bytes\((\w+)\.\.[^0]\w*\)" -> "bytes($1..HI)"
-extern crate dollar_crate;
+#[macro_use]
+extern crate test_macros;
extern crate dollar_crate_external;
type S = u8;
mod local {
- use crate::dollar_crate;
-
macro_rules! local {
() => {
- dollar_crate::m! {
+ print_bang! {
struct M($crate::S);
}
- #[dollar_crate::a]
+ #[print_attr]
struct A($crate::S);
- #[derive(dollar_crate::d)]
+ #[derive(Print)]
struct D($crate::S); //~ ERROR the name `D` is defined multiple times
};
}
error[E0428]: the name `D` is defined multiple times
- --> $DIR/dollar-crate.rs:27:13
+ --> $DIR/dollar-crate.rs:26:13
|
LL | struct D($crate::S);
| ^^^^^^^^^^^^^^^^^^^^
= note: `D` must be defined only once in the type namespace of this module
error[E0428]: the name `D` is defined multiple times
- --> $DIR/dollar-crate.rs:37:5
+ --> $DIR/dollar-crate.rs:36:5
|
LL | dollar_crate_external::external!();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-PROC MACRO INPUT (PRETTY-PRINTED): struct M ( $crate :: S ) ;
-PROC MACRO INPUT: TokenStream [
+PRINT-BANG INPUT (DISPLAY): struct M ( $crate :: S ) ;
+PRINT-BANG INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #2 bytes(LO..HI),
span: #2 bytes(LO..HI),
},
]
-ATTRIBUTE INPUT (PRETTY-PRINTED): struct A(crate::S);
-ATTRIBUTE INPUT: TokenStream [
+PRINT-ATTR INPUT (DISPLAY): struct A(crate::S);
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A ( $crate :: S ) ;
+PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #2 bytes(LO..HI),
span: #2 bytes(LO..HI),
},
]
-DERIVE INPUT (PRETTY-PRINTED): struct D(crate::S);
-DERIVE INPUT: TokenStream [
+PRINT-DERIVE INPUT (DISPLAY): struct D(crate::S);
+PRINT-DERIVE RE-COLLECTED (DISPLAY): struct D ( $crate :: S ) ;
+PRINT-DERIVE INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #2 bytes(LO..HI),
span: #2 bytes(LO..HI),
},
]
-PROC MACRO INPUT (PRETTY-PRINTED): struct M ( $crate :: S ) ;
-PROC MACRO INPUT: TokenStream [
+PRINT-BANG INPUT (DISPLAY): struct M ( $crate :: S ) ;
+PRINT-BANG INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #10 bytes(LO..HI),
span: #10 bytes(LO..HI),
},
]
-ATTRIBUTE INPUT (PRETTY-PRINTED): struct A(::dollar_crate_external::S);
-ATTRIBUTE INPUT: TokenStream [
+PRINT-ATTR INPUT (DISPLAY): struct A(::dollar_crate_external::S);
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A ( $crate :: S ) ;
+PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #10 bytes(LO..HI),
span: #10 bytes(LO..HI),
},
]
-DERIVE INPUT (PRETTY-PRINTED): struct D(::dollar_crate_external::S);
-DERIVE INPUT: TokenStream [
+PRINT-DERIVE INPUT (DISPLAY): struct D(::dollar_crate_external::S);
+PRINT-DERIVE RE-COLLECTED (DISPLAY): struct D ( $crate :: S ) ;
+PRINT-DERIVE INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #10 bytes(LO..HI),
--- /dev/null
+// aux-build:test-macros.rs
+
+#[macro_use(Empty)]
+extern crate test_macros;
+use test_macros::empty_attr as empty_helper;
+
+#[derive(Empty)]
+#[empty_helper] //~ ERROR `empty_helper` is ambiguous
+struct S;
+
+fn main() {}
--- /dev/null
+error[E0659]: `empty_helper` is ambiguous (derive helper attribute vs any other name)
+ --> $DIR/helper-attr-blocked-by-import-ambig.rs:8:3
+ |
+LL | #[empty_helper]
+ | ^^^^^^^^^^^^ ambiguous name
+ |
+note: `empty_helper` could refer to the derive helper attribute defined here
+ --> $DIR/helper-attr-blocked-by-import-ambig.rs:7:10
+ |
+LL | #[derive(Empty)]
+ | ^^^^^
+note: `empty_helper` could also refer to the attribute macro imported here
+ --> $DIR/helper-attr-blocked-by-import-ambig.rs:5:5
+ |
+LL | use test_macros::empty_attr as empty_helper;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ = help: use `crate::empty_helper` to refer to this attribute macro unambiguously
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0659`.
--- /dev/null
+// compile-pass
+// aux-build:test-macros.rs
+
+#[macro_use(Empty)]
+extern crate test_macros;
+
+use self::one::*;
+use self::two::*;
+
+mod empty_helper {}
+
+mod one {
+ use empty_helper;
+
+ #[derive(Empty)]
+ #[empty_helper]
+ struct One;
+}
+
+mod two {
+ use empty_helper;
+
+ #[derive(Empty)]
+ #[empty_helper]
+ struct Two;
+}
+
+fn main() {}
-// aux-build:derive-a.rs
+// aux-build:test-macros.rs
-#![allow(warnings)]
+extern crate test_macros;
-#[macro_use]
-extern crate derive_a;
-
-use derive_a::derive_a;
-//~^ ERROR: unresolved import `derive_a::derive_a`
+use test_macros::empty_derive;
+//~^ ERROR: unresolved import `test_macros::empty_derive`
fn main() {}
-error[E0432]: unresolved import `derive_a::derive_a`
- --> $DIR/import.rs:8:5
+error[E0432]: unresolved import `test_macros::empty_derive`
+ --> $DIR/import.rs:5:5
|
-LL | use derive_a::derive_a;
- | ^^^^^^^^^^^^^^^^^^ no `derive_a` in the root
+LL | use test_macros::empty_derive;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ no `empty_derive` in the root
error: aborting due to previous error
--- /dev/null
+// aux-build:test-macros.rs
+
+#[macro_use]
+extern crate test_macros;
+
+#[derive(Identity, Panic)] //~ ERROR proc-macro derive panicked
+struct Baz {
+ a: i32,
+ b: i32,
+}
+
+fn main() {}
--- /dev/null
+error: proc-macro derive panicked
+ --> $DIR/issue-36935.rs:6:20
+ |
+LL | #[derive(Identity, Panic)]
+ | ^^^^^
+ |
+ = help: message: panic-derive
+
+error: aborting due to previous error
+
-// aux-build:derive-a-b.rs
+// aux-build:test-macros.rs
#[macro_use]
-extern crate derive_a_b;
+extern crate test_macros;
fn main() {
// Test that constructing the `visible_parent_map` (in `cstore_impl.rs`) does not ICE.
-// aux-build:issue-41211.rs
+// aux-build:test-macros.rs
// FIXME: https://github.com/rust-lang/rust/issues/41430
// This is a temporary regression test for the ICE reported in #41211
#![feature(custom_inner_attributes)]
-#![emit_unchanged]
-//~^ ERROR attribute `emit_unchanged` is currently unknown to the compiler
+#![identity_attr]
+//~^ ERROR attribute `identity_attr` is currently unknown to the compiler
//~| ERROR inconsistent resolution for a macro: first custom attribute, then attribute macro
-extern crate issue_41211;
-use issue_41211::emit_unchanged;
+extern crate test_macros;
+use test_macros::identity_attr;
fn main() {}
-error[E0658]: The attribute `emit_unchanged` is currently unknown to the compiler and may have meaning added to it in the future
+error[E0658]: The attribute `identity_attr` is currently unknown to the compiler and may have meaning added to it in the future
--> $DIR/issue-41211.rs:8:4
|
-LL | #![emit_unchanged]
- | ^^^^^^^^^^^^^^
+LL | #![identity_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29642
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error: inconsistent resolution for a macro: first custom attribute, then attribute macro
--> $DIR/issue-41211.rs:8:4
|
-LL | #![emit_unchanged]
- | ^^^^^^^^^^^^^^
+LL | #![identity_attr]
+ | ^^^^^^^^^^^^^
error: aborting due to 2 previous errors
// compile-pass
-// aux-build:issue-53481.rs
+// aux-build:test-macros.rs
#[macro_use]
-extern crate issue_53481;
+extern crate test_macros;
mod m1 {
- use m2::MyTrait;
+ use m2::Empty;
- #[derive(MyTrait)]
+ #[derive(Empty)]
struct A {}
}
mod m2 {
- pub type MyTrait = u8;
+ pub type Empty = u8;
- #[derive(MyTrait)]
- #[my_attr]
+ #[derive(Empty)]
+ #[empty_helper]
struct B {}
}
-// aux-build:derive-panic.rs
-// compile-flags:--error-format human
+// aux-build:test-macros.rs
#[macro_use]
-extern crate derive_panic;
+extern crate test_macros;
-#[derive(A)]
+#[derive(Panic)]
//~^ ERROR: proc-macro derive panicked
struct Foo;
error: proc-macro derive panicked
- --> $DIR/load-panic.rs:7:10
+ --> $DIR/load-panic.rs:6:10
|
-LL | #[derive(A)]
- | ^
+LL | #[derive(Panic)]
+ | ^^^^^
|
- = help: message: nope!
+ = help: message: panic-derive
error: aborting due to previous error
-// aux-build:macro-brackets.rs
+// aux-build:test-macros.rs
-extern crate macro_brackets as bar;
-use bar::doit;
+#[macro_use]
+extern crate test_macros;
macro_rules! id {
($($t:tt)*) => ($($t)*)
}
-#[doit]
+#[identity_attr]
id![static X: u32 = 'a';]; //~ ERROR: mismatched types
// compile-pass
-// aux-build:attr_proc_macro.rs
+// aux-build:test-macros.rs
-#[macro_use] extern crate attr_proc_macro;
+#[macro_use]
+extern crate test_macros;
-#[attr_proc_macro]
+#[identity_attr]
struct Foo;
fn main() {
// compile-pass
-// aux-build:bang_proc_macro.rs
+// aux-build:test-macros.rs
#![feature(proc_macro_hygiene)]
#[macro_use]
-extern crate bang_proc_macro;
+extern crate test_macros;
fn main() {
- bang_proc_macro!(println!("Hello, world!"));
+ identity!(println!("Hello, world!"));
}
// aux-build:test-macros.rs
// ignore-wasm32
+#[macro_use]
extern crate test_macros;
-use test_macros::{nop_attr, no_output, emit_input};
-
fn main() {
assert_eq!(unsafe { rust_get_test_int() }, 0isize);
assert_eq!(unsafe { rust_dbg_extern_identity_u32(0xDEADBEEF) }, 0xDEADBEEF);
#[link(name = "rust_test_helpers", kind = "static")]
extern {
- #[no_output]
+ #[empty_attr]
//~^ ERROR macro invocations in `extern {}` blocks are experimental
fn some_definitely_unknown_symbol_which_should_be_removed();
- #[nop_attr]
+ #[identity_attr]
//~^ ERROR macro invocations in `extern {}` blocks are experimental
fn rust_get_test_int() -> isize;
- emit_input!(fn rust_dbg_extern_identity_u32(arg: u32) -> u32;);
+ identity!(fn rust_dbg_extern_identity_u32(arg: u32) -> u32;);
//~^ ERROR macro invocations in `extern {}` blocks are experimental
}
error[E0658]: macro invocations in `extern {}` blocks are experimental
- --> $DIR/macros-in-extern.rs:15:5
+ --> $DIR/macros-in-extern.rs:14:5
|
-LL | #[no_output]
- | ^^^^^^^^^^^^
+LL | #[empty_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/49476
= help: add #![feature(macros_in_extern)] to the crate attributes to enable
error[E0658]: macro invocations in `extern {}` blocks are experimental
- --> $DIR/macros-in-extern.rs:19:5
+ --> $DIR/macros-in-extern.rs:18:5
|
-LL | #[nop_attr]
- | ^^^^^^^^^^^
+LL | #[identity_attr]
+ | ^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/49476
= help: add #![feature(macros_in_extern)] to the crate attributes to enable
error[E0658]: macro invocations in `extern {}` blocks are experimental
- --> $DIR/macros-in-extern.rs:23:5
+ --> $DIR/macros-in-extern.rs:22:5
|
-LL | emit_input!(fn rust_dbg_extern_identity_u32(arg: u32) -> u32;);
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+LL | identity!(fn rust_dbg_extern_identity_u32(arg: u32) -> u32;);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/49476
= help: add #![feature(macros_in_extern)] to the crate attributes to enable
-// aux-build:nested-item-spans.rs
+// aux-build:test-macros.rs
-extern crate nested_item_spans;
+#[macro_use]
+extern crate test_macros;
-use nested_item_spans::foo;
-
-#[foo]
+#[recollect_attr]
fn another() {
fn bar() {
let x: u32 = "x"; //~ ERROR: mismatched types
}
fn main() {
- #[foo]
+ #[recollect_attr]
fn bar() {
let x: u32 = "x"; //~ ERROR: mismatched types
}
error[E0308]: mismatched types
- --> $DIR/nested-item-spans.rs:10:22
+ --> $DIR/nested-item-spans.rs:9:22
|
LL | let x: u32 = "x";
| ^^^ expected u32, found reference
found type `&'static str`
error[E0308]: mismatched types
- --> $DIR/nested-item-spans.rs:19:22
+ --> $DIR/nested-item-spans.rs:18:22
|
LL | let x: u32 = "x";
| ^^^ expected u32, found reference
-// aux-build:derive-a.rs
+// aux-build:test-macros.rs
#![feature(rustc_attrs)]
#![warn(unused_extern_crates)]
-extern crate derive_a;
+extern crate test_macros;
//~^ WARN unused extern crate
#[rustc_error]
warning: unused extern crate
--> $DIR/no-macro-use-attr.rs:6:1
|
-LL | extern crate derive_a;
- | ^^^^^^^^^^^^^^^^^^^^^^ help: remove it
+LL | extern crate test_macros;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ help: remove it
|
note: lint level defined here
--> $DIR/no-macro-use-attr.rs:4:9
-// aux-build:proc-macro-gates.rs
+// aux-build:test-macros.rs
// gate-test-proc_macro_hygiene
#![feature(stmt_expr_attributes)]
-extern crate proc_macro_gates as foo;
-
-use foo::*;
+#[macro_use]
+extern crate test_macros;
fn _test_inner() {
- #![a] //~ ERROR: non-builtin inner attributes are unstable
+ #![empty_attr] //~ ERROR: non-builtin inner attributes are unstable
}
-#[a] //~ ERROR: custom attributes cannot be applied to modules
+#[empty_attr] //~ ERROR: custom attributes cannot be applied to modules
mod _test2 {}
mod _test2_inner {
- #![a] //~ ERROR: custom attributes cannot be applied to modules
+ #![empty_attr] //~ ERROR: custom attributes cannot be applied to modules
//~| ERROR: non-builtin inner attributes are unstable
}
-#[a = "y"] //~ ERROR: must only be followed by a delimiter token
+#[empty_attr = "y"] //~ ERROR: must only be followed by a delimiter token
fn _test3() {}
fn attrs() {
// Statement, item
- #[a] // OK
+ #[empty_attr] // OK
struct S;
// Statement, macro
- #[a] //~ ERROR: custom attributes cannot be applied to statements
+ #[empty_attr] //~ ERROR: custom attributes cannot be applied to statements
println!();
// Statement, semi
- #[a] //~ ERROR: custom attributes cannot be applied to statements
+ #[empty_attr] //~ ERROR: custom attributes cannot be applied to statements
S;
// Statement, local
- #[a] //~ ERROR: custom attributes cannot be applied to statements
+ #[empty_attr] //~ ERROR: custom attributes cannot be applied to statements
let _x = 2;
// Expr
- let _x = #[a] 2; //~ ERROR: custom attributes cannot be applied to expressions
+ let _x = #[identity_attr] 2; //~ ERROR: custom attributes cannot be applied to expressions
// Opt expr
- let _x = [#[a] 2]; //~ ERROR: custom attributes cannot be applied to expressions
+ let _x = [#[identity_attr] 2]; //~ ERROR: custom attributes cannot be applied to expressions
// Expr macro
- let _x = #[a] println!(); //~ ERROR: custom attributes cannot be applied to expressions
+ let _x = #[identity_attr] println!();
+ //~^ ERROR: custom attributes cannot be applied to expressions
}
fn main() {
- let _x: m!(u32) = 3; //~ ERROR: procedural macros cannot be expanded to types
- if let m!(Some(_x)) = Some(3) {} //~ ERROR: procedural macros cannot be expanded to patterns
+ let _x: identity!(u32) = 3; //~ ERROR: procedural macros cannot be expanded to types
+ if let identity!(Some(_x)) = Some(3) {}
+ //~^ ERROR: procedural macros cannot be expanded to patterns
- m!(struct S;); //~ ERROR: procedural macros cannot be expanded to statements
- m!(let _x = 3;); //~ ERROR: procedural macros cannot be expanded to statements
+ empty!(struct S;); //~ ERROR: procedural macros cannot be expanded to statements
+ empty!(let _x = 3;); //~ ERROR: procedural macros cannot be expanded to statements
- let _x = m!(3); //~ ERROR: procedural macros cannot be expanded to expressions
- let _x = [m!(3)]; //~ ERROR: procedural macros cannot be expanded to expressions
+ let _x = identity!(3); //~ ERROR: procedural macros cannot be expanded to expressions
+ let _x = [empty!(3)]; //~ ERROR: procedural macros cannot be expanded to expressions
}
error[E0658]: non-builtin inner attributes are unstable
- --> $DIR/proc-macro-gates.rs:11:5
+ --> $DIR/proc-macro-gates.rs:10:5
|
-LL | #![a]
- | ^^^^^
+LL | #![empty_attr]
+ | ^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54726
= help: add #![feature(custom_inner_attributes)] to the crate attributes to enable
error[E0658]: non-builtin inner attributes are unstable
- --> $DIR/proc-macro-gates.rs:18:5
+ --> $DIR/proc-macro-gates.rs:17:5
|
-LL | #![a]
- | ^^^^^
+LL | #![empty_attr]
+ | ^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54726
= help: add #![feature(custom_inner_attributes)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to modules
- --> $DIR/proc-macro-gates.rs:14:1
+ --> $DIR/proc-macro-gates.rs:13:1
|
-LL | #[a]
- | ^^^^
+LL | #[empty_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to modules
- --> $DIR/proc-macro-gates.rs:18:5
+ --> $DIR/proc-macro-gates.rs:17:5
|
-LL | #![a]
- | ^^^^^
+LL | #![empty_attr]
+ | ^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error: custom attribute invocations must be of the form #[foo] or #[foo(..)], the macro name must only be followed by a delimiter token
- --> $DIR/proc-macro-gates.rs:22:1
+ --> $DIR/proc-macro-gates.rs:21:1
|
-LL | #[a = "y"]
- | ^^^^^^^^^^
+LL | #[empty_attr = "y"]
+ | ^^^^^^^^^^^^^^^^^^^
error[E0658]: custom attributes cannot be applied to statements
- --> $DIR/proc-macro-gates.rs:31:5
+ --> $DIR/proc-macro-gates.rs:30:5
|
-LL | #[a]
- | ^^^^
+LL | #[empty_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to statements
- --> $DIR/proc-macro-gates.rs:35:5
+ --> $DIR/proc-macro-gates.rs:34:5
|
-LL | #[a]
- | ^^^^
+LL | #[empty_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to statements
- --> $DIR/proc-macro-gates.rs:39:5
+ --> $DIR/proc-macro-gates.rs:38:5
|
-LL | #[a]
- | ^^^^
+LL | #[empty_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to expressions
- --> $DIR/proc-macro-gates.rs:43:14
+ --> $DIR/proc-macro-gates.rs:42:14
|
-LL | let _x = #[a] 2;
- | ^^^^
+LL | let _x = #[identity_attr] 2;
+ | ^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to expressions
- --> $DIR/proc-macro-gates.rs:46:15
+ --> $DIR/proc-macro-gates.rs:45:15
|
-LL | let _x = [#[a] 2];
- | ^^^^
+LL | let _x = [#[identity_attr] 2];
+ | ^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: custom attributes cannot be applied to expressions
- --> $DIR/proc-macro-gates.rs:49:14
+ --> $DIR/proc-macro-gates.rs:48:14
|
-LL | let _x = #[a] println!();
- | ^^^^
+LL | let _x = #[identity_attr] println!();
+ | ^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: procedural macros cannot be expanded to types
--> $DIR/proc-macro-gates.rs:53:13
|
-LL | let _x: m!(u32) = 3;
- | ^^^^^^^
+LL | let _x: identity!(u32) = 3;
+ | ^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: procedural macros cannot be expanded to patterns
--> $DIR/proc-macro-gates.rs:54:12
|
-LL | if let m!(Some(_x)) = Some(3) {}
- | ^^^^^^^^^^^^
+LL | if let identity!(Some(_x)) = Some(3) {}
+ | ^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: procedural macros cannot be expanded to statements
- --> $DIR/proc-macro-gates.rs:56:5
+ --> $DIR/proc-macro-gates.rs:57:5
|
-LL | m!(struct S;);
- | ^^^^^^^^^^^^^^
+LL | empty!(struct S;);
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: procedural macros cannot be expanded to statements
- --> $DIR/proc-macro-gates.rs:57:5
+ --> $DIR/proc-macro-gates.rs:58:5
|
-LL | m!(let _x = 3;);
- | ^^^^^^^^^^^^^^^^
+LL | empty!(let _x = 3;);
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: procedural macros cannot be expanded to expressions
- --> $DIR/proc-macro-gates.rs:59:14
+ --> $DIR/proc-macro-gates.rs:60:14
|
-LL | let _x = m!(3);
- | ^^^^^
+LL | let _x = identity!(3);
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
error[E0658]: procedural macros cannot be expanded to expressions
- --> $DIR/proc-macro-gates.rs:60:15
+ --> $DIR/proc-macro-gates.rs:61:15
|
-LL | let _x = [m!(3)];
- | ^^^^^
+LL | let _x = [empty!(3)];
+ | ^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/54727
= help: add #![feature(proc_macro_hygiene)] to the crate attributes to enable
-// aux-build:proc-macro-gates.rs
+// aux-build:test-macros.rs
#![feature(stmt_expr_attributes)]
-extern crate proc_macro_gates as foo;
-
-use foo::*;
+#[macro_use]
+extern crate test_macros;
// NB. these errors aren't the best errors right now, but they're definitely
// intended to be errors. Somehow using a custom attribute in these positions
// should either require a feature gate or not be allowed on stable.
-fn _test6<#[a] T>() {}
+fn _test6<#[empty_attr] T>() {}
//~^ ERROR: unknown to the compiler
fn _test7() {
match 1 {
- #[a] //~ ERROR: unknown to the compiler
+ #[empty_attr] //~ ERROR: unknown to the compiler
0 => {}
_ => {}
}
-error[E0658]: The attribute `a` is currently unknown to the compiler and may have meaning added to it in the future
- --> $DIR/proc-macro-gates2.rs:13:11
+error[E0658]: The attribute `empty_attr` is currently unknown to the compiler and may have meaning added to it in the future
+ --> $DIR/proc-macro-gates2.rs:12:11
|
-LL | fn _test6<#[a] T>() {}
- | ^^^^
+LL | fn _test6<#[empty_attr] T>() {}
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29642
= help: add #![feature(custom_attribute)] to the crate attributes to enable
-error[E0658]: The attribute `a` is currently unknown to the compiler and may have meaning added to it in the future
- --> $DIR/proc-macro-gates2.rs:18:9
+error[E0658]: The attribute `empty_attr` is currently unknown to the compiler and may have meaning added to it in the future
+ --> $DIR/proc-macro-gates2.rs:17:9
|
-LL | #[a]
- | ^^^^
+LL | #[empty_attr]
+ | ^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29642
= help: add #![feature(custom_attribute)] to the crate attributes to enable
// aux-build:derive-foo.rs
// aux-build:derive-clona.rs
-// aux-build:attr_proc_macro.rs
-// aux-build:bang_proc_macro.rs
+// aux-build:test-macros.rs
#![feature(custom_attribute)]
extern crate derive_foo;
#[macro_use]
extern crate derive_clona;
-extern crate attr_proc_macro;
-extern crate bang_proc_macro;
+extern crate test_macros;
-use attr_proc_macro::attr_proc_macro;
-use bang_proc_macro::bang_proc_macro;
+use test_macros::empty as bang_proc_macro;
+use test_macros::empty_attr as attr_proc_macro;
macro_rules! FooWithLongNam {
() => {}
error: cannot find derive macro `FooWithLongNan` in this scope
- --> $DIR/resolve-error.rs:26:10
+ --> $DIR/resolve-error.rs:24:10
|
LL | #[derive(FooWithLongNan)]
| ^^^^^^^^^^^^^^ help: try: `FooWithLongName`
error: cannot find derive macro `Dlone` in this scope
- --> $DIR/resolve-error.rs:36:10
+ --> $DIR/resolve-error.rs:34:10
|
LL | #[derive(Dlone)]
| ^^^^^ help: try: `Clone`
error: cannot find derive macro `Dlona` in this scope
- --> $DIR/resolve-error.rs:40:10
+ --> $DIR/resolve-error.rs:38:10
|
LL | #[derive(Dlona)]
| ^^^^^ help: try: `Clona`
error: cannot find derive macro `attr_proc_macra` in this scope
- --> $DIR/resolve-error.rs:44:10
+ --> $DIR/resolve-error.rs:42:10
|
LL | #[derive(attr_proc_macra)]
| ^^^^^^^^^^^^^^^
error: cannot find macro `FooWithLongNama!` in this scope
- --> $DIR/resolve-error.rs:49:5
+ --> $DIR/resolve-error.rs:47:5
|
LL | FooWithLongNama!();
| ^^^^^^^^^^^^^^^ help: you could try the macro: `FooWithLongNam`
error: cannot find macro `attr_proc_macra!` in this scope
- --> $DIR/resolve-error.rs:52:5
+ --> $DIR/resolve-error.rs:50:5
|
LL | attr_proc_macra!();
| ^^^^^^^^^^^^^^^ help: you could try the macro: `attr_proc_mac`
error: cannot find macro `Dlona!` in this scope
- --> $DIR/resolve-error.rs:55:5
+ --> $DIR/resolve-error.rs:53:5
|
LL | Dlona!();
| ^^^^^
error: cannot find macro `bang_proc_macrp!` in this scope
- --> $DIR/resolve-error.rs:58:5
+ --> $DIR/resolve-error.rs:56:5
|
LL | bang_proc_macrp!();
| ^^^^^^^^^^^^^^^ help: you could try the macro: `bang_proc_macro`
-// aux-build:derive-a.rs
+// aux-build:test-macros.rs
#[macro_use]
-extern crate derive_a;
+extern crate test_macros;
#[macro_use]
-extern crate derive_a; //~ ERROR the name `derive_a` is defined multiple times
+extern crate test_macros; //~ ERROR the name `test_macros` is defined multiple times
fn main() {}
-error[E0259]: the name `derive_a` is defined multiple times
+error[E0259]: the name `test_macros` is defined multiple times
--> $DIR/shadow.rs:6:1
|
-LL | extern crate derive_a;
- | ---------------------- previous import of the extern crate `derive_a` here
+LL | extern crate test_macros;
+ | ------------------------- previous import of the extern crate `test_macros` here
LL | #[macro_use]
-LL | extern crate derive_a;
- | ^^^^^^^^^^^^^^^^^^^^^^ `derive_a` reimported here
+LL | extern crate test_macros;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ `test_macros` reimported here
|
- = note: `derive_a` must be defined only once in the type namespace of this module
+ = note: `test_macros` must be defined only once in the type namespace of this module
error: aborting due to previous error
//~ ERROR mismatched types
-// aux-build:span-preservation.rs
+// aux-build:test-macros.rs
// For each of these, we should get the appropriate type mismatch error message,
// and the function should be echoed.
-extern crate span_preservation as foo;
+#[macro_use]
+extern crate test_macros;
-use foo::foo;
-
-#[foo]
+#[recollect_attr]
fn a() {
let x: usize = "hello";;;;; //~ ERROR mismatched types
}
-#[foo]
+#[recollect_attr]
fn b(x: Option<isize>) -> usize {
match x {
Some(x) => { return x }, //~ ERROR mismatched types
}
}
-#[foo]
+#[recollect_attr]
fn c() {
struct Foo {
a: usize
// FIXME: This doesn't work at the moment. See the one below. The pretty-printer
// injects a "C" between `extern` and `fn` which causes a "probably_eq"
// `TokenStream` mismatch. The lack of `"C"` should be preserved in the AST.
-#[foo]
+#[recollect_attr]
extern fn bar() {
0
}
-#[foo]
+#[recollect_attr]
extern "C" fn baz() {
0 //~ ERROR mismatched types
}
found type `{integer}`
error[E0308]: mismatched types
- --> $DIR/span-preservation.rs:13:20
+ --> $DIR/span-preservation.rs:12:20
|
LL | let x: usize = "hello";;;;;
| ^^^^^^^ expected usize, found reference
found type `&'static str`
error[E0308]: mismatched types
- --> $DIR/span-preservation.rs:19:29
+ --> $DIR/span-preservation.rs:18:29
|
LL | fn b(x: Option<isize>) -> usize {
| ----- expected `usize` because of return type
| ^ expected usize, found isize
error[E0308]: mismatched types
- --> $DIR/span-preservation.rs:35:22
+ --> $DIR/span-preservation.rs:34:22
|
LL | let x = Foo { a: 10isize };
| ^^^^^^^ expected usize, found isize
error[E0560]: struct `c::Foo` has no field named `b`
- --> $DIR/span-preservation.rs:36:26
+ --> $DIR/span-preservation.rs:35:26
|
LL | let y = Foo { a: 10, b: 10isize };
| ^ `c::Foo` does not have this field
= note: available fields are: `a`
error[E0308]: mismatched types
- --> $DIR/span-preservation.rs:49:5
+ --> $DIR/span-preservation.rs:48:5
|
LL | extern "C" fn baz() {
| - possibly return type missing here?
--- /dev/null
+#![feature(rustc_attrs)]
+
+// This test contains the same code as ui/issue-53912.rs, but checks that the
+// symbol-mangling fix produces the correct result, whereas that test only
+// checks that the reproduction compiles successfully and doesn't segfault.
+
+fn dummy() {}
+
+mod llvm {
+ pub(crate) struct Foo;
+}
+mod foo {
+ pub(crate) struct Foo<T>(T);
+
+ impl Foo<::llvm::Foo> {
+ #[rustc_symbol_name]
+//~^ ERROR _ZN11issue_609253foo36Foo$LT$issue_60925..llv$6d$..Foo$GT$3foo17h059a991a004536adE
+ pub(crate) fn foo() {
+ for _ in 0..0 {
+ for _ in &[::dummy()] {
+ ::dummy();
+ ::dummy();
+ ::dummy();
+ }
+ }
+ }
+ }
+
+ pub(crate) fn foo() {
+ Foo::foo();
+ Foo::foo();
+ }
+}
+
+pub fn foo() {
+ foo::foo();
+}
+
+fn main() {}
--- /dev/null
+error: symbol-name(_ZN11issue_609253foo36Foo$LT$issue_60925..llv$6d$..Foo$GT$3foo17h059a991a004536adE)
+ --> $DIR/issue-60925.rs:16:9
+ |
+LL | #[rustc_symbol_name]
+ | ^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to previous error
+
//~^ ERROR mismatched types
//~| expected type `(isize, f64)`
//~| found type `(isize,)`
- //~| expected a tuple with 2 elements, found one with 1 elements
+ //~| expected a tuple with 2 elements, found one with 1 element
}
--> $DIR/tuple-arity-mismatch.rs:12:20
|
LL | let y = first ((1,));
- | ^^^^ expected a tuple with 2 elements, found one with 1 elements
+ | ^^^^ expected a tuple with 2 elements, found one with 1 element
|
= note: expected type `(isize, f64)`
found type `(isize,)`
const l: usize = v.count(); //~ ERROR attempt to use a non-constant value in a constant
let s: [u32; l] = v.into_iter().collect();
//~^ ERROR evaluation of constant value failed
+ //~^^ ERROR a collection of type
}
LL | let s: [u32; l] = v.into_iter().collect();
| ^ referenced constant has errors
-error: aborting due to 2 previous errors
+error[E0277]: a collection of type `[u32; _]` cannot be built from an iterator over elements of type `{integer}`
+ --> $DIR/type-dependent-def-issue-49241.rs:4:37
+ |
+LL | let s: [u32; l] = v.into_iter().collect();
+ | ^^^^^^^ a collection of type `[u32; _]` cannot be built from `std::iter::Iterator<Item={integer}>`
+ |
+ = help: the trait `std::iter::FromIterator<{integer}>` is not implemented for `[u32; _]`
+
+error: aborting due to 3 previous errors
-Some errors have detailed explanations: E0080, E0435.
+Some errors have detailed explanations: E0080, E0277, E0435.
For more information about an error, try `rustc --explain E0080`.
-Subproject commit 46e64911ad43c519c61d22afef7f82625dd9c4a8
+Subproject commit fb33fad08e8f08c032b443bab0df299ce22fe61b