really useful commands to the build system (`./x.py`), which could save you a
lot of time.
-[rustcguidebuild]: https://rust-lang.github.io/rustc-guide/how-to-build-and-run.html
+[rustcguidebuild]: https://rust-lang.github.io/rustc-guide/building/how-to-build-and-run.html
## Pull Requests
[pull-requests]: #pull-requests
"rustc",
"rustc_codegen_utils",
"rustc_data_structures",
- "rustc_target",
"serde_json",
"syntax",
"syntax_pos",
"rustc_errors",
"rustc_index",
"rustc_lexer",
- "rustc_target",
"scoped-tls",
"serialize",
"smallvec 1.0.0",
"rustc_errors",
"rustc_index",
"rustc_lexer",
- "rustc_target",
"scoped-tls",
"serialize",
"smallvec 1.0.0",
the compiler. More information about it may be found by running `./x.py --help`
or reading the [rustc guide][rustcguidebuild].
-[rustcguidebuild]: https://rust-lang.github.io/rustc-guide/how-to-build-and-run.html
+[rustcguidebuild]: https://rust-lang.github.io/rustc-guide/building/how-to-build-and-run.html
### Building on *nix
1. Make sure you have installed the dependencies:
## prefer-dynamic
By default, `rustc` prefers to statically link dependencies. This option will
-make it use dynamic linking instead.
+indicate that dynamic linking should be used if possible, when both static and
+dynamic versions of a library are available. There is an internal algorithm
+for determining whether or not it is possible to statically or dynamically
+link with a dependency. For example, `cdylib` crate types may only use static
+linkage.
## no-integrated-as
<a id="option-l-search-path"></a>
## `-L`: add a directory to the library search path
-When looking for external crates or libraries, a directory passed to this flag
-will be searched.
+The `-L` flag adds a path to search for external crates and libraries.
The kind of search path can optionally be specified with the form `-L
KIND=PATH` where `KIND` may be one of:
<a id="option-extern"></a>
## `--extern`: specify where an external library is located
-This flag allows you to pass the name and location of an external crate that
-will be linked into the crate you are building. This flag may be specified
-multiple times. The format of the value should be `CRATENAME=PATH`.
+This flag allows you to pass the name and location of an external crate that is
+a direct dependency. Indirect dependencies (dependencies of dependencies) are
+located using the [`-L` flag](#option-l-search-path). The given crate name is
+added to the [extern prelude], which is the same as specifying `extern crate`
+within the root module. The given crate name does not need to match the name
+the library was built with.
+
+This flag may be specified multiple times. This flag takes an argument with
+either of the following formats:
+
+* `CRATENAME=PATH` — Indicates the given crate is found at the given path.
+* `CRATENAME` — Indicates the given crate may be found in the search path,
+ such as within the sysroot or via the `-L` flag.
+
+The same crate name may be specified multiple times for different crate types.
+If both an `rlib` and `dylib` are found, an internal algorithm is used to
+decide which to use for linking. The [`-C prefer-dynamic`
+flag][prefer-dynamic] may be used to influence which is used.
+
+If the same crate name is specified with and without a path, the one with the
+path is used and the pathless flag has no effect.
+
+[extern prelude]: ../reference/items/extern-crates.html#extern-prelude
+[prefer-dynamic]: codegen-options/index.md#prefer-dynamic
<a id="option-sysroot"></a>
## `--sysroot`: Override the system root
+++ /dev/null
-# `on_unimplemented`
-
-The tracking issue for this feature is: [#29628]
-
-[#29628]: https://github.com/rust-lang/rust/issues/29628
-
-------------------------
-
-The `on_unimplemented` feature provides the `#[rustc_on_unimplemented]`
-attribute, which allows trait definitions to add specialized notes to error
-messages when an implementation was expected but not found. You can refer
-to the trait's generic arguments by name and to the resolved type using
-`Self`.
-
-For example:
-
-```rust,compile_fail
-#![feature(on_unimplemented)]
-
-#[rustc_on_unimplemented="an iterator over elements of type `{A}` \
- cannot be built from a collection of type `{Self}`"]
-trait MyIterator<A> {
- fn next(&mut self) -> A;
-}
-
-fn iterate_chars<I: MyIterator<char>>(i: I) {
- // ...
-}
-
-fn main() {
- iterate_chars(&[1, 2, 3][..]);
-}
-```
-
-When the user compiles this, they will see the following;
-
-```txt
-error[E0277]: the trait bound `&[{integer}]: MyIterator<char>` is not satisfied
- --> <anon>:14:5
- |
-14 | iterate_chars(&[1, 2, 3][..]);
- | ^^^^^^^^^^^^^ an iterator over elements of type `char` cannot be built from a collection of type `&[{integer}]`
- |
- = help: the trait `MyIterator<char>` is not implemented for `&[{integer}]`
- = note: required by `iterate_chars`
-```
-
-`on_unimplemented` also supports advanced filtering for better targeting
-of messages, as well as modifying specific parts of the error message. You
-target the text of:
-
- - the main error message (`message`)
- - the label (`label`)
- - an extra note (`note`)
-
-For example, the following attribute
-
-```rust,compile_fail
-#[rustc_on_unimplemented(
- message="message",
- label="label",
- note="note"
-)]
-trait MyIterator<A> {
- fn next(&mut self) -> A;
-}
-```
-
-Would generate the following output:
-
-```text
-error[E0277]: message
- --> <anon>:14:5
- |
-14 | iterate_chars(&[1, 2, 3][..]);
- | ^^^^^^^^^^^^^ label
- |
- = note: note
- = help: the trait `MyIterator<char>` is not implemented for `&[{integer}]`
- = note: required by `iterate_chars`
-```
-
-To allow more targeted error messages, it is possible to filter the
-application of these fields based on a variety of attributes when using
-`on`:
-
- - `crate_local`: whether the code causing the trait bound to not be
- fulfilled is part of the user's crate. This is used to avoid suggesting
- code changes that would require modifying a dependency.
- - Any of the generic arguments that can be substituted in the text can be
- referred by name as well for filtering, like `Rhs="i32"`, except for
- `Self`.
- - `_Self`: to filter only on a particular calculated trait resolution, like
- `Self="std::iter::Iterator<char>"`. This is needed because `Self` is a
- keyword which cannot appear in attributes.
- - `direct`: user-specified rather than derived obligation.
- - `from_method`: usable both as boolean (whether the flag is present, like
- `crate_local`) or matching against a particular method. Currently used
- for `try`.
- - `from_desugaring`: usable both as boolean (whether the flag is present)
- or matching against a particular desugaring. The desugaring is identified
- with its variant name in the `DesugaringKind` enum.
-
-For example, the `Iterator` trait can be annotated in the following way:
-
-```rust,compile_fail
-#[rustc_on_unimplemented(
- on(
- _Self="&str",
- note="call `.chars()` or `.as_bytes()` on `{Self}"
- ),
- message="`{Self}` is not an iterator",
- label="`{Self}` is not an iterator",
- note="maybe try calling `.iter()` or a similar method"
-)]
-pub trait Iterator {}
-```
-
-Which would produce the following outputs:
-
-```text
-error[E0277]: `Foo` is not an iterator
- --> src/main.rs:4:16
- |
-4 | for foo in Foo {}
- | ^^^ `Foo` is not an iterator
- |
- = note: maybe try calling `.iter()` or a similar method
- = help: the trait `std::iter::Iterator` is not implemented for `Foo`
- = note: required by `std::iter::IntoIterator::into_iter`
-
-error[E0277]: `&str` is not an iterator
- --> src/main.rs:5:16
- |
-5 | for foo in "" {}
- | ^^ `&str` is not an iterator
- |
- = note: call `.chars()` or `.bytes() on `&str`
- = help: the trait `std::iter::Iterator` is not implemented for `&str`
- = note: required by `std::iter::IntoIterator::into_iter`
-```
-
-If you need to filter on multiple attributes, you can use `all`, `any` or
-`not` in the following way:
-
-```rust,compile_fail
-#[rustc_on_unimplemented(
- on(
- all(_Self="&str", T="std::string::String"),
- note="you can coerce a `{T}` into a `{Self}` by writing `&*variable`"
- )
-)]
-pub trait From<T>: Sized { /* ... */ }
-```
os.path.join(os.path.dirname(__file__), '../test/ui/derives/'))
TEMPLATE = """\
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
{error_deriving}
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IterMut<'a, T: 'a> {
// We do *not* exclusively own the entire list here, references to node's `element`
- // have been handed out by the iterator! So be careful when using this; the methods
+ // have been handed out by the iterator! So be careful when using this; the methods
// called must be aware that there can be aliasing pointers to `element`.
list: &'a mut LinkedList<T>,
head: Option<NonNull<Node<T>>>,
#![feature(unsize)]
#![feature(unsized_locals)]
#![feature(allocator_internals)]
-#![feature(on_unimplemented)]
+#![cfg_attr(bootstrap, feature(on_unimplemented))]
#![feature(rustc_const_unstable)]
#![feature(slice_partition_dedup)]
#![feature(maybe_uninit_extra, maybe_uninit_slice)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Rc<T: ?Sized> {
ptr: NonNull<RcBox<T>>,
- phantom: PhantomData<T>,
+ phantom: PhantomData<RcBox<T>>,
}
#[stable(feature = "rust1", since = "1.0.0")]
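The `PhantomData<RcBox<T>>` change above tightens what the drop checker sees: `Rc<T>` conceptually owns an `RcBox<T>`, not a bare `T`. A minimal stable-Rust sketch of the pattern (the `MyRc` name is hypothetical, not the real type):

```rust
use std::marker::PhantomData;
use std::ptr::NonNull;

// Hypothetical mini-Rc mirroring the hunk above: the PhantomData<RcBox<T>>
// tells the drop checker that dropping MyRc<T> may drop an RcBox<T>
// (and thus a T), without changing the struct's layout.
#[allow(dead_code)]
struct RcBox<T> {
    value: T,
}

#[allow(dead_code)]
struct MyRc<T> {
    ptr: NonNull<RcBox<T>>,
    phantom: PhantomData<RcBox<T>>,
}

fn main() {
    // PhantomData is zero-sized, so the marker costs nothing at runtime.
    assert_eq!(
        std::mem::size_of::<MyRc<u8>>(),
        std::mem::size_of::<NonNull<RcBox<u8>>>()
    );
    println!("ok");
}
```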
/// checked:
///
/// * The memory at `ptr` needs to have been previously allocated by the
- /// same allocator the standard library uses.
+ /// same allocator the standard library uses, with a required alignment of exactly 1.
/// * `length` needs to be less than or equal to `capacity`.
/// * `capacity` needs to be the correct value.
///
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Arc<T: ?Sized> {
ptr: NonNull<ArcInner<T>>,
- phantom: PhantomData<T>,
+ phantom: PhantomData<ArcInner<T>>,
}
#[stable(feature = "rust1", since = "1.0.0")]
//! Memory allocation APIs
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "alloc_module", since = "1.28.0")]
use crate::cmp;
#[inline]
pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
if self.is::<T>() {
+ // SAFETY: just checked whether we are pointing to the correct type
unsafe {
Some(&*(self as *const dyn Any as *const T))
}
#[inline]
pub fn downcast_mut<T: Any>(&mut self) -> Option<&mut T> {
if self.is::<T>() {
+ // SAFETY: just checked whether we are pointing to the correct type
unsafe {
Some(&mut *(self as *mut dyn Any as *mut T))
}
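The new `SAFETY` comments hinge on `is::<T>()` comparing `TypeId`s before the pointer cast. From the caller's side, the checked behavior looks like this (a small stable-Rust sketch, not part of the patch):

```rust
use std::any::Any;

fn main() {
    let mut value: Box<dyn Any> = Box::new(42_i32);
    // The cast only happens after the TypeId check succeeds.
    assert_eq!(value.downcast_ref::<i32>(), Some(&42));
    assert!(value.downcast_ref::<String>().is_none());
    // downcast_mut gives checked mutable access the same way.
    if let Some(n) = value.downcast_mut::<i32>() {
        *n += 1;
    }
    assert_eq!(value.downcast_ref::<i32>(), Some(&43));
    println!("ok");
}
```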
#[rustc_const_unstable(feature="const_type_id")]
pub const fn of<T: ?Sized + 'static>() -> TypeId {
TypeId {
+ #[cfg(bootstrap)]
+ // SAFETY: going away soon
t: unsafe { intrinsics::type_id::<T>() },
+ #[cfg(not(bootstrap))]
+ t: intrinsics::type_id::<T>(),
}
}
}
/// iterator (either via `IntoIterator` for arrays or via another way).
#[unstable(feature = "array_value_iter", issue = "65798")]
pub fn new(array: [T; N]) -> Self {
- // The transmute here is actually safe. The docs of `MaybeUninit`
+ // SAFETY: The transmute here is actually safe. The docs of `MaybeUninit`
// promise:
//
// > `MaybeUninit<T>` is guaranteed to have the same size and alignment
/// Returns an immutable slice of all elements that have not been yielded
/// yet.
fn as_slice(&self) -> &[T] {
- // This transmute is safe. As mentioned in `new`, `MaybeUninit` retains
+ let slice = &self.data[self.alive.clone()];
+ // SAFETY: This transmute is safe. As mentioned in `new`, `MaybeUninit` retains
// the size and alignment of `T`. Furthermore, we know that all
// elements within `alive` are properly initialized.
- let slice = &self.data[self.alive.clone()];
unsafe {
mem::transmute::<&[MaybeUninit<T>], &[T]>(slice)
}
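The `SAFETY` comment relocation above keeps the justification next to the `unsafe` block it covers. The underlying technique, viewing fully initialized `MaybeUninit<T>` storage as `&[T]`, can be sketched on stable Rust:

```rust
use std::mem::{self, MaybeUninit};

fn main() {
    // An uninitialized array of MaybeUninit<u8> is itself a valid value,
    // so assume_init on the *array* is fine.
    let mut buf: [MaybeUninit<u8>; 4] = unsafe { MaybeUninit::uninit().assume_init() };
    for (i, slot) in buf.iter_mut().enumerate() {
        *slot = MaybeUninit::new(i as u8);
    }
    // SAFETY: all four elements were just written, and MaybeUninit<u8> has
    // the same size and alignment as u8, as the comment in the hunk states.
    let slice: &[u8] = unsafe { mem::transmute::<&[MaybeUninit<u8>], &[u8]>(&buf[..]) };
    assert_eq!(slice, &[0, 1, 2, 3]);
    println!("ok");
}
```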
let idx = self.alive.start;
self.alive.start += 1;
- // Read the element from the array. This is safe: `idx` is an index
+ // Read the element from the array.
+ // SAFETY: `idx` is an index
// into the "alive" region of the array. Reading this element means
// that `data[idx]` is regarded as dead now (i.e. do not touch). As
// `idx` was the start of the alive-zone, the alive zone is now
// + 1]`.
self.alive.end -= 1;
- // Read the element from the array. This is safe: `alive.end` is an
+ // Read the element from the array.
+ // SAFETY: `alive.end` is an
// index into the "alive" region of the array. Compare the previous
// comment that states that the alive region is
// `data[alive.start..alive.end + 1]`. Reading this element means that
[T; N]: LengthAtMost32,
{
fn clone(&self) -> Self {
+ // SAFETY: each point of unsafety is documented inside the unsafe block
unsafe {
// This creates a new uninitialized array. Note that the `assume_init`
// refers to the array, not the individual elements. And it is Ok if
fn try_from(slice: &[T]) -> Result<&[T; N], TryFromSliceError> {
if slice.len() == N {
let ptr = slice.as_ptr() as *const [T; N];
+ // SAFETY: ok because we just checked that the length fits
unsafe { Ok(&*ptr) }
} else {
Err(TryFromSliceError(()))
fn try_from(slice: &mut [T]) -> Result<&mut [T; N], TryFromSliceError> {
if slice.len() == N {
let ptr = slice.as_mut_ptr() as *mut [T; N];
+ // SAFETY: ok because we just checked that the length fits
unsafe { Ok(&mut *ptr) }
} else {
Err(TryFromSliceError(()))
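The length check before the pointer cast is what makes these `TryFrom` impls sound. Their observable behavior from safe code:

```rust
use std::convert::TryFrom;

fn main() {
    let slice: &[u8] = &[1, 2, 3, 4];
    // Succeeds only when the slice length equals the array length.
    let arr = <[u8; 4]>::try_from(slice).unwrap();
    assert_eq!(arr, [1, 2, 3, 4]);
    // A reference-to-array conversion with the wrong length fails.
    assert!(<&[u8; 3]>::try_from(slice).is_err());
    println!("ok");
}
```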
#[stable(feature = "ascii_escape_display", since = "1.39.0")]
impl fmt::Display for EscapeDefault {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ // SAFETY: ok because `escape_default` created only valid utf-8 data
f.write_str(unsafe { from_utf8_unchecked(&self.data[self.range.clone()]) })
}
}
}
fn case07_fake_simd_u32(bytes: &mut [u8]) {
+ // SAFETY: transmuting a sequence of `u8` to `u32` is always fine
let (before, aligned, after) = unsafe {
bytes.align_to_mut::<u32>()
};
}
fn case08_fake_simd_u64(bytes: &mut [u8]) {
+ // SAFETY: transmuting a sequence of `u8` to `u64` is always fine
let (before, aligned, after) = unsafe {
bytes.align_to_mut::<u64>()
};
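The benchmark hunks above lean on `align_to_mut`, whose contract is exactly what the new comments say: every byte pattern must be a valid value of the wider type. A small stable sketch of the split it produces:

```rust
fn main() {
    let mut bytes = [0u8; 16];
    // SAFETY argument from the hunk: every byte pattern is a valid u32, so
    // reinterpreting the aligned middle of the buffer is fine.
    let (before, aligned, after) = unsafe { bytes.align_to_mut::<u32>() };
    for word in aligned.iter_mut() {
        *word = 0x0101_0101;
    }
    // Only the (short) unaligned head and tail are left untouched.
    assert!(before.len() < 4 && after.len() < 4);
    assert!(bytes.iter().filter(|&&b| b == 1).count() >= 12);
    println!("ok");
}
```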
//! use std::cell::Cell;
//! use std::ptr::NonNull;
//! use std::intrinsics::abort;
+//! use std::marker::PhantomData;
//!
//! struct Rc<T: ?Sized> {
-//! ptr: NonNull<RcBox<T>>
+//! ptr: NonNull<RcBox<T>>,
+//! phantom: PhantomData<RcBox<T>>,
//! }
//!
//! struct RcBox<T: ?Sized> {
//! impl<T: ?Sized> Clone for Rc<T> {
//! fn clone(&self) -> Rc<T> {
//! self.inc_strong();
-//! Rc { ptr: self.ptr }
+//! Rc {
+//! ptr: self.ptr,
+//! phantom: PhantomData,
+//! }
//! }
//! }
//!
//! ```
//!
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "rust1", since = "1.0.0")]
use crate::cmp::Ordering;
if (i > MAX as u32) || (i >= 0xD800 && i <= 0xDFFF) {
Err(CharTryFromError(()))
} else {
+ // SAFETY: checked that it's a legal unicode value
Ok(unsafe { from_u32_unchecked(i) })
}
}
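The range and surrogate check that this `SAFETY` comment documents is observable through the stable `TryFrom<u32>` impl for `char`:

```rust
use std::convert::TryFrom;

fn main() {
    // In range and not a surrogate: accepted.
    assert_eq!(char::try_from(0x1F600_u32).ok(), Some('😀'));
    // Surrogate code points (0xD800..=0xDFFF) are rejected, as the check
    // next to the new SAFETY comment requires.
    assert!(char::try_from(0xD800_u32).is_err());
    // Values above char::MAX are rejected too.
    assert!(char::try_from(0x11_0000_u32).is_err());
    println!("ok");
}
```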
};
if u < 0xD800 || 0xDFFF < u {
- // not a surrogate
+ // SAFETY: not a surrogate
Some(Ok(unsafe { from_u32_unchecked(u as u32) }))
} else if u >= 0xDC00 {
// a trailing surrogate
// all ok, so lets decode it.
let c = (((u - 0xD800) as u32) << 10 | (u2 - 0xDC00) as u32) + 0x1_0000;
+ // SAFETY: we checked that it's a legal unicode value
Some(Ok(unsafe { from_u32_unchecked(c) }))
}
}
#[inline]
pub fn encode_utf8(self, dst: &mut [u8]) -> &mut str {
let code = self as u32;
+ // SAFETY: each arm checks the size of the slice and only uses `get_unchecked` unsafe ops
unsafe {
let len = if code < MAX_ONE_B && !dst.is_empty() {
*dst.get_unchecked_mut(0) = code as u8;
#[inline]
pub fn encode_utf16(self, dst: &mut [u16]) -> &mut [u16] {
let mut code = self as u32;
+ // SAFETY: each arm checks whether there are enough bits to write into
unsafe {
if (code & 0xFFFF) == code && !dst.is_empty() {
// The BMP falls through (assuming non-surrogate, as it should)
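Both encoders check the destination length arm by arm, as the new comments state; a too-small buffer panics rather than writing out of bounds. A quick stable sketch of the happy path:

```rust
fn main() {
    let mut buf = [0u8; 4];
    // 'ß' needs two UTF-8 bytes; the two-byte arm checks the slice length.
    let s = 'ß'.encode_utf8(&mut buf);
    assert_eq!(s, "ß");
    assert_eq!(s.len(), 2);

    let mut buf16 = [0u16; 2];
    // A supplementary-plane char takes a surrogate pair in UTF-16.
    assert_eq!('𝄞'.encode_utf16(&mut buf16).len(), 2);
    println!("ok");
}
```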
#[inline]
fn clone(&self) -> Self {
let mut dest = crate::mem::MaybeUninit::uninit();
+ // SAFETY: we write to the `MaybeUninit`, thus it is initialized and `assume_init` is legal
unsafe {
va_copy(dest.as_mut_ptr(), self);
dest.assume_init()
use crate::mem::MaybeUninit;
use crate::num::flt2dec;
+// ignore-tidy-undocumented-unsafe
+
// Don't inline this so callers don't use the stack space this function
// requires unless they have to.
#[inline(never)]
//! Utilities for formatting and printing strings.
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "rust1", since = "1.0.0")]
use crate::cell::{UnsafeCell, Cell, RefCell, Ref, RefMut};
//! Integer and floating-point number formatting
+// ignore-tidy-undocumented-unsafe
+
use crate::fmt;
use crate::ops::{Div, Rem, Sub};
//! }
//! ```
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "rust1", since = "1.0.0")]
use crate::fmt;
//! An implementation of SipHash.
+// ignore-tidy-undocumented-unsafe
+
#![allow(deprecated)] // the types in this module are deprecated
use crate::marker::PhantomData;
//! Hints to compiler that affects how code should be emitted or optimized.
+// ignore-tidy-undocumented-unsafe
+
use crate::intrinsics;
/// Informs the compiler that this point in the code is not reachable, enabling
// overflow handling
loop {
let mul = n.checked_mul(step);
- if unsafe { intrinsics::likely(mul.is_some()) } {
- return self.iter.nth(mul.unwrap() - 1);
+ #[cfg(bootstrap)]
+ {
+ // SAFETY: going away soon
+ if unsafe { intrinsics::likely(mul.is_some()) } {
+ return self.iter.nth(mul.unwrap() - 1);
+ }
+ }
+ #[cfg(not(bootstrap))]
+ {
+ if intrinsics::likely(mul.is_some()) {
+ return self.iter.nth(mul.unwrap() - 1);
+ }
}
let div_n = usize::MAX / n;
let div_step = usize::MAX / step;
+// ignore-tidy-undocumented-unsafe
+
use crate::cmp;
use super::super::{Iterator, DoubleEndedIterator, ExactSizeIterator, FusedIterator, TrustedLen};
/// .collect()
/// }
/// ```
+#[rustc_diagnostic_item = "IntoIterator"]
#[stable(feature = "rust1", since = "1.0.0")]
pub trait IntoIterator {
/// The type of the elements being iterated over.
#![feature(nll)]
#![feature(exhaustive_patterns)]
#![feature(no_core)]
-#![feature(on_unimplemented)]
+#![cfg_attr(bootstrap, feature(on_unimplemented))]
#![feature(optin_builtin_traits)]
#![feature(prelude_import)]
#![feature(repr_simd, platform_intrinsics)]
use crate::intrinsics;
use crate::mem::ManuallyDrop;
+// ignore-tidy-undocumented-unsafe
+
/// A wrapper type to construct uninitialized instances of `T`.
///
/// # Initialization invariant
MaybeUninit { uninit: () }
}
+ /// Create a new array of `MaybeUninit<T>` items, in an uninitialized state.
+ ///
+ /// Note: in a future Rust version this method may become unnecessary
+ /// when array literal syntax allows
+ /// [repeating const expressions](https://github.com/rust-lang/rust/issues/49147).
+ /// The example below could then use `let mut buf = [MaybeUninit::<u8>::uninit(); 32];`.
+ ///
+ /// # Examples
+ ///
+ /// ```no_run
+ /// #![feature(maybe_uninit_uninit_array, maybe_uninit_extra, maybe_uninit_slice_assume_init)]
+ ///
+ /// use std::mem::MaybeUninit;
+ ///
+ /// extern "C" {
+ /// fn read_into_buffer(ptr: *mut u8, max_len: usize) -> usize;
+ /// }
+ ///
+ /// /// Returns a (possibly smaller) slice of data that was actually read
+ /// fn read(buf: &mut [MaybeUninit<u8>]) -> &[u8] {
+ /// unsafe {
+ /// let len = read_into_buffer(buf.as_mut_ptr() as *mut u8, buf.len());
+ /// MaybeUninit::slice_get_ref(&buf[..len])
+ /// }
+ /// }
+ ///
+ /// let mut buf: [MaybeUninit<u8>; 32] = MaybeUninit::uninit_array();
+ /// let data = read(&mut buf);
+ /// ```
+ #[unstable(feature = "maybe_uninit_uninit_array", issue = "0")]
+ #[inline(always)]
+ pub fn uninit_array<const LEN: usize>() -> [Self; LEN] {
+ unsafe {
+ MaybeUninit::<[MaybeUninit<T>; LEN]>::uninit().assume_init()
+ }
+ }
+
/// A promotable constant, equivalent to `uninit()`.
#[unstable(feature = "internal_uninit_const", issue = "0",
reason = "hack to work around promotability")]
/// ```
#[stable(feature = "maybe_uninit", since = "1.36.0")]
#[inline(always)]
+ #[cfg_attr(all(not(bootstrap)), rustc_diagnostic_item = "assume_init")]
pub unsafe fn assume_init(self) -> T {
intrinsics::panic_if_uninhabited::<T>();
ManuallyDrop::into_inner(self.value)
&mut *self.value
}
+ /// Assuming all the elements are initialized, get a slice to them.
+ ///
+ /// # Safety
+ ///
+ /// It is up to the caller to guarantee that the `MaybeUninit<T>` elements
+ /// really are in an initialized state.
+ /// Calling this when the content is not yet fully initialized causes undefined behavior.
+ #[unstable(feature = "maybe_uninit_slice_assume_init", issue = "0")]
+ #[inline(always)]
+ pub unsafe fn slice_get_ref(slice: &[Self]) -> &[T] {
+ &*(slice as *const [Self] as *const [T])
+ }
+
+ /// Assuming all the elements are initialized, get a mutable slice to them.
+ ///
+ /// # Safety
+ ///
+ /// It is up to the caller to guarantee that the `MaybeUninit<T>` elements
+ /// really are in an initialized state.
+ /// Calling this when the content is not yet fully initialized causes undefined behavior.
+ #[unstable(feature = "maybe_uninit_slice_assume_init", issue = "0")]
+ #[inline(always)]
+ pub unsafe fn slice_get_mut(slice: &mut [Self]) -> &mut [T] {
+ &mut *(slice as *mut [Self] as *mut [T])
+ }
+
/// Gets a pointer to the first element of the array.
#[unstable(feature = "maybe_uninit_slice", issue = "63569")]
#[inline(always)]
#[inline]
#[unstable(feature = "forget_unsized", issue = "0")]
pub fn forget_unsized<T: ?Sized>(t: T) {
+ // SAFETY: the forget intrinsic could be safe, but there's no point in making it safe since
+ // we'll be implementing this function soon via `ManuallyDrop`
unsafe { intrinsics::forget(t) }
}
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn size_of_val<T: ?Sized>(val: &T) -> usize {
+ #[cfg(bootstrap)]
+ // SAFETY: going away soon
unsafe { intrinsics::size_of_val(val) }
+ #[cfg(not(bootstrap))]
+ intrinsics::size_of_val(val)
}
/// Returns the [ABI]-required minimum alignment of a type.
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(reason = "use `align_of_val` instead", since = "1.2.0")]
pub fn min_align_of_val<T: ?Sized>(val: &T) -> usize {
+ #[cfg(bootstrap)]
+ // SAFETY: going away soon
unsafe { intrinsics::min_align_of_val(val) }
+ #[cfg(not(bootstrap))]
+ intrinsics::min_align_of_val(val)
}
/// Returns the [ABI]-required minimum alignment of a type.
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
+#[allow(deprecated)]
pub fn align_of_val<T: ?Sized>(val: &T) -> usize {
- unsafe { intrinsics::min_align_of_val(val) }
+ min_align_of_val(val)
}
/// Returns `true` if dropping values of type `T` matters.
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn swap<T>(x: &mut T, y: &mut T) {
+ // SAFETY: the raw pointers have been created from safe mutable references satisfying all the
+ // constraints on `ptr::swap_nonoverlapping_one`
unsafe {
ptr::swap_nonoverlapping_one(x, y);
}
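The new `SAFETY` comment points at `ptr::swap_nonoverlapping_one`; from safe code the argument is simply that both references are exclusive, so the two locations cannot overlap:

```rust
fn main() {
    let mut a = String::from("left");
    let mut b = String::from("right");
    // Two &mut references are guaranteed disjoint, satisfying the
    // non-overlap requirement the comment describes.
    std::mem::swap(&mut a, &mut b);
    assert_eq!(a, "right");
    assert_eq!(b, "left");
    println!("ok");
}
```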
/// ```
#[stable(feature = "discriminant_value", since = "1.21.0")]
pub fn discriminant<T>(v: &T) -> Discriminant<T> {
+ #[cfg(bootstrap)]
+ // SAFETY: going away soon
unsafe {
Discriminant(intrinsics::discriminant_value(v), PhantomData)
}
+ #[cfg(not(bootstrap))]
+ Discriminant(intrinsics::discriminant_value(v), PhantomData)
}
pub struct FPUControlWord(u16);
fn set_cw(cw: u16) {
+ // SAFETY: the `fldcw` instruction has been audited to be able to work correctly with
+ // any `u16`
unsafe { asm!("fldcw $0" :: "m" (cw) :: "volatile") }
}
// Get the original value of the control word to restore it later, when the
// `FPUControlWord` structure is dropped
+ // SAFETY: the `fnstcw` instruction has been audited to be able to work correctly with
+ // any `u16`
unsafe { asm!("fnstcw $0" : "=*m" (&cw) ::: "volatile") }
// Set the control word to the desired precision. This is achieved by masking away the old
#[stable(feature = "float_bits_conv", since = "1.20.0")]
#[inline]
pub fn to_bits(self) -> u32 {
+ // SAFETY: `u32` is a plain old datatype so we can always transmute to it
unsafe { mem::transmute(self) }
}
#[stable(feature = "float_bits_conv", since = "1.20.0")]
#[inline]
pub fn from_bits(v: u32) -> Self {
+ // SAFETY: `u32` is a plain old datatype so we can always transmute from it
// It turns out the safety issues with sNaN were overblown! Hooray!
unsafe { mem::transmute(v) }
}
#[stable(feature = "float_bits_conv", since = "1.20.0")]
#[inline]
pub fn to_bits(self) -> u64 {
+ // SAFETY: `u64` is a plain old datatype so we can always transmute to it
unsafe { mem::transmute(self) }
}
#[stable(feature = "float_bits_conv", since = "1.20.0")]
#[inline]
pub fn from_bits(v: u64) -> Self {
+ // SAFETY: `u64` is a plain old datatype so we can always transmute from it
// It turns out the safety issues with sNaN were overblown! Hooray!
unsafe { mem::transmute(v) }
}
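The "plain old datatype" argument in these comments is why the transmute round-trips losslessly in both directions:

```rust
fn main() {
    // f32: 1.0 is sign 0, exponent 127 (biased), zero mantissa.
    assert_eq!(1.0_f32.to_bits(), 0x3F80_0000);
    // f64: 2.0 is sign 0, exponent 1024 (biased), zero mantissa.
    assert_eq!(f64::from_bits(0x4000_0000_0000_0000), 2.0);
    // The transmute round-trips every bit pattern, including -0.0.
    let x = -0.0_f64;
    assert_eq!(f64::from_bits(x.to_bits()).to_bits(), x.to_bits());
    println!("ok");
}
```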
#[inline]
pub fn new(n: $Int) -> Option<Self> {
if n != 0 {
+ // SAFETY: we just checked that there's no `0`
Some(unsafe { $Ty(n) })
} else {
None
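The checked constructor above is the only safe way in; the zero niche it protects is what enables the `Option` layout optimization:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // Zero is rejected, exactly as the checked constructor in the hunk does.
    assert!(NonZeroU32::new(0).is_none());
    let n = NonZeroU32::new(5).unwrap();
    assert_eq!(n.get(), 5);
    // The zero niche lets Option<NonZeroU32> use 0 as its None encoding.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
    println!("ok");
}
```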
if rhs == 0 || (self == Self::min_value() && rhs == -1) {
None
} else {
+ // SAFETY: div by zero and by INT_MIN have been checked above
Some(unsafe { intrinsics::unchecked_div(self, rhs) })
}
}
if rhs == 0 || (self == Self::min_value() && rhs == -1) {
None
} else {
+ // SAFETY: div by zero and by INT_MIN have been checked above
Some(unsafe { intrinsics::unchecked_rem(self, rhs) })
}
}
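The two guarded cases, division by zero and `MIN / -1` overflow, are exactly what the new `SAFETY` comments call out:

```rust
fn main() {
    // Division by zero has no defined result ...
    assert_eq!(5_i32.checked_div(0), None);
    // ... and MIN / -1 would be 2^31, which does not fit in i32.
    assert_eq!(i32::MIN.checked_div(-1), None);
    assert_eq!(i32::MIN.checked_rem(-1), None);
    // Every other input goes through unchecked_div / unchecked_rem safely.
    assert_eq!(7_i32.checked_rem(3), Some(1));
    println!("ok");
}
```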
without modifying the original"]
#[inline]
pub const fn wrapping_shl(self, rhs: u32) -> Self {
+ // SAFETY: the masking by the bitsize of the type ensures that we do not shift
+ // out of bounds
unsafe {
intrinsics::unchecked_shl(self, (rhs & ($BITS - 1)) as $SelfT)
}
without modifying the original"]
#[inline]
pub const fn wrapping_shr(self, rhs: u32) -> Self {
+ // SAFETY: the masking by the bitsize of the type ensures that we do not shift
+ // out of bounds
unsafe {
intrinsics::unchecked_shr(self, (rhs & ($BITS - 1)) as $SelfT)
}
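Masking the shift count by the bit width, as the comments above describe, is observable through the stable wrapping APIs:

```rust
fn main() {
    // The shift count is masked by (bit width - 1), so for u8 a shift
    // by 8 is a shift by 0, and a shift by 9 is a shift by 1.
    assert_eq!(1_u8.wrapping_shl(8), 1);
    assert_eq!(1_u8.wrapping_shl(9), 2);
    assert_eq!(0x80_u8.wrapping_shr(9), 0x40);
    println!("ok");
}
```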
#[rustc_const_unstable(feature = "const_int_conversion")]
#[inline]
pub const fn to_ne_bytes(self) -> [u8; mem::size_of::<Self>()] {
+ // SAFETY: integers are plain old datatypes so we can always transmute them to
+ // arrays of bytes
unsafe { mem::transmute(self) }
}
}
#[rustc_const_unstable(feature = "const_int_conversion")]
#[inline]
pub const fn from_ne_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
+ // SAFETY: integers are plain old datatypes so we can always transmute to them
unsafe { mem::transmute(bytes) }
}
}
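Since integers have no invalid bit patterns, the byte-array transmute round-trips; endianness only matters once you reach for the `le`/`be` variants:

```rust
fn main() {
    let n = 0x1234_5678_u32;
    // Native-endian bytes round-trip on any platform.
    assert_eq!(u32::from_ne_bytes(n.to_ne_bytes()), n);
    // The le/be variants pin down the byte order explicitly.
    assert_eq!(0x0102_u16.to_be_bytes(), [0x01, 0x02]);
    assert_eq!(0x0102_u16.to_le_bytes(), [0x02, 0x01]);
    println!("ok");
}
```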
pub fn checked_div(self, rhs: Self) -> Option<Self> {
match rhs {
0 => None,
+ // SAFETY: div by zero has been checked above and unsigned types have no other
+ // failure modes for division
rhs => Some(unsafe { intrinsics::unchecked_div(self, rhs) }),
}
}
if rhs == 0 {
None
} else {
+ // SAFETY: div by zero has been checked above and unsigned types have no other
+ // failure modes for division
Some(unsafe { intrinsics::unchecked_rem(self, rhs) })
}
}
without modifying the original"]
#[inline]
pub const fn wrapping_shl(self, rhs: u32) -> Self {
+ // SAFETY: the masking by the bitsize of the type ensures that we do not shift
+ // out of bounds
unsafe {
intrinsics::unchecked_shl(self, (rhs & ($BITS - 1)) as $SelfT)
}
without modifying the original"]
#[inline]
pub const fn wrapping_shr(self, rhs: u32) -> Self {
+ // SAFETY: the masking by the bitsize of the type ensures that we do not shift
+ // out of bounds
unsafe {
intrinsics::unchecked_shr(self, (rhs & ($BITS - 1)) as $SelfT)
}
fn one_less_than_next_power_of_two(self) -> Self {
if self <= 1 { return 0; }
- // Because `p > 0`, it cannot consist entirely of leading zeros.
+ let p = self - 1;
+ // SAFETY: Because `p > 0`, it cannot consist entirely of leading zeros.
// That means the shift is always in-bounds, and some processors
// (such as intel pre-haswell) have more efficient ctlz
// intrinsics when the argument is non-zero.
- let p = self - 1;
let z = unsafe { intrinsics::ctlz_nonzero(p) };
<$SelfT>::max_value() >> z
}
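The helper above computes `next_power_of_two() - 1`; the public behavior it backs looks like this:

```rust
fn main() {
    // next_power_of_two is one_less_than_next_power_of_two(n) + 1.
    assert_eq!(3_u32.next_power_of_two(), 4);
    assert_eq!(8_u32.next_power_of_two(), 8);
    // The `self <= 1` early return makes 0 and 1 both map to 1.
    assert_eq!(0_u32.next_power_of_two(), 1);
    assert_eq!(1_u32.next_power_of_two(), 1);
    println!("ok");
}
```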
#[rustc_const_unstable(feature = "const_int_conversion")]
#[inline]
pub const fn to_ne_bytes(self) -> [u8; mem::size_of::<Self>()] {
+ // SAFETY: integers are plain old datatypes so we can always transmute them to
+ // arrays of bytes
unsafe { mem::transmute(self) }
}
}
#[rustc_const_unstable(feature = "const_int_conversion")]
#[inline]
pub const fn from_ne_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
+ // SAFETY: integers are plain old datatypes so we can always transmute to them
unsafe { mem::transmute(bytes) }
}
}
//! [`Box<T>`]: ../../std/boxed/struct.Box.html
//! [`i32`]: ../../std/primitive.i32.html
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "rust1", since = "1.0.0")]
use crate::iter::{FromIterator, FusedIterator, TrustedLen};
//! one function. Currently, the actual symbol is declared in the standard
//! library, but the location of this may change over time.
+// ignore-tidy-undocumented-unsafe
+
#![allow(dead_code, missing_docs)]
#![unstable(feature = "core_panic",
reason = "internal details of the implementation of the `panic!` \
#[stable(feature = "pin", since = "1.33.0")]
#[inline(always)]
pub fn as_ref(&self) -> Pin<&P::Target> {
+ // SAFETY: see documentation on this function
unsafe { Pin::new_unchecked(&*self.pointer) }
}
#[stable(feature = "pin", since = "1.33.0")]
#[inline(always)]
pub fn as_mut(&mut self) -> Pin<&mut P::Target> {
+ // SAFETY: see documentation on this function
unsafe { Pin::new_unchecked(&mut *self.pointer) }
}
//! * A [null] pointer is *never* valid, not even for accesses of [size zero][zst].
//! * All pointers (except for the null pointer) are valid for all operations of
//! [size zero][zst].
+//! * For a pointer to be valid, it is necessary, but not always sufficient, that the pointer
+//! be *dereferencable*: the memory range of the given size starting at the pointer must all be
+//! within the bounds of a single allocated object. Note that in Rust,
+//! every (stack-allocated) variable is considered a separate allocated object.
//! * All accesses performed by functions in this module are *non-atomic* in the sense
//! of [atomic operations] used to synchronize between threads. This means it is
//! undefined behavior to perform two concurrent accesses to the same location from different
//! [`write_volatile`]: ./fn.write_volatile.html
//! [`NonNull::dangling`]: ./struct.NonNull.html#method.dangling
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "rust1", since = "1.0.0")]
use crate::intrinsics;
pub(crate) len: usize,
}
-/// Forms a slice from a pointer and a length.
+/// Forms a raw slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
+/// This function is safe, but actually using the return value is unsafe.
+/// See the documentation of [`from_raw_parts`] for slice safety requirements.
+///
+/// [`from_raw_parts`]: ../../std/slice/fn.from_raw_parts.html
+///
/// # Examples
///
/// ```rust
unsafe { Repr { raw: FatPtr { data, len } }.rust }
}
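As the revised docs say, constructing the raw slice is safe; only dereferencing it needs `unsafe`. A stable sketch using `ptr::slice_from_raw_parts`:

```rust
use std::ptr;

fn main() {
    let v = vec![10, 20, 30];
    // Constructing the raw slice is safe ...
    let raw: *const [i32] = ptr::slice_from_raw_parts(v.as_ptr(), v.len());
    // ... but turning it into a reference is not.
    // SAFETY: the pointer and length come from a live Vec.
    let s = unsafe { &*raw };
    assert_eq!(s, &[10, 20, 30][..]);
    println!("ok");
}
```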
-/// Performs the same functionality as [`from_raw_parts`], except that a
-/// mutable slice is returned.
+/// Performs the same functionality as [`slice_from_raw_parts`], except that a
+/// raw mutable slice is returned, as opposed to a raw immutable slice.
///
-/// See the documentation of [`from_raw_parts`] for more details.
+/// See the documentation of [`slice_from_raw_parts`] for more details.
///
-/// [`from_raw_parts`]: ../../std/slice/fn.from_raw_parts.html
+/// This function is safe, but actually using the return value is unsafe.
+/// See the documentation of [`from_raw_parts_mut`] for slice safety requirements.
+///
+/// [`slice_from_raw_parts`]: fn.slice_from_raw_parts.html
+/// [`from_raw_parts_mut`]: ../../std/slice/fn.from_raw_parts_mut.html
#[inline]
#[unstable(feature = "slice_from_raw_parts", reason = "recently added", issue = "36925")]
pub fn slice_from_raw_parts_mut<T>(data: *mut T, len: usize) -> *mut [T] {
use crate::ptr::Unique;
use crate::cmp::Ordering;
+// ignore-tidy-undocumented-unsafe
+
/// `*mut T` but non-zero and covariant.
///
/// This is often the correct thing to use when building data structures using
use crate::mem;
use crate::ptr::NonNull;
+// ignore-tidy-undocumented-unsafe
+
/// A wrapper around a raw non-null `*mut T` that indicates that the possessor
/// of this wrapper owns the referent. Useful for building abstractions like
/// `Box<T>`, `Vec<T>`, `String`, and `HashMap<K, V>`.
// Original implementation taken from rust-memchr.
// Copyright 2015 Andrew Gallant, bluss and Nicolas Koch
+// ignore-tidy-undocumented-unsafe
+
use crate::cmp;
use crate::mem;
// ignore-tidy-filelength
+// ignore-tidy-undocumented-unsafe
//! Slice management and manipulation.
//!
///
/// # Safety
///
-/// This function is unsafe as there is no guarantee that the given pointer is
-/// valid for `len` elements, nor whether the lifetime inferred is a suitable
-/// lifetime for the returned slice.
+/// Behavior is undefined if any of the following conditions are violated:
///
-/// `data` must be non-null and aligned, even for zero-length slices. One
-/// reason for this is that enum layout optimizations may rely on references
-/// (including slices of any length) being aligned and non-null to distinguish
-/// them from other data. You can obtain a pointer that is usable as `data`
-/// for zero-length slices using [`NonNull::dangling()`].
+/// * `data` must be [valid] for reads for `len * mem::size_of::<T>()` many bytes,
+/// and it must be properly aligned. This means in particular:
///
-/// The total size of the slice must be no larger than `isize::MAX` **bytes**
-/// in memory. See the safety documentation of [`pointer::offset`].
+/// * The entire memory range of this slice must be contained within a single allocated object!
+/// Slices can never span across multiple allocated objects.
+/// * `data` must be non-null and aligned even for zero-length slices. One
+/// reason for this is that enum layout optimizations may rely on references
+/// (including slices of any length) being aligned and non-null to distinguish
+/// them from other data. You can obtain a pointer that is usable as `data`
+/// for zero-length slices using [`NonNull::dangling()`].
+///
+/// * The memory referenced by the returned slice must not be mutated for the duration
+/// of lifetime `'a`, except inside an `UnsafeCell`.
+///
+/// * The total size `len * mem::size_of::<T>()` of the slice must be no larger than `isize::MAX`.
+/// See the safety documentation of [`pointer::offset`].
///
/// # Caveat
///
/// assert_eq!(slice[0], 42);
/// ```
///
+/// [valid]: ../../std/ptr/index.html#safety
/// [`NonNull::dangling()`]: ../../std/ptr/struct.NonNull.html#method.dangling
/// [`pointer::offset`]: ../../std/primitive.pointer.html#method.offset
#[inline]
pub unsafe fn from_raw_parts<'a, T>(data: *const T, len: usize) -> &'a [T] {
debug_assert!(is_aligned_and_not_null(data), "attempt to create unaligned or null slice");
debug_assert!(mem::size_of::<T>().saturating_mul(len) <= isize::MAX as usize,
- "attempt to create slice covering half the address space");
+ "attempt to create slice covering at least half the address space");
&*ptr::slice_from_raw_parts(data, len)
}
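The safety conditions listed above can be exercised in a short sketch: a live `Vec` satisfies validity and alignment for its length, and `NonNull::dangling()` provides a usable pointer for the zero-length case, as the docs note.

```rust
use std::{ptr::NonNull, slice};

fn main() {
    let v = vec![10u8, 20, 30];
    // `v` keeps the allocation alive, so pointer + len are valid for reads
    // and stay within a single allocated object.
    let s = unsafe { slice::from_raw_parts(v.as_ptr(), v.len()) };
    assert_eq!(s, &[10, 20, 30]);

    // Zero-length slices still require a non-null, aligned pointer.
    let empty: &[u64] = unsafe { slice::from_raw_parts(NonNull::dangling().as_ptr(), 0) };
    assert!(empty.is_empty());
}
```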
/// Performs the same functionality as [`from_raw_parts`], except that a
/// mutable slice is returned.
///
-/// This function is unsafe for the same reasons as [`from_raw_parts`], as well
-/// as not being able to provide a non-aliasing guarantee of the returned
-/// mutable slice. `data` must be non-null and aligned even for zero-length
-/// slices as with [`from_raw_parts`]. The total size of the slice must be no
-/// larger than `isize::MAX` **bytes** in memory.
+/// # Safety
+///
+/// Behavior is undefined if any of the following conditions are violated:
+///
+/// * `data` must be [valid] for writes for `len * mem::size_of::<T>()` many bytes,
+/// and it must be properly aligned. This means in particular:
///
-/// See the documentation of [`from_raw_parts`] for more details.
+/// * The entire memory range of this slice must be contained within a single allocated object!
+/// Slices can never span across multiple allocated objects.
+/// * `data` must be non-null and aligned even for zero-length slices. One
+/// reason for this is that enum layout optimizations may rely on references
+/// (including slices of any length) being aligned and non-null to distinguish
+/// them from other data. You can obtain a pointer that is usable as `data`
+/// for zero-length slices using [`NonNull::dangling()`].
///
+/// * The memory referenced by the returned slice must not be accessed through any other pointer
+/// (not derived from the return value) for the duration of lifetime `'a`.
+/// Both read and write accesses are forbidden.
+///
+/// * The total size `len * mem::size_of::<T>()` of the slice must be no larger than `isize::MAX`.
+/// See the safety documentation of [`pointer::offset`].
+///
+/// [valid]: ../../std/ptr/index.html#safety
+/// [`NonNull::dangling()`]: ../../std/ptr/struct.NonNull.html#method.dangling
+/// [`pointer::offset`]: ../../std/primitive.pointer.html#method.offset
/// [`from_raw_parts`]: ../../std/slice/fn.from_raw_parts.html
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn from_raw_parts_mut<'a, T>(data: *mut T, len: usize) -> &'a mut [T] {
debug_assert!(is_aligned_and_not_null(data), "attempt to create unaligned or null slice");
debug_assert!(mem::size_of::<T>().saturating_mul(len) <= isize::MAX as usize,
- "attempt to create slice covering half the address space");
+ "attempt to create slice covering at least half the address space");
&mut *ptr::slice_from_raw_parts_mut(data, len)
}
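The extra condition for the mutable variant is exclusivity: nothing else may touch the memory while the returned slice is live. A sketch that respects this by capturing the raw pointer first and not using the owner until the slice's last use:

```rust
use std::slice;

fn main() {
    let mut v = vec![1i32, 2, 3];
    let (ptr, len) = (v.as_mut_ptr(), v.len());
    // While `s` is in use, `v` must not be accessed through any other path.
    let s = unsafe { slice::from_raw_parts_mut(ptr, len) };
    s[0] = 42;
    // `s` is no longer used, so reading through `v` is fine again.
    assert_eq!(v[0], 42);
}
```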
//! Unstable sorting is compatible with libcore because it doesn't allocate memory, unlike our
//! stable sorting implementation.
+// ignore-tidy-undocumented-unsafe
+
use crate::cmp;
use crate::mem::{self, MaybeUninit};
use crate::ptr;
use crate::fmt::{self, Write};
use crate::mem;
+// ignore-tidy-undocumented-unsafe
+
/// Lossy UTF-8 string.
#[unstable(feature = "str_internals", issue = "0")]
pub struct Utf8Lossy {
// ignore-tidy-filelength
+// ignore-tidy-undocumented-unsafe
//! String manipulation.
//!
//! For more details, see the traits [`Pattern`], [`Searcher`],
//! [`ReverseSearcher`], and [`DoubleEndedSearcher`].
+// ignore-tidy-undocumented-unsafe
+
#![unstable(feature = "pattern",
reason = "API not fully fleshed out and ready to be stabilized",
issue = "27721")]
//! println!("live threads: {}", old_thread_count + 1);
//! ```
+// ignore-tidy-undocumented-unsafe
+
#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
extern {
fn ldexp(x: f64, n: i32) -> f64;
}
+ // SAFETY: assuming a correct `ldexp` has been supplied, the given arguments cannot possibly
+ // cause undefined behavior
unsafe { ldexp(a, b) }
}
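`ldexp(x, n)` computes `x * 2^n`, which is why the SAFETY comment above can claim that any arguments are fine for a correct implementation. A hedged pure-Rust sketch of the same computation (not the libm binding used above):

```rust
// ldexp(x, n) computes x * 2^n; for in-range exponents `f64::powi`
// produces the same exact result, since multiplying by a power of
// two only adjusts the exponent field.
fn ldexp(x: f64, n: i32) -> f64 {
    x * 2f64.powi(n)
}

fn main() {
    assert_eq!(ldexp(1.5, 3), 12.0);
    assert_eq!(ldexp(3.0, -1), 1.5);
}
```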
if end == 0 {
write!(f, "{}", integer_part)
} else {
- // We are only writing ASCII digits into the buffer and it was
+ // SAFETY: We are only writing ASCII digits into the buffer and it was
// initialized with '0's, so it contains valid UTF8.
let s = unsafe {
crate::str::from_utf8_unchecked(&buf[..end])
// Rust's "try" function, but if we're aborting on panics we just call the
// function as there's nothing else we need to do here.
#[rustc_std_internal_symbol]
+#[allow(improper_ctypes)]
pub unsafe extern fn __rust_maybe_catch_panic(f: fn(*mut u8),
data: *mut u8,
_data_ptr: *mut usize,
// hairy and tightly coupled, for more information see the compiler's
// implementation of this.
#[no_mangle]
+#[allow(improper_ctypes)]
pub unsafe extern "C" fn __rust_maybe_catch_panic(f: fn(*mut u8),
data: *mut u8,
data_ptr: *mut usize,
compiled:
```compile_fail
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
fn foo<T: Index<u8>>(x: T){}
compiled:
```compile_fail
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
fn foo<T: Index<u8>>(x: T){}
compiled:
```compile_fail
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
fn foo<T: Index<u8>>(x: T){}
E0657, // `impl Trait` can only capture lifetimes bound at the fn level
E0687, // in-band lifetimes cannot be used in `fn`/`Fn` syntax
E0688, // in-band lifetimes cannot be mixed with explicit lifetime binders
+ E0703, // invalid ABI
// E0707, // multiple elided lifetimes used in arguments of `async fn`
E0708, // `async` non-`move` closures with parameters are not currently
// supported
use syntax::ast::*;
use syntax::errors;
use syntax::print::pprust;
-use syntax::parse::token::{self, Nonterminal, Token};
+use syntax::token::{self, Nonterminal, Token};
use syntax::tokenstream::{TokenStream, TokenTree};
use syntax::sess::ParseSess;
use syntax::source_map::{respan, ExpnData, ExpnKind, DesugaringKind, Spanned};
// Note that we explicitly do not walk the path. Since we don't really
// lower attributes (we use the AST version) there is nowhere to keep
// the `HirId`s. We don't actually need HIR version of attributes anyway.
+ let kind = match attr.kind {
+ AttrKind::Normal(ref item) => {
+ AttrKind::Normal(AttrItem {
+ path: item.path.clone(),
+ tokens: self.lower_token_stream(item.tokens.clone()),
+ })
+ }
+ AttrKind::DocComment(comment) => AttrKind::DocComment(comment)
+ };
+
Attribute {
- item: AttrItem {
- path: attr.path.clone(),
- tokens: self.lower_token_stream(attr.tokens.clone()),
- },
+ kind,
id: attr.id,
style: attr.style,
- is_sugared_doc: attr.is_sugared_doc,
span: attr.span,
}
}
ImplTraitContext::disallowed(),
),
unsafety: this.lower_unsafety(f.unsafety),
- abi: f.abi,
+ abi: this.lower_abi(f.abi),
decl: this.lower_fn_decl(&f.decl, None, false, None),
param_names: this.lower_fn_params_to_names(&f.decl),
}))
use crate::util::nodemap::NodeMap;
use rustc_data_structures::thin_vec::ThinVec;
+use rustc_target::spec::abi;
use std::collections::BTreeSet;
use smallvec::SmallVec;
fn lower_foreign_mod(&mut self, fm: &ForeignMod) -> hir::ForeignMod {
hir::ForeignMod {
- abi: fm.abi,
+ abi: self.lower_abi(fm.abi),
items: fm.items
.iter()
.map(|x| self.lower_foreign_item(x))
unsafety: self.lower_unsafety(h.unsafety),
asyncness: self.lower_asyncness(h.asyncness.node),
constness: self.lower_constness(h.constness),
- abi: h.abi,
+ abi: self.lower_abi(h.abi),
}
}
+ pub(super) fn lower_abi(&mut self, abi: Abi) -> abi::Abi {
+ abi::lookup(&abi.symbol.as_str()).unwrap_or_else(|| {
+ self.error_on_invalid_abi(abi);
+ abi::Abi::Rust
+ })
+ }
+
+ fn error_on_invalid_abi(&self, abi: Abi) {
+ struct_span_err!(
+ self.sess,
+ abi.span,
+ E0703,
+ "invalid ABI: found `{}`",
+ abi.symbol
+ )
+ .span_label(abi.span, "invalid ABI")
+ .help(&format!("valid ABIs: {}", abi::all_names().join(", ")))
+ .emit();
+ }
+
pub(super) fn lower_unsafety(&mut self, u: Unsafety) -> hir::Unsafety {
match u {
Unsafety::Unsafe => hir::Unsafety::Unsafe,
use syntax::ast::*;
use syntax::visit;
use syntax::symbol::{kw, sym};
-use syntax::parse::token::{self, Token};
+use syntax::token::{self, Token};
use syntax_pos::hygiene::ExpnId;
use syntax_pos::Span;
MutImmutable => MutMutable,
}
}
+
+ pub fn prefix_str(&self) -> &'static str {
+ match self {
+ MutMutable => "mut ",
+ MutImmutable => "",
+ }
+ }
}
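The `prefix_str` helper above replaces repeated `if mutbl == MutMutable { "mut " } else { "" }` checks at call sites. In miniature (toy enum, not the compiler's), the pattern is a match returning `&'static str`:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Mutability {
    Mutable,
    Immutable,
}

impl Mutability {
    // Returning `&'static str` lets callers write
    // `out.push_str(m.prefix_str())` with no branching of their own.
    fn prefix_str(self) -> &'static str {
        match self {
            Mutability::Mutable => "mut ",
            Mutability::Immutable => "",
        }
    }
}

fn main() {
    let mut out = String::from("&");
    out.push_str(Mutability::Mutable.prefix_str());
    out.push_str("i32");
    assert_eq!(out, "&mut i32");
}
```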
#[derive(Copy, Clone, PartialEq, RustcEncodable, RustcDecodable, Debug, HashStable)]
/// and the remaining elements are the rest of the arguments.
/// Thus, `x.foo::<Bar, Baz>(a, b, c, d)` is represented as
/// `ExprKind::MethodCall(PathSegment { foo, [Bar, Baz] }, [x, a, b, c, d])`.
+ ///
+ /// To resolve the called method to a `DefId`, call [`type_dependent_def_id`] with
+ /// the `hir_id` of the `MethodCall` node itself.
+ ///
+ /// [`type_dependent_def_id`]: ../ty/struct.TypeckTables.html#method.type_dependent_def_id
MethodCall(P<PathSegment>, Span, HirVec<Expr>),
/// A tuple (e.g., `(a, b, c, d)`).
Tup(HirVec<Expr>),
}
/// Represents an optionally `Self`-qualified value/type path or associated extension.
+///
+/// To resolve the path to a `DefId`, call [`qpath_res`].
+///
+/// [`qpath_res`]: ../ty/struct.TypeckTables.html#method.qpath_res
#[derive(RustcEncodable, RustcDecodable, Debug, HashStable)]
pub enum QPath {
/// Path to a definition, optionally "fully-qualified" with a `Self`
Normal,
}
+impl Unsafety {
+ pub fn prefix_str(&self) -> &'static str {
+ match self {
+ Unsafety::Unsafe => "unsafe ",
+ Unsafety::Normal => "",
+ }
+ }
+}
+
#[derive(Copy, Clone, PartialEq, RustcEncodable, RustcDecodable, Debug, HashStable)]
pub enum Constness {
Const,
_ => false,
};
self.s.word("&");
- if mutbl == hir::MutMutable {
- self.s.word("mut ");
- }
+ self.s.word(mutbl.prefix_str());
if is_range_inner {
self.popen();
}
use syntax::ast;
use syntax::feature_gate;
-use syntax::parse::token;
-use syntax::symbol::SymbolStr;
+use syntax::token;
use syntax::tokenstream;
+use syntax_pos::symbol::SymbolStr;
use syntax_pos::SourceFile;
use crate::hir::def_id::{DefId, CrateNum, CRATE_DEF_INDEX};
impl_stable_hash_for!(struct ::syntax::attr::RustcDeprecation { since, reason, suggestion });
-
impl_stable_hash_for!(enum ::syntax::attr::IntType {
SignedInt(int_ty),
UnsignedInt(uint_ty)
Unsuffixed
});
+impl_stable_hash_for!(enum ::syntax::ast::LitFloatType {
+ Suffixed(float_ty),
+ Unsuffixed
+});
+
impl_stable_hash_for!(struct ::syntax::ast::Lit {
kind,
token,
Byte(value),
Char(value),
Int(value, lit_int_type),
- Float(value, float_ty),
- FloatUnsuffixed(value),
+ Float(value, lit_float_type),
Bool(value),
Err(value)
});
let filtered: SmallVec<[&ast::Attribute; 8]> = self
.iter()
.filter(|attr| {
- !attr.is_sugared_doc &&
+ !attr.is_doc_comment() &&
!attr.ident().map_or(false, |ident| hcx.is_ignored_attr(ident.name))
})
.collect();
fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
// Make sure that these have been filtered out.
debug_assert!(!self.ident().map_or(false, |ident| hcx.is_ignored_attr(ident.name)));
- debug_assert!(!self.is_sugared_doc);
-
- let ast::Attribute {
- ref item,
- id: _,
- style,
- is_sugared_doc: _,
- span,
- } = *self;
-
- item.hash_stable(hcx, hasher);
- style.hash_stable(hcx, hasher);
- span.hash_stable(hcx, hasher);
+ debug_assert!(!self.is_doc_comment());
+
+ let ast::Attribute { kind, id: _, style, span } = self;
+ if let ast::AttrKind::Normal(item) = kind {
+ item.hash_stable(hcx, hasher);
+ style.hash_stable(hcx, hasher);
+ span.hash_stable(hcx, hasher);
+ } else {
+ unreachable!();
+ }
}
}
} else {
r.push(' ');
}
- s.push_highlighted(format!(
- "&{}{}",
- r,
- if mutbl == hir::MutMutable { "mut " } else { "" }
- ));
+ s.push_highlighted(format!("&{}{}", r, mutbl.prefix_str()));
s.push_normal(ty.to_string());
}
For info on how the current borrowck works, see the [rustc guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/mir/borrowck.html
+[rustc guide]: https://rust-lang.github.io/rustc-guide/borrow_check.html
/// Check if a `DefId`'s path matches the given absolute type path usage.
///
+ /// Anonymous scopes such as `extern` imports are matched with `kw::Invalid`;
+ /// inherent `impl` blocks are matched with the name of the type.
+ ///
/// # Examples
///
/// ```rust,ignore (no context or def id available)
"",
"extern",
"Specify where an external rust library is located",
- "NAME=PATH",
+ "NAME[=PATH]",
),
opt::multi_s(
"",
cg: &mut CodegenOptions,
dopts: &mut DebuggingOptions,
matches: &getopts::Matches,
- is_unstable_enabled: bool,
error_format: ErrorOutputType,
) -> Vec<PrintRequest> {
let mut prints = Vec::<PrintRequest>::new();
"tls-models" => PrintRequest::TlsModels,
"native-static-libs" => PrintRequest::NativeStaticLibs,
"target-spec-json" => {
- if is_unstable_enabled {
+ if dopts.unstable_options {
PrintRequest::TargetSpec
} else {
early_error(
matches: &getopts::Matches,
debugging_opts: &DebuggingOptions,
error_format: ErrorOutputType,
- is_unstable_enabled: bool,
) -> Externs {
if matches.opt_present("extern-private") && !debugging_opts.unstable_options {
early_error(
let name = parts.next().unwrap_or_else(||
early_error(error_format, "--extern value must not be empty"));
let location = parts.next().map(|s| s.to_string());
- if location.is_none() && !is_unstable_enabled {
- early_error(
- error_format,
- "the `-Z unstable-options` flag must also be passed to \
- enable `--extern crate_name` without `=path`",
- );
- };
let entry = externs
.entry(name.to_owned())
);
}
- let is_unstable_enabled = nightly_options::is_unstable_enabled(matches);
let prints = collect_print_requests(
&mut cg,
&mut debugging_opts,
matches,
- is_unstable_enabled,
error_format,
);
);
}
- let externs = parse_externs(matches, &debugging_opts, error_format, is_unstable_enabled);
+ let externs = parse_externs(matches, &debugging_opts, error_format);
let crate_name = matches.opt_str("crate-name");
// The shorthand encoding uses the same usize as the
// discriminant, with an offset so they can't conflict.
+ #[cfg(bootstrap)]
let discriminant = unsafe { intrinsics::discriminant_value(variant) };
+ #[cfg(not(bootstrap))]
+ let discriminant = intrinsics::discriminant_value(variant);
assert!(discriminant < SHORTHAND_OFFSET as u64);
let shorthand = start + SHORTHAND_OFFSET;
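The `discriminant_value` intrinsic read above has a stable, safe counterpart in `std::mem::discriminant`, which captures the same idea: two values carry the same discriminant exactly when they are the same variant, regardless of payload.

```rust
use std::mem;

enum Shorthand {
    Inline(u32),
    Offset(usize),
}

fn main() {
    // Same variant, different payloads: discriminants compare equal.
    assert_eq!(
        mem::discriminant(&Shorthand::Inline(1)),
        mem::discriminant(&Shorthand::Inline(2))
    );
    // Different variants: discriminants differ.
    assert_ne!(
        mem::discriminant(&Shorthand::Inline(0)),
        mem::discriminant(&Shorthand::Offset(0))
    );
}
```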
fn to_ty<'tcx>(&self, tcx: TyCtxt<'tcx>) -> Ty<'tcx> {
match *self {
Int(i, signed) => i.to_ty(tcx, signed),
- Float(FloatTy::F32) => tcx.types.f32,
- Float(FloatTy::F64) => tcx.types.f64,
+ F32 => tcx.types.f32,
+ F64 => tcx.types.f64,
Pointer => tcx.mk_mut_ptr(tcx.mk_unit()),
}
}
match *self {
Int(i, signed) => i.to_ty(tcx, signed),
Pointer => tcx.types.usize,
- Float(..) => bug!("floats do not have an int type"),
+ F32 | F64 => bug!("floats do not have an int type"),
}
}
}
ty::Uint(ity) => {
scalar(Int(Integer::from_attr(dl, attr::UnsignedInt(ity)), false))
}
- ty::Float(fty) => scalar(Float(fty)),
+ ty::Float(fty) => scalar(match fty {
+ ast::FloatTy::F32 => F32,
+ ast::FloatTy::F64 => F64,
+ }),
ty::FnPtr(_) => {
let mut ptr = scalar_unit(Pointer);
ptr.valid_range = 1..=*ptr.valid_range.end();
impl_stable_hash_for!(enum crate::ty::layout::Primitive {
Int(integer, signed),
- Float(fty),
+ F32,
+ F64,
Pointer
});
use rustc::{bug, hir};
use std::fmt::Write;
use std::iter;
-use syntax::ast;
/// Same as `unique_type_name()` but with the result pushed onto the given
/// `output` parameter.
ty::Char => output.push_str("char"),
ty::Str => output.push_str("str"),
ty::Never => output.push_str("!"),
- ty::Int(ast::IntTy::Isize) => output.push_str("isize"),
- ty::Int(ast::IntTy::I8) => output.push_str("i8"),
- ty::Int(ast::IntTy::I16) => output.push_str("i16"),
- ty::Int(ast::IntTy::I32) => output.push_str("i32"),
- ty::Int(ast::IntTy::I64) => output.push_str("i64"),
- ty::Int(ast::IntTy::I128) => output.push_str("i128"),
- ty::Uint(ast::UintTy::Usize) => output.push_str("usize"),
- ty::Uint(ast::UintTy::U8) => output.push_str("u8"),
- ty::Uint(ast::UintTy::U16) => output.push_str("u16"),
- ty::Uint(ast::UintTy::U32) => output.push_str("u32"),
- ty::Uint(ast::UintTy::U64) => output.push_str("u64"),
- ty::Uint(ast::UintTy::U128) => output.push_str("u128"),
- ty::Float(ast::FloatTy::F32) => output.push_str("f32"),
- ty::Float(ast::FloatTy::F64) => output.push_str("f64"),
+ ty::Int(ty) => output.push_str(ty.name_str()),
+ ty::Uint(ty) => output.push_str(ty.name_str()),
+ ty::Float(ty) => output.push_str(ty.name_str()),
ty::Adt(adt_def, substs) => {
self.push_def_path(adt_def.did, output);
self.push_generic_params(substs, iter::empty(), output, debug);
}
ty::Ref(_, inner_type, mutbl) => {
output.push('&');
- if mutbl == hir::MutMutable {
- output.push_str("mut ");
- }
+ output.push_str(mutbl.prefix_str());
self.push_type_name(inner_type, output, debug);
}
ty::Foreign(did) => self.push_def_path(did, output),
ty::FnDef(..) | ty::FnPtr(_) => {
let sig = t.fn_sig(self.tcx);
- if sig.unsafety() == hir::Unsafety::Unsafe {
- output.push_str("unsafe ");
- }
+ output.push_str(sig.unsafety().prefix_str());
let abi = sig.abi();
if abi != ::rustc_target::spec::abi::Abi::Rust {
match ty.kind {
ty::Bool => p!(write("bool")),
ty::Char => p!(write("char")),
- ty::Int(t) => p!(write("{}", t.ty_to_string())),
- ty::Uint(t) => p!(write("{}", t.ty_to_string())),
- ty::Float(t) => p!(write("{}", t.ty_to_string())),
+ ty::Int(t) => p!(write("{}", t.name_str())),
+ ty::Uint(t) => p!(write("{}", t.name_str())),
+ ty::Float(t) => p!(write("{}", t.name_str())),
ty::RawPtr(ref tm) => {
p!(write("*{} ", match tm.mutbl {
hir::MutMutable => "mut",
let bit_size = Integer::from_attr(&self.tcx(), UnsignedInt(*ui)).size();
let max = truncate(u128::max_value(), bit_size);
+ let ui_str = ui.name_str();
if data == max {
- p!(write("std::{}::MAX", ui))
+ p!(write("std::{}::MAX", ui_str))
} else {
- p!(write("{}{}", data, ui))
+ p!(write("{}{}", data, ui_str))
};
},
(ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Int(i)) => {
let size = self.tcx().layout_of(ty::ParamEnv::empty().and(ty))
.unwrap()
.size;
+ let i_str = i.name_str();
match data {
- d if d == min => p!(write("std::{}::MIN", i)),
- d if d == max => p!(write("std::{}::MAX", i)),
- _ => p!(write("{}{}", sign_extend(data, size) as i128, i))
+ d if d == min => p!(write("std::{}::MIN", i_str)),
+ d if d == max => p!(write("std::{}::MAX", i_str)),
+ _ => p!(write("{}{}", sign_extend(data, size) as i128, i_str))
}
},
(ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Char) =>
}
ty::TypeAndMut<'tcx> {
- p!(write("{}", if self.mutbl == hir::MutMutable { "mut " } else { "" }),
- print(self.ty))
+ p!(write("{}", self.mutbl.prefix_str()), print(self.ty))
}
ty::ExistentialTraitRef<'tcx> {
}
ty::FnSig<'tcx> {
- if self.unsafety == hir::Unsafety::Unsafe {
- p!(write("unsafe "));
- }
+ p!(write("{}", self.unsafety.prefix_str()));
if self.abi != Abi::Rust {
p!(write("extern {} ", self.abi));
}
}
-unsafe extern "C" fn report_inline_asm(cgcx: &CodegenContext<LlvmCodegenBackend>,
- msg: &str,
- cookie: c_uint) {
+fn report_inline_asm(cgcx: &CodegenContext<LlvmCodegenBackend>, msg: &str, cookie: c_uint) {
cgcx.diag_emitter.inline_asm_error(cookie as u32, msg.to_owned());
}
ty::Bool => ("bool", DW_ATE_boolean),
ty::Char => ("char", DW_ATE_unsigned_char),
ty::Int(int_ty) => {
- (int_ty.ty_to_string(), DW_ATE_signed)
+ (int_ty.name_str(), DW_ATE_signed)
},
ty::Uint(uint_ty) => {
- (uint_ty.ty_to_string(), DW_ATE_unsigned)
+ (uint_ty.name_str(), DW_ATE_unsigned)
},
ty::Float(float_ty) => {
- (float_ty.ty_to_string(), DW_ATE_float)
+ (float_ty.name_str(), DW_ATE_float)
},
_ => bug!("debuginfo::basic_type_metadata - t is invalid type")
};
let discr_type = match discr.value {
layout::Int(t, _) => t,
- layout::Float(layout::FloatTy::F32) => Integer::I32,
- layout::Float(layout::FloatTy::F64) => Integer::I64,
+ layout::F32 => Integer::I32,
+ layout::F64 => Integer::I64,
layout::Pointer => cx.data_layout().ptr_sized_integer(),
}.to_ty(cx.tcx, false);
use rustc::mir::interpret::GlobalId;
use rustc_codegen_ssa::common::{IntPredicate, TypeKind};
use rustc::hir;
-use syntax::ast::{self, FloatTy};
use rustc_target::abi::HasDataLayout;
+use syntax::ast;
use rustc_codegen_ssa::common::span_invalid_monomorphization_error;
use rustc_codegen_ssa::traits::*;
emit_va_arg(self, args[0], ret_ty)
}
}
- Primitive::Float(FloatTy::F64) |
+ Primitive::F64 |
Primitive::Pointer => {
emit_va_arg(self, args[0], ret_ty)
}
// `va_arg` should never be used with the return type f32.
- Primitive::Float(FloatTy::F32) => {
+ Primitive::F32 => {
bug!("the va_arg intrinsic does not work with `f32`")
}
}
},
ty::Float(f) => {
return_error!("unsupported element type `{}` of floating-point vector `{}`",
- f, in_ty);
+ f.name_str(), in_ty);
},
_ => {
return_error!("`{}` is not a floating-point type", in_ty);
}
/// Appending to a Rust string -- used by RawRustStringOstream.
+#[allow(improper_ctypes)]
#[no_mangle]
pub unsafe extern "C" fn LLVMRustStringWriteImpl(sr: &RustString,
ptr: *const c_char,
use crate::type_::Type;
use rustc::ty::{self, Ty, TypeFoldable};
use rustc::ty::layout::{self, Align, LayoutOf, FnAbiExt, PointeeInfo, Size, TyLayout};
-use rustc_target::abi::{FloatTy, TyLayoutMethods};
+use rustc_target::abi::TyLayoutMethods;
use rustc::ty::print::obsolete::DefPathBasedNames;
use rustc_codegen_ssa::traits::*;
scalar: &layout::Scalar, offset: Size) -> &'a Type {
match scalar.value {
layout::Int(i, _) => cx.type_from_integer( i),
- layout::Float(FloatTy::F32) => cx.type_f32(),
- layout::Float(FloatTy::F64) => cx.type_f64(),
+ layout::F32 => cx.type_f32(),
+ layout::F64 => cx.type_f64(),
layout::Pointer => {
// If we know the alignment, pick something better than i8.
let pointee = if let Some(pointee) = self.pointee_info_at(cx, offset) {
ty::Char => output.push_str("char"),
ty::Str => output.push_str("str"),
ty::Never => output.push_str("!"),
- ty::Int(int_ty) => output.push_str(int_ty.ty_to_string()),
- ty::Uint(uint_ty) => output.push_str(uint_ty.ty_to_string()),
- ty::Float(float_ty) => output.push_str(float_ty.ty_to_string()),
+ ty::Int(int_ty) => output.push_str(int_ty.name_str()),
+ ty::Uint(uint_ty) => output.push_str(uint_ty.name_str()),
+ ty::Float(float_ty) => output.push_str(float_ty.name_str()),
ty::Foreign(def_id) => push_item_name(tcx, def_id, qualified, output),
ty::Adt(def, substs) => {
push_item_name(tcx, def.did, qualified, output);
if !cpp_like_names {
output.push('&');
}
- if mutbl == hir::MutMutable {
- output.push_str("mut ");
- }
+ output.push_str(mutbl.prefix_str());
push_debuginfo_type_name(tcx, inner_type, true, output, visited);
let sig = t.fn_sig(tcx);
- if sig.unsafety() == hir::Unsafety::Unsafe {
- output.push_str("unsafe ");
- }
+ output.push_str(sig.unsafety().prefix_str());
let abi = sig.abi();
if abi != rustc_target::spec::abi::Abi::Rust {
use rustc::util::common::{set_time_depth, time, print_time_passes_entry, ErrorReported};
use rustc_metadata::locator;
use rustc_codegen_utils::codegen_backend::CodegenBackend;
+use errors::PResult;
use rustc_interface::interface;
use rustc_interface::util::get_codegen_sysroot;
use rustc_data_structures::sync::SeqCst;
use syntax::ast;
use syntax::source_map::FileLoader;
use syntax::feature_gate::{GatedCfg, UnstableFeatures};
-use syntax::parse::{self, PResult};
+use syntax::parse;
use syntax::symbol::sym;
use syntax_pos::{DUMMY_SP, FileName};
use emitter::{Emitter, EmitterWriter, is_case_difference};
use registry::Registry;
-
+#[cfg(target_arch = "x86_64")]
+use rustc_data_structures::static_assert_size;
use rustc_data_structures::sync::{self, Lrc, Lock};
use rustc_data_structures::fx::{FxHashSet, FxIndexMap};
use rustc_data_structures::stable_hasher::StableHasher;
SpanSnippetError,
};
+pub type PResult<'a, T> = Result<T, DiagnosticBuilder<'a>>;
+
+// `PResult` is used a lot. Make sure it doesn't unintentionally get bigger.
+// (See also the comment on `DiagnosticBuilderInner`.)
+#[cfg(target_arch = "x86_64")]
+static_assert_size!(PResult<'_, bool>, 16);
+
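The size assertion works because `DiagnosticBuilder` boxes its contents, so the `Err` arm of `PResult` is a single non-null pointer and the niche optimization keeps the `Result` small. The effect can be sketched with a plain `Box` (assuming a 64-bit target, as the `cfg` above does):

```rust
use std::mem::size_of;

fn main() {
    // A Box is non-null, so Result<(), Box<T>> needs no separate tag:
    // the all-zero bit pattern can encode Ok(()).
    assert_eq!(size_of::<Result<(), Box<u32>>>(), size_of::<usize>());
}
```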
/// Indicates the confidence in the correctness of a suggestion.
///
/// All suggestions are marked with an `Applicability`. Tools use the applicability of a suggestion
rustc_metadata = { path = "../librustc_metadata" }
rustc_mir = { path = "../librustc_mir" }
rustc_passes = { path = "../librustc_passes" }
-rustc_target = { path = "../librustc_target" }
rustc_typeck = { path = "../librustc_typeck" }
rustc_lint = { path = "../librustc_lint" }
rustc_errors = { path = "../librustc_errors" }
rustc_resolve = { path = "../librustc_resolve" }
tempfile = "3.0.5"
once_cell = "1"
+
+[dev-dependencies]
+rustc_target = { path = "../librustc_target" }
use std::sync::{Arc, Mutex};
use syntax::{self, parse};
use syntax::ast::{self, MetaItemKind};
-use syntax::parse::token;
+use syntax::token;
use syntax::source_map::{FileName, FileLoader, SourceMap};
use syntax::sess::ParseSess;
use syntax_pos::edition;
use rustc_codegen_utils::link::filename_for_metadata;
use rustc_data_structures::{box_region_allow_access, declare_box_region_type, parallel};
use rustc_data_structures::sync::{Lrc, ParallelIterator, par_iter};
+use rustc_errors::PResult;
use rustc_incremental;
use rustc_metadata::cstore;
use rustc_mir as mir;
use syntax::early_buffered_lints::BufferedEarlyLint;
use syntax_expand::base::{NamedSyntaxExtension, ExtCtxt};
use syntax::mut_visit::MutVisitor;
-use syntax::parse::{self, PResult};
+use syntax::parse;
use syntax::util::node_count::NodeCounter;
use syntax::symbol::Symbol;
use syntax_pos::FileName;
--- /dev/null
+use crate::lint::{LateContext, LateLintPass, LintArray, LintContext, LintPass};
+use rustc::{
+ lint::FutureIncompatibleInfo,
+ hir,
+ ty::{
+ self,
+ adjustment::{Adjust, Adjustment},
+ },
+};
+use syntax::{
+ errors::Applicability,
+ symbol::sym,
+};
+
+declare_lint! {
+ pub ARRAY_INTO_ITER,
+ Warn,
+ "detects calling `into_iter` on arrays",
+ @future_incompatible = FutureIncompatibleInfo {
+ reference: "issue #66145 <https://github.com/rust-lang/rust/issues/66145>",
+ edition: None,
+ };
+}
+
+declare_lint_pass!(
+ /// Checks for instances of calling `into_iter` on arrays.
+ ArrayIntoIter => [ARRAY_INTO_ITER]
+);
+
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for ArrayIntoIter {
+ fn check_expr(&mut self, cx: &LateContext<'a, 'tcx>, expr: &'tcx hir::Expr) {
+ // We only care about method call expressions.
+ if let hir::ExprKind::MethodCall(call, span, args) = &expr.kind {
+ if call.ident.name != sym::into_iter {
+ return;
+ }
+
+ // Check if the method call actually calls the libcore
+ // `IntoIterator::into_iter`.
+ let def_id = cx.tables.type_dependent_def_id(expr.hir_id).unwrap();
+ match cx.tcx.trait_of_item(def_id) {
+ Some(trait_id) if cx.tcx.is_diagnostic_item(sym::IntoIterator, trait_id) => {},
+ _ => return,
+ };
+
+ // As this is a method call expression, we have at least one
+ // argument.
+ let receiver_arg = &args[0];
+
+ // Test if the original `self` type is an array type.
+ match cx.tables.expr_ty(receiver_arg).kind {
+ ty::Array(..) => {}
+ _ => return,
+ }
+
+ // Make sure that the first adjustment is an autoref coercion.
+ match cx.tables.expr_adjustments(receiver_arg).get(0) {
+ Some(Adjustment { kind: Adjust::Borrow(_), .. }) => {}
+ _ => return,
+ }
+
+ // Emit lint diagnostic.
+ let target = match cx.tables.expr_ty_adjusted(receiver_arg).kind {
+ ty::Ref(_, ty::TyS { kind: ty::Array(..), ..}, _) => "[T; N]",
+ ty::Ref(_, ty::TyS { kind: ty::Slice(..), ..}, _) => "[T]",
+
+ // We know the original first argument type is an array type,
+ // we know that the first adjustment was an autoref coercion
+ // and we know that `IntoIterator` is the trait involved. The
+ // array cannot be coerced to something other than a reference
+ // to an array or to a slice.
+ _ => bug!("array type coerced to something other than array or slice"),
+ };
+ let msg = format!(
+ "this method call currently resolves to `<&{} as IntoIterator>::into_iter` (due \
+ to autoref coercions), but that might change in the future when \
+ `IntoIterator` impls for arrays are added.",
+ target,
+ );
+ cx.struct_span_lint(ARRAY_INTO_ITER, *span, &msg)
+ .span_suggestion(
+ call.ident.span,
+ "use `.iter()` instead of `.into_iter()` to avoid ambiguity",
+ "iter".into(),
+ Applicability::MachineApplicable,
+ )
+ .emit();
+ }
+ }
+}
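The ambiguity this lint guards against: on an array, `.into_iter()` currently autorefs to `<&[T; N] as IntoIterator>::into_iter`, which iterates references, the same as `.iter()`. Since `array.into_iter()` itself later changed meaning by edition, this sketch sticks to the two unambiguous spellings:

```rust
fn main() {
    let a = [1, 2, 3];
    // `(&a).into_iter()` and `a.iter()` both yield `&i32`,
    // which is why the lint suggests `.iter()` as the clear form.
    let by_ref: Vec<&i32> = (&a).into_iter().collect();
    let by_iter: Vec<&i32> = a.iter().collect();
    assert_eq!(by_ref, by_iter);
    assert_eq!(*by_ref[0], 1);
}
```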
}
}
if attr.check_name(sym::no_start) || attr.check_name(sym::crate_id) {
- let path_str = pprust::path_to_string(&attr.path);
+ let path_str = pprust::path_to_string(&attr.get_normal_item().path);
let msg = format!("use of deprecated attribute `{}`: no longer used.", path_str);
lint_deprecated_attr(cx, attr, &msg, None);
}
let mut sugared_span: Option<Span> = None;
while let Some(attr) = attrs.next() {
- if attr.is_sugared_doc {
+ if attr.is_doc_comment() {
sugared_span = Some(
sugared_span.map_or_else(
|| attr.span,
);
}
- if attrs.peek().map(|next_attr| next_attr.is_sugared_doc).unwrap_or_default() {
+ if attrs.peek().map(|next_attr| next_attr.is_doc_comment()).unwrap_or_default() {
continue;
}
// `Invalid` represents the empty string and matches that.
const TRANSMUTE_PATH: &[Symbol] =
&[sym::core, sym::intrinsics, kw::Invalid, sym::transmute];
+ const MU_ZEROED_PATH: &[Symbol] =
+ &[sym::core, sym::mem, sym::maybe_uninit, sym::MaybeUninit, sym::zeroed];
+ const MU_UNINIT_PATH: &[Symbol] =
+ &[sym::core, sym::mem, sym::maybe_uninit, sym::MaybeUninit, sym::uninit];
if let hir::ExprKind::Call(ref path_expr, ref args) = expr.kind {
+ // Find calls to `mem::{uninitialized,zeroed}` methods.
if let hir::ExprKind::Path(ref qpath) = path_expr.kind {
let def_id = cx.tables.qpath_res(qpath, path_expr.hir_id).opt_def_id()?;
return Some(InitKind::Zeroed);
}
}
- // FIXME: Also detect `MaybeUninit::zeroed().assume_init()` and
- // `MaybeUninit::uninit().assume_init()`.
+ }
+ } else if let hir::ExprKind::MethodCall(_, _, ref args) = expr.kind {
+ // Find problematic calls to `MaybeUninit::assume_init`.
+ let def_id = cx.tables.type_dependent_def_id(expr.hir_id)?;
+ if cx.tcx.is_diagnostic_item(sym::assume_init, def_id) {
+ // This is a call to *some* method named `assume_init`.
+ // See if the `self` parameter is one of the dangerous constructors.
+ if let hir::ExprKind::Call(ref path_expr, _) = args[0].kind {
+ if let hir::ExprKind::Path(ref qpath) = path_expr.kind {
+ let def_id = cx.tables.qpath_res(qpath, path_expr.hir_id).opt_def_id()?;
+ if cx.match_def_path(def_id, MU_ZEROED_PATH) {
+ return Some(InitKind::Zeroed);
+ } else if cx.match_def_path(def_id, MU_UNINIT_PATH) {
+ return Some(InitKind::Uninit);
+ }
+ }
+ }
}
}
Adt(..) if ty.is_box() => Some((format!("`Box` must be non-null"), None)),
FnPtr(..) => Some((format!("Function pointers must be non-null"), None)),
Never => Some((format!("The never type (`!`) has no valid value"), None)),
+ RawPtr(tm) if matches!(tm.ty.kind, Dynamic(..)) => // raw ptr to dyn Trait
+ Some((format!("The vtable of a wide raw pointer must be non-null"), None)),
// Primitive types with other constraints.
Bool if init == InitKind::Uninit =>
Some((format!("Booleans must be `true` or `false`"), None)),
);
err.span_label(expr.span,
"this code causes undefined behavior when executed");
- err.span_label(expr.span, "help: use `MaybeUninit<T>` instead");
+ err.span_label(expr.span, "help: use `MaybeUninit<T>` instead, \
+ and only call `assume_init` after initialization is done");
if let Some(span) = span {
err.span_note(span, &msg);
} else {
#![feature(box_patterns)]
#![feature(box_syntax)]
#![feature(nll)]
+#![feature(matches_macro)]
#![recursion_limit="256"]
#[macro_use]
extern crate rustc;
+mod array_into_iter;
mod error_codes;
mod nonstandard_style;
mod redundant_semicolon;
use unused::*;
use non_ascii_idents::*;
use rustc::lint::internal::*;
+use array_into_iter::ArrayIntoIter;
/// Useful for other parts of the compiler.
pub use builtin::SoftLints;
// FIXME: Turn the computation of types which implement Debug into a query
// and change this to a module lint pass
MissingDebugImplementations: MissingDebugImplementations::default(),
+
+ ArrayIntoIter: ArrayIntoIter,
]);
)
}
use crate::hir::def_id::DefId;
use rustc::hir::lowering::is_range_literal;
use rustc::ty::subst::SubstsRef;
-use rustc::ty::{self, AdtKind, ParamEnv, Ty, TyCtxt};
+use rustc::ty::{self, AdtKind, ParamEnv, Ty, TyCtxt, TypeFoldable};
use rustc::ty::layout::{self, IntegerExt, LayoutOf, VariantIdx, SizeSkeleton};
use rustc::{lint, util};
use rustc_index::vec::Idx;
max: u128,
expr: &'tcx hir::Expr,
parent_expr: &'tcx hir::Expr,
- ty: impl std::fmt::Debug,
+ ty: &str,
) -> bool {
// We only want to handle exclusive (`..`) ranges,
// which are represented as `ExprKind::Struct`.
let mut err = cx.struct_span_lint(
OVERFLOWING_LITERALS,
parent_expr.span,
- &format!("range endpoint is out of range for `{:?}`", ty),
+ &format!("range endpoint is out of range for `{}`", ty),
);
if let Ok(start) = cx.sess().source_map().span_to_snippet(eps[0].span) {
use ast::{LitKind, LitIntType};
// We need to preserve the literal's suffix,
// as it may determine typing information.
let suffix = match lit.node {
- LitKind::Int(_, LitIntType::Signed(s)) => format!("{}", s),
- LitKind::Int(_, LitIntType::Unsigned(s)) => format!("{}", s),
+ LitKind::Int(_, LitIntType::Signed(s)) => format!("{}", s.name_str()),
+ LitKind::Int(_, LitIntType::Unsigned(s)) => format!("{}", s.name_str()),
LitKind::Int(_, LitIntType::Unsuffixed) => "".to_owned(),
_ => bug!(),
};
let (t, actually) = match ty {
attr::IntType::SignedInt(t) => {
let actually = sign_extend(val, size) as i128;
- (format!("{:?}", t), actually.to_string())
+ (t.name_str(), actually.to_string())
}
attr::IntType::UnsignedInt(t) => {
let actually = truncate(val, size);
- (format!("{:?}", t), actually.to_string())
+ (t.name_str(), actually.to_string())
}
};
let mut err = cx.struct_span_lint(
// - `uX` => `uY`
//
// No suggestion for: `isize`, `usize`.
-fn get_type_suggestion(t: Ty<'_>, val: u128, negative: bool) -> Option<String> {
+fn get_type_suggestion(t: Ty<'_>, val: u128, negative: bool) -> Option<&'static str> {
use syntax::ast::IntTy::*;
use syntax::ast::UintTy::*;
macro_rules! find_fit {
match $ty {
$($type => {
$(if !negative && val <= uint_ty_range($utypes).1 {
- return Some(format!("{:?}", $utypes))
+ return Some($utypes.name_str())
})*
$(if val <= int_ty_range($itypes).1 as u128 + _neg {
- return Some(format!("{:?}", $itypes))
+ return Some($itypes.name_str())
})*
None
},)+
if let Node::Expr(par_e) = cx.tcx.hir().get(par_id) {
if let hir::ExprKind::Struct(..) = par_e.kind {
if is_range_literal(cx.sess(), par_e)
- && lint_overflowing_range_endpoint(cx, lit, v, max, e, par_e, t)
+ && lint_overflowing_range_endpoint(cx, lit, v, max, e, par_e, t.name_str())
{
// The overflowing literal lint was overridden.
return;
cx.span_lint(
OVERFLOWING_LITERALS,
e.span,
- &format!("literal out of range for `{:?}`", t),
+ &format!("literal out of range for `{}`", t.name_str()),
);
}
}
}
hir::ExprKind::Struct(..)
if is_range_literal(cx.sess(), par_e) => {
+ let t = t.name_str();
if lint_overflowing_range_endpoint(cx, lit, lit_val, max, e, par_e, t) {
// The overflowing literal lint was overridden.
return;
cx.span_lint(
OVERFLOWING_LITERALS,
e.span,
- &format!("literal out of range for `{:?}`", t),
+ &format!("literal out of range for `{}`", t.name_str()),
);
}
}
}
ty::Float(t) => {
let is_infinite = match lit.node {
- ast::LitKind::Float(v, _) |
- ast::LitKind::FloatUnsuffixed(v) => {
+ ast::LitKind::Float(v, _) => {
match t {
ast::FloatTy::F32 => v.as_str().parse().map(f32::is_infinite),
ast::FloatTy::F64 => v.as_str().parse().map(f64::is_infinite),
_ => bug!(),
};
if is_infinite == Ok(true) {
- cx.span_lint(OVERFLOWING_LITERALS,
- e.span,
- &format!("literal out of range for `{:?}`", t));
+ cx.span_lint(
+ OVERFLOWING_LITERALS,
+ e.span,
+ &format!("literal out of range for `{}`", t.name_str()),
+ );
}
}
_ => {}
ty::Array(ty, _) => self.check_type_for_ffi(cache, ty),
ty::FnPtr(sig) => {
- match sig.abi() {
- Abi::Rust | Abi::RustIntrinsic | Abi::PlatformIntrinsic | Abi::RustCall => {
- return FfiUnsafe {
- ty,
- reason: "this function pointer has Rust-specific calling convention",
- help: Some("consider using an `extern fn(...) -> ...` \
- function pointer instead"),
- }
- }
- _ => {}
+ if self.is_internal_abi(sig.abi()) {
+ return FfiUnsafe {
+ ty,
+ reason: "this function pointer has Rust-specific calling convention",
+ help: Some("consider using an `extern fn(...) -> ...` \
+ function pointer instead"),
+ };
}
let sig = cx.erase_late_bound_regions(&sig);
ty::Foreign(..) => FfiSafe,
- ty::Param(..) |
+ // `extern "C" fn` functions can have type parameters, which may or may not be FFI-safe,
+ // so they are currently ignored for the purposes of this lint; see #65134.
+ ty::Param(..) | ty::Projection(..) => FfiSafe,
+
ty::Infer(..) |
ty::Bound(..) |
ty::Error |
ty::GeneratorWitness(..) |
ty::Placeholder(..) |
ty::UnnormalizedProjection(..) |
- ty::Projection(..) |
ty::Opaque(..) |
ty::FnDef(..) => bug!("unexpected type in foreign function: {:?}", ty),
}
sp: Span,
note: &str,
help: Option<&str>,
+ is_foreign_item: bool,
) {
let mut diag = self.cx.struct_span_lint(
IMPROPER_CTYPES,
sp,
- &format!("`extern` block uses type `{}`, which is not FFI-safe", ty),
+ &format!(
+ "`extern` {} uses type `{}`, which is not FFI-safe",
+ if is_foreign_item { "block" } else { "fn" },
+ ty,
+ ),
);
diag.span_label(sp, "not FFI-safe");
if let Some(help) = help {
diag.emit();
}
- fn check_for_opaque_ty(&mut self, sp: Span, ty: Ty<'tcx>) -> bool {
- use crate::rustc::ty::TypeFoldable;
-
+ fn check_for_opaque_ty(&mut self, sp: Span, ty: Ty<'tcx>, is_foreign_item: bool) -> bool {
struct ProhibitOpaqueTypes<'tcx> {
ty: Option<Ty<'tcx>>,
};
sp,
"opaque types have no C equivalent",
None,
+ is_foreign_item,
);
true
} else {
}
}
- fn check_type_for_ffi_and_report_errors(&mut self, sp: Span, ty: Ty<'tcx>) {
+ fn check_type_for_ffi_and_report_errors(
+ &mut self,
+ sp: Span,
+ ty: Ty<'tcx>,
+ is_foreign_item: bool,
+ ) {
// We have to check for opaque types before `normalize_erasing_regions`,
// which will replace opaque types with their underlying concrete type.
- if self.check_for_opaque_ty(sp, ty) {
+ if self.check_for_opaque_ty(sp, ty, is_foreign_item) {
// We've already emitted an error due to an opaque type.
return;
}
- // it is only OK to use this function because extern fns cannot have
- // any generic types right now:
- let ty = self.cx.tcx.normalize_erasing_regions(ParamEnv::reveal_all(), ty);
-
+ let ty = self.cx.tcx.normalize_erasing_regions(self.cx.param_env, ty);
match self.check_type_for_ffi(&mut FxHashSet::default(), ty) {
FfiResult::FfiSafe => {}
FfiResult::FfiPhantom(ty) => {
- self.emit_ffi_unsafe_type_lint(ty, sp, "composed only of `PhantomData`", None);
+ self.emit_ffi_unsafe_type_lint(
+ ty, sp, "composed only of `PhantomData`", None, is_foreign_item);
}
FfiResult::FfiUnsafe { ty, reason, help } => {
- self.emit_ffi_unsafe_type_lint(ty, sp, reason, help);
+ self.emit_ffi_unsafe_type_lint(
+ ty, sp, reason, help, is_foreign_item);
}
}
}
- fn check_foreign_fn(&mut self, id: hir::HirId, decl: &hir::FnDecl) {
+ fn check_foreign_fn(&mut self, id: hir::HirId, decl: &hir::FnDecl, is_foreign_item: bool) {
let def_id = self.cx.tcx.hir().local_def_id(id);
let sig = self.cx.tcx.fn_sig(def_id);
let sig = self.cx.tcx.erase_late_bound_regions(&sig);
for (input_ty, input_hir) in sig.inputs().iter().zip(&decl.inputs) {
- self.check_type_for_ffi_and_report_errors(input_hir.span, input_ty);
+ self.check_type_for_ffi_and_report_errors(input_hir.span, input_ty, is_foreign_item);
}
if let hir::Return(ref ret_hir) = decl.output {
let ret_ty = sig.output();
if !ret_ty.is_unit() {
- self.check_type_for_ffi_and_report_errors(ret_hir.span, ret_ty);
+ self.check_type_for_ffi_and_report_errors(ret_hir.span, ret_ty, is_foreign_item);
}
}
}
fn check_foreign_static(&mut self, id: hir::HirId, span: Span) {
let def_id = self.cx.tcx.hir().local_def_id(id);
let ty = self.cx.tcx.type_of(def_id);
- self.check_type_for_ffi_and_report_errors(span, ty);
+ self.check_type_for_ffi_and_report_errors(span, ty, true);
+ }
+
+ fn is_internal_abi(&self, abi: Abi) -> bool {
+ matches!(abi, Abi::Rust | Abi::RustCall | Abi::RustIntrinsic | Abi::PlatformIntrinsic)
}
}
fn check_foreign_item(&mut self, cx: &LateContext<'_, '_>, it: &hir::ForeignItem) {
let mut vis = ImproperCTypesVisitor { cx };
let abi = cx.tcx.hir().get_foreign_abi(it.hir_id);
- if let Abi::Rust | Abi::RustCall | Abi::RustIntrinsic | Abi::PlatformIntrinsic = abi {
- // Don't worry about types in internal ABIs.
- } else {
+ if !vis.is_internal_abi(abi) {
match it.kind {
hir::ForeignItemKind::Fn(ref decl, _, _) => {
- vis.check_foreign_fn(it.hir_id, decl);
+ vis.check_foreign_fn(it.hir_id, decl, true);
}
hir::ForeignItemKind::Static(ref ty, _) => {
vis.check_foreign_static(it.hir_id, ty.span);
}
}
}
+
+ fn check_fn(
+ &mut self,
+ cx: &LateContext<'a, 'tcx>,
+ kind: hir::intravisit::FnKind<'tcx>,
+ decl: &'tcx hir::FnDecl,
+ _: &'tcx hir::Body,
+ _: Span,
+ hir_id: hir::HirId,
+ ) {
+ use hir::intravisit::FnKind;
+
+ let abi = match kind {
+ FnKind::ItemFn(_, _, header, ..) => header.abi,
+ FnKind::Method(_, sig, ..) => sig.header.abi,
+ _ => return,
+ };
+
+ let mut vis = ImproperCTypesVisitor { cx };
+ if !vis.is_internal_abi(abi) {
+ vis.check_foreign_fn(hir_id, decl, false);
+ }
+ }
}
declare_lint_pass!(VariantSizeDifferences => [VARIANT_SIZE_DIFFERENCES]);
use crate::cstore::{self, CStore, MetadataBlob};
use crate::locator::{self, CratePaths};
-use crate::schema::{CrateRoot, CrateDep};
+use crate::rmeta::{CrateRoot, CrateDep};
use rustc_data_structures::sync::{Lock, Once, AtomicCell};
use rustc::hir::def_id::CrateNum;
// The crate store - a central repo for information collected about external
// crates and libraries
-use crate::schema;
+use crate::rmeta;
use rustc::dep_graph::DepNodeIndex;
use rustc::hir::def_id::{CrateNum, DefIndex};
use rustc::hir::map::definitions::DefPathTable;
use syntax_pos;
use proc_macro::bridge::client::ProcMacro;
-pub use crate::cstore_impl::{provide, provide_extern};
+pub use crate::rmeta::{provide, provide_extern};
// A map from external crate numbers (as decoded from some crate file) to
// local crate numbers (as generated during this session). Each external
/// lifetime is only used behind `Lazy`, and therefore acts like an
/// universal (`for<'tcx>`), that is paired up with whichever `TyCtxt`
/// is being used to decode those values.
- crate root: schema::CrateRoot<'static>,
+ crate root: rmeta::CrateRoot<'static>,
/// For each definition in this crate, we encode a key. When the
/// crate is loaded, we read all the keys and put them in this
/// hashmap, which gives the reverse mapping. This allows us to
/// Trait impl data.
/// FIXME: Used only from queries and can use query cache,
/// so pre-decoding can probably be avoided.
- crate trait_impls: FxHashMap<(u32, DefIndex), schema::Lazy<[DefIndex]>>,
+ crate trait_impls: FxHashMap<(u32, DefIndex), rmeta::Lazy<[DefIndex]>>,
/// Proc macro descriptions for this crate, if it's a proc macro crate.
crate raw_proc_macros: Option<&'static [ProcMacro]>,
/// Source maps for code from the crate.
+++ /dev/null
-use crate::cstore::{self, LoadedMacro};
-use crate::encoder;
-use crate::link_args;
-use crate::native_libs;
-use crate::foreign_modules;
-use crate::schema;
-
-use rustc::ty::query::QueryConfig;
-use rustc::middle::cstore::{CrateSource, CrateStore, DepKind, EncodedMetadata, NativeLibraryKind};
-use rustc::middle::exported_symbols::ExportedSymbol;
-use rustc::middle::stability::DeprecationEntry;
-use rustc::hir::def;
-use rustc::hir;
-use rustc::session::{CrateDisambiguator, Session};
-use rustc::ty::{self, TyCtxt};
-use rustc::ty::query::Providers;
-use rustc::hir::def_id::{CrateNum, DefId, LOCAL_CRATE, CRATE_DEF_INDEX};
-use rustc::hir::map::{DefKey, DefPath, DefPathHash};
-use rustc::hir::map::definitions::DefPathTable;
-use rustc::util::nodemap::DefIdMap;
-use rustc_data_structures::svh::Svh;
-
-use smallvec::SmallVec;
-use std::any::Any;
-use rustc_data_structures::sync::Lrc;
-use std::sync::Arc;
-
-use syntax::ast;
-use syntax::attr;
-use syntax::source_map;
-use syntax::parse::source_file_to_stream;
-use syntax::parse::parser::emit_unclosed_delims;
-use syntax::source_map::Spanned;
-use syntax::symbol::Symbol;
-use syntax_pos::{Span, FileName};
-use rustc_index::bit_set::BitSet;
-
-macro_rules! provide {
- (<$lt:tt> $tcx:ident, $def_id:ident, $other:ident, $cdata:ident,
- $($name:ident => $compute:block)*) => {
- pub fn provide_extern<$lt>(providers: &mut Providers<$lt>) {
- // HACK(eddyb) `$lt: $lt` forces `$lt` to be early-bound, which
- // allows the associated type in the return type to be normalized.
- $(fn $name<$lt: $lt, T: IntoArgs>(
- $tcx: TyCtxt<$lt>,
- def_id_arg: T,
- ) -> <ty::queries::$name<$lt> as QueryConfig<$lt>>::Value {
- let _prof_timer =
- $tcx.prof.generic_activity("metadata_decode_entry");
-
- #[allow(unused_variables)]
- let ($def_id, $other) = def_id_arg.into_args();
- assert!(!$def_id.is_local());
-
- let $cdata = $tcx.crate_data_as_any($def_id.krate);
- let $cdata = $cdata.downcast_ref::<cstore::CrateMetadata>()
- .expect("CrateStore created data is not a CrateMetadata");
-
- if $tcx.dep_graph.is_fully_enabled() {
- let crate_dep_node_index = $cdata.get_crate_dep_node_index($tcx);
- $tcx.dep_graph.read_index(crate_dep_node_index);
- }
-
- $compute
- })*
-
- *providers = Providers {
- $($name,)*
- ..*providers
- };
- }
- }
-}
-
-// small trait to work around different signature queries all being defined via
-// the macro above.
-trait IntoArgs {
- fn into_args(self) -> (DefId, DefId);
-}
-
-impl IntoArgs for DefId {
- fn into_args(self) -> (DefId, DefId) { (self, self) }
-}
-
-impl IntoArgs for CrateNum {
- fn into_args(self) -> (DefId, DefId) { (self.as_def_id(), self.as_def_id()) }
-}
-
-impl IntoArgs for (CrateNum, DefId) {
- fn into_args(self) -> (DefId, DefId) { (self.0.as_def_id(), self.1) }
-}
-
-provide! { <'tcx> tcx, def_id, other, cdata,
- type_of => { cdata.get_type(def_id.index, tcx) }
- generics_of => {
- tcx.arena.alloc(cdata.get_generics(def_id.index, tcx.sess))
- }
- predicates_of => { cdata.get_predicates(def_id.index, tcx) }
- predicates_defined_on => { cdata.get_predicates_defined_on(def_id.index, tcx) }
- super_predicates_of => { cdata.get_super_predicates(def_id.index, tcx) }
- trait_def => {
- tcx.arena.alloc(cdata.get_trait_def(def_id.index, tcx.sess))
- }
- adt_def => { cdata.get_adt_def(def_id.index, tcx) }
- adt_destructor => {
- let _ = cdata;
- tcx.calculate_dtor(def_id, &mut |_,_| Ok(()))
- }
- variances_of => { tcx.arena.alloc_from_iter(cdata.get_item_variances(def_id.index)) }
- associated_item_def_ids => {
- let mut result = SmallVec::<[_; 8]>::new();
- cdata.each_child_of_item(def_id.index,
- |child| result.push(child.res.def_id()), tcx.sess);
- tcx.arena.alloc_slice(&result)
- }
- associated_item => { cdata.get_associated_item(def_id.index) }
- impl_trait_ref => { cdata.get_impl_trait(def_id.index, tcx) }
- impl_polarity => { cdata.get_impl_polarity(def_id.index) }
- coerce_unsized_info => {
- cdata.get_coerce_unsized_info(def_id.index).unwrap_or_else(|| {
- bug!("coerce_unsized_info: `{:?}` is missing its info", def_id);
- })
- }
- optimized_mir => { tcx.arena.alloc(cdata.get_optimized_mir(tcx, def_id.index)) }
- promoted_mir => { tcx.arena.alloc(cdata.get_promoted_mir(tcx, def_id.index)) }
- mir_const_qualif => {
- (cdata.mir_const_qualif(def_id.index), tcx.arena.alloc(BitSet::new_empty(0)))
- }
- fn_sig => { cdata.fn_sig(def_id.index, tcx) }
- inherent_impls => { cdata.get_inherent_implementations_for_type(tcx, def_id.index) }
- is_const_fn_raw => { cdata.is_const_fn_raw(def_id.index) }
- asyncness => { cdata.asyncness(def_id.index) }
- is_foreign_item => { cdata.is_foreign_item(def_id.index) }
- static_mutability => { cdata.static_mutability(def_id.index) }
- def_kind => { cdata.def_kind(def_id.index) }
- def_span => { cdata.get_span(def_id.index, &tcx.sess) }
- lookup_stability => {
- cdata.get_stability(def_id.index).map(|s| tcx.intern_stability(s))
- }
- lookup_deprecation_entry => {
- cdata.get_deprecation(def_id.index).map(DeprecationEntry::external)
- }
- item_attrs => { cdata.get_item_attrs(def_id.index, tcx.sess) }
- // FIXME(#38501) We've skipped a `read` on the `HirBody` of
- // a `fn` when encoding, so the dep-tracking wouldn't work.
- // This is only used by rustdoc anyway, which shouldn't have
- // incremental recompilation ever enabled.
- fn_arg_names => { cdata.get_fn_param_names(def_id.index) }
- rendered_const => { cdata.get_rendered_const(def_id.index) }
- impl_parent => { cdata.get_parent_impl(def_id.index) }
- trait_of_item => { cdata.get_trait_of_item(def_id.index) }
- is_mir_available => { cdata.is_item_mir_available(def_id.index) }
-
- dylib_dependency_formats => { cdata.get_dylib_dependency_formats(tcx) }
- is_panic_runtime => { cdata.root.panic_runtime }
- is_compiler_builtins => { cdata.root.compiler_builtins }
- has_global_allocator => { cdata.root.has_global_allocator }
- has_panic_handler => { cdata.root.has_panic_handler }
- is_sanitizer_runtime => { cdata.root.sanitizer_runtime }
- is_profiler_runtime => { cdata.root.profiler_runtime }
- panic_strategy => { cdata.root.panic_strategy }
- extern_crate => {
- let r = *cdata.extern_crate.lock();
- r.map(|c| &*tcx.arena.alloc(c))
- }
- is_no_builtins => { cdata.root.no_builtins }
- symbol_mangling_version => { cdata.root.symbol_mangling_version }
- impl_defaultness => { cdata.get_impl_defaultness(def_id.index) }
- reachable_non_generics => {
- let reachable_non_generics = tcx
- .exported_symbols(cdata.cnum)
- .iter()
- .filter_map(|&(exported_symbol, export_level)| {
- if let ExportedSymbol::NonGeneric(def_id) = exported_symbol {
- return Some((def_id, export_level))
- } else {
- None
- }
- })
- .collect();
-
- tcx.arena.alloc(reachable_non_generics)
- }
- native_libraries => { Lrc::new(cdata.get_native_libraries(tcx.sess)) }
- foreign_modules => { cdata.get_foreign_modules(tcx) }
- plugin_registrar_fn => {
- cdata.root.plugin_registrar_fn.map(|index| {
- DefId { krate: def_id.krate, index }
- })
- }
- proc_macro_decls_static => {
- cdata.root.proc_macro_decls_static.map(|index| {
- DefId { krate: def_id.krate, index }
- })
- }
- crate_disambiguator => { cdata.root.disambiguator }
- crate_hash => { cdata.root.hash }
- original_crate_name => { cdata.root.name }
-
- extra_filename => { cdata.root.extra_filename.clone() }
-
- implementations_of_trait => {
- cdata.get_implementations_for_trait(tcx, Some(other))
- }
-
- all_trait_implementations => {
- cdata.get_implementations_for_trait(tcx, None)
- }
-
- visibility => { cdata.get_visibility(def_id.index) }
- dep_kind => {
- let r = *cdata.dep_kind.lock();
- r
- }
- crate_name => { cdata.root.name }
- item_children => {
- let mut result = SmallVec::<[_; 8]>::new();
- cdata.each_child_of_item(def_id.index, |child| result.push(child), tcx.sess);
- tcx.arena.alloc_slice(&result)
- }
- defined_lib_features => { cdata.get_lib_features(tcx) }
- defined_lang_items => { cdata.get_lang_items(tcx) }
- diagnostic_items => { cdata.get_diagnostic_items(tcx) }
- missing_lang_items => { cdata.get_missing_lang_items(tcx) }
-
- missing_extern_crate_item => {
- let r = match *cdata.extern_crate.borrow() {
- Some(extern_crate) if !extern_crate.is_direct() => true,
- _ => false,
- };
- r
- }
-
- used_crate_source => { Lrc::new(cdata.source.clone()) }
-
- exported_symbols => {
- let syms = cdata.exported_symbols(tcx);
-
- // FIXME rust-lang/rust#64319, rust-lang/rust#64872: We want
- // to block export of generics from dylibs, but we must fix
- // rust-lang/rust#65890 before we can do that robustly.
-
- Arc::new(syms)
- }
-}
-
-pub fn provide(providers: &mut Providers<'_>) {
- // FIXME(#44234) - almost all of these queries have no sub-queries and
- // therefore no actual inputs, they're just reading tables calculated in
- // resolve! Does this work? Unsure! That's what the issue is about
- *providers = Providers {
- is_dllimport_foreign_item: |tcx, id| {
- match tcx.native_library_kind(id) {
- Some(NativeLibraryKind::NativeUnknown) |
- Some(NativeLibraryKind::NativeRawDylib) => true,
- _ => false,
- }
- },
- is_statically_included_foreign_item: |tcx, id| {
- match tcx.native_library_kind(id) {
- Some(NativeLibraryKind::NativeStatic) |
- Some(NativeLibraryKind::NativeStaticNobundle) => true,
- _ => false,
- }
- },
- native_library_kind: |tcx, id| {
- tcx.native_libraries(id.krate)
- .iter()
- .filter(|lib| native_libs::relevant_lib(&tcx.sess, lib))
- .find(|lib| {
- let fm_id = match lib.foreign_module {
- Some(id) => id,
- None => return false,
- };
- tcx.foreign_modules(id.krate)
- .iter()
- .find(|m| m.def_id == fm_id)
- .expect("failed to find foreign module")
- .foreign_items
- .contains(&id)
- })
- .map(|l| l.kind)
- },
- native_libraries: |tcx, cnum| {
- assert_eq!(cnum, LOCAL_CRATE);
- Lrc::new(native_libs::collect(tcx))
- },
- foreign_modules: |tcx, cnum| {
- assert_eq!(cnum, LOCAL_CRATE);
- &tcx.arena.alloc(foreign_modules::collect(tcx))[..]
- },
- link_args: |tcx, cnum| {
- assert_eq!(cnum, LOCAL_CRATE);
- Lrc::new(link_args::collect(tcx))
- },
-
- // Returns a map from a sufficiently visible external item (i.e., an
- // external item that is visible from at least one local module) to a
- // sufficiently visible parent (considering modules that re-export the
- // external item to be parents).
- visible_parent_map: |tcx, cnum| {
- use std::collections::vec_deque::VecDeque;
- use std::collections::hash_map::Entry;
-
- assert_eq!(cnum, LOCAL_CRATE);
- let mut visible_parent_map: DefIdMap<DefId> = Default::default();
-
- // Issue 46112: We want the map to prefer the shortest
- // paths when reporting the path to an item. Therefore we
- // build up the map via a breadth-first search (BFS),
- // which naturally yields minimal-length paths.
- //
- // Note that it needs to be a BFS over the whole forest of
- // crates, not just each individual crate; otherwise you
- // only get paths that are locally minimal with respect to
- // whatever crate we happened to encounter first in this
- // traversal, but not globally minimal across all crates.
- let bfs_queue = &mut VecDeque::new();
-
- // Preferring shortest paths alone does not guarantee a
- // deterministic result; so sort by crate num to avoid
- // hashtable iteration non-determinism. This only makes
- // things as deterministic as crate-nums assignment is,
- // which is to say, its not deterministic in general. But
- // we believe that libstd is consistently assigned crate
- // num 1, so it should be enough to resolve #46112.
- let mut crates: Vec<CrateNum> = (*tcx.crates()).to_owned();
- crates.sort();
-
- for &cnum in crates.iter() {
- // Ignore crates without a corresponding local `extern crate` item.
- if tcx.missing_extern_crate_item(cnum) {
- continue
- }
-
- bfs_queue.push_back(DefId {
- krate: cnum,
- index: CRATE_DEF_INDEX
- });
- }
-
- // (restrict scope of mutable-borrow of `visible_parent_map`)
- {
- let visible_parent_map = &mut visible_parent_map;
- let mut add_child = |bfs_queue: &mut VecDeque<_>,
- child: &def::Export<hir::HirId>,
- parent: DefId| {
- if child.vis != ty::Visibility::Public {
- return;
- }
-
- if let Some(child) = child.res.opt_def_id() {
- match visible_parent_map.entry(child) {
- Entry::Occupied(mut entry) => {
- // If `child` is defined in crate `cnum`, ensure
- // that it is mapped to a parent in `cnum`.
- if child.krate == cnum && entry.get().krate != cnum {
- entry.insert(parent);
- }
- }
- Entry::Vacant(entry) => {
- entry.insert(parent);
- bfs_queue.push_back(child);
- }
- }
- }
- };
-
- while let Some(def) = bfs_queue.pop_front() {
- for child in tcx.item_children(def).iter() {
- add_child(bfs_queue, child, def);
- }
- }
- }
-
- tcx.arena.alloc(visible_parent_map)
- },
-
- dependency_formats: |tcx, cnum| {
- assert_eq!(cnum, LOCAL_CRATE);
- Lrc::new(crate::dependency_format::calculate(tcx))
- },
-
- ..*providers
- };
-}
-
-impl cstore::CStore {
- pub fn export_macros_untracked(&self, cnum: CrateNum) {
- let data = self.get_crate_data(cnum);
- let mut dep_kind = data.dep_kind.lock();
- if *dep_kind == DepKind::UnexportedMacrosOnly {
- *dep_kind = DepKind::MacrosOnly;
- }
- }
-
- pub fn struct_field_names_untracked(&self, def: DefId, sess: &Session) -> Vec<Spanned<Symbol>> {
- self.get_crate_data(def.krate).get_struct_field_names(def.index, sess)
- }
-
- pub fn item_children_untracked(
- &self,
- def_id: DefId,
- sess: &Session
- ) -> Vec<def::Export<hir::HirId>> {
- let mut result = vec![];
- self.get_crate_data(def_id.krate)
- .each_child_of_item(def_id.index, |child| result.push(child), sess);
- result
- }
-
- pub fn load_macro_untracked(&self, id: DefId, sess: &Session) -> LoadedMacro {
- let _prof_timer = sess.prof.generic_activity("metadata_load_macro");
-
- let data = self.get_crate_data(id.krate);
- if data.is_proc_macro_crate() {
- return LoadedMacro::ProcMacro(data.load_proc_macro(id.index, sess));
- }
-
- let def = data.get_macro(id.index);
- let macro_full_name = data.def_path(id.index).to_string_friendly(|_| data.root.name);
- let source_name = FileName::Macros(macro_full_name);
-
- let source_file = sess.parse_sess.source_map().new_source_file(source_name, def.body);
- let local_span = Span::with_root_ctxt(source_file.start_pos, source_file.end_pos);
- let (body, mut errors) = source_file_to_stream(&sess.parse_sess, source_file, None);
- emit_unclosed_delims(&mut errors, &sess.parse_sess);
-
- // Mark the attrs as used
- let attrs = data.get_item_attrs(id.index, sess);
- for attr in attrs.iter() {
- attr::mark_used(attr);
- }
-
- let name = data.def_key(id.index).disambiguated_data.data
- .get_opt_name().expect("no name in load_macro");
- sess.imported_macro_spans.borrow_mut()
- .insert(local_span, (name.to_string(), data.get_span(id.index, sess)));
-
- LoadedMacro::MacroDef(ast::Item {
- // FIXME: cross-crate hygiene
- ident: ast::Ident::with_dummy_span(name),
- id: ast::DUMMY_NODE_ID,
- span: local_span,
- attrs: attrs.iter().cloned().collect(),
- kind: ast::ItemKind::MacroDef(ast::MacroDef {
- tokens: body.into(),
- legacy: def.legacy,
- }),
- vis: source_map::respan(local_span.shrink_to_lo(), ast::VisibilityKind::Inherited),
- tokens: None,
- }, data.root.edition)
- }
-
- pub fn associated_item_cloned_untracked(&self, def: DefId) -> ty::AssocItem {
- self.get_crate_data(def.krate).get_associated_item(def.index)
- }
-
- pub fn crate_source_untracked(&self, cnum: CrateNum) -> CrateSource {
- self.get_crate_data(cnum).source.clone()
- }
-}
-
-impl CrateStore for cstore::CStore {
- fn crate_data_as_any(&self, cnum: CrateNum) -> &dyn Any {
- self.get_crate_data(cnum)
- }
-
- fn item_generics_cloned_untracked(&self, def: DefId, sess: &Session) -> ty::Generics {
- self.get_crate_data(def.krate).get_generics(def.index, sess)
- }
-
- fn crate_name_untracked(&self, cnum: CrateNum) -> Symbol
- {
- self.get_crate_data(cnum).root.name
- }
-
- fn crate_is_private_dep_untracked(&self, cnum: CrateNum) -> bool {
- self.get_crate_data(cnum).private_dep
- }
-
- fn crate_disambiguator_untracked(&self, cnum: CrateNum) -> CrateDisambiguator
- {
- self.get_crate_data(cnum).root.disambiguator
- }
-
- fn crate_hash_untracked(&self, cnum: CrateNum) -> Svh
- {
- self.get_crate_data(cnum).root.hash
- }
-
- fn crate_host_hash_untracked(&self, cnum: CrateNum) -> Option<Svh> {
- self.get_crate_data(cnum).host_hash
- }
-
- /// Returns the `DefKey` for a given `DefId`. This indicates the
- /// parent `DefId` as well as some idea of what kind of data the
- /// `DefId` refers to.
- fn def_key(&self, def: DefId) -> DefKey {
- self.get_crate_data(def.krate).def_key(def.index)
- }
-
- fn def_path(&self, def: DefId) -> DefPath {
- self.get_crate_data(def.krate).def_path(def.index)
- }
-
- fn def_path_hash(&self, def: DefId) -> DefPathHash {
- self.get_crate_data(def.krate).def_path_hash(def.index)
- }
-
- fn def_path_table(&self, cnum: CrateNum) -> &DefPathTable {
- &self.get_crate_data(cnum).def_path_table
- }
-
- fn crates_untracked(&self) -> Vec<CrateNum>
- {
- let mut result = vec![];
- self.iter_crate_data(|cnum, _| result.push(cnum));
- result
- }
-
- fn postorder_cnums_untracked(&self) -> Vec<CrateNum> {
- self.do_postorder_cnums_untracked()
- }
-
- fn encode_metadata(&self, tcx: TyCtxt<'_>) -> EncodedMetadata {
- encoder::encode_metadata(tcx)
- }
-
- fn metadata_encoding_version(&self) -> &[u8]
- {
- schema::METADATA_HEADER
- }
-}
+++ /dev/null
-// Decoding metadata from a single crate's metadata
-
-use crate::cstore::{self, CrateMetadata, MetadataBlob};
-use crate::schema::*;
-use crate::table::{FixedSizeEncoding, PerDefTable};
-
-use rustc_index::vec::IndexVec;
-use rustc_data_structures::sync::Lrc;
-use rustc::hir::map::{DefKey, DefPath, DefPathData, DefPathHash};
-use rustc::hir;
-use rustc::middle::cstore::{LinkagePreference, NativeLibrary, ForeignModule};
-use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel};
-use rustc::hir::def::{self, Res, DefKind, CtorOf, CtorKind};
-use rustc::hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX, LOCAL_CRATE};
-use rustc_data_structures::fingerprint::Fingerprint;
-use rustc_data_structures::fx::FxHashMap;
-use rustc::dep_graph::{DepNodeIndex, DepKind};
-use rustc::middle::lang_items;
-use rustc::mir::{self, interpret};
-use rustc::mir::interpret::AllocDecodingSession;
-use rustc::session::Session;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc::ty::codec::TyDecoder;
-use rustc::mir::{Body, Promoted};
-use rustc::util::captures::Captures;
-
-use std::io;
-use std::mem;
-use std::num::NonZeroUsize;
-use std::u32;
-
-use rustc_serialize::{Decodable, Decoder, Encodable, SpecializedDecoder, opaque};
-use syntax::attr;
-use syntax::ast::{self, Ident};
-use syntax::source_map::{self, respan, Spanned};
-use syntax_expand::base::{SyntaxExtensionKind, SyntaxExtension};
-use syntax_expand::proc_macro::{AttrProcMacro, ProcMacroDerive, BangProcMacro};
-use syntax_pos::{self, Span, BytePos, Pos, DUMMY_SP, hygiene::MacroKind};
-use syntax_pos::symbol::{Symbol, sym};
-use log::debug;
-use proc_macro::bridge::client::ProcMacro;
-
-crate struct DecodeContext<'a, 'tcx> {
- opaque: opaque::Decoder<'a>,
- cdata: Option<&'a CrateMetadata>,
- sess: Option<&'tcx Session>,
- tcx: Option<TyCtxt<'tcx>>,
-
- // Cache the last used source_file for translating spans as an optimization.
- last_source_file_index: usize,
-
- lazy_state: LazyState,
-
- // Used for decoding interpret::AllocIds in a cached & thread-safe manner.
- alloc_decoding_session: Option<AllocDecodingSession<'a>>,
-}
-
-/// Abstract over the various ways one can create metadata decoders.
-crate trait Metadata<'a, 'tcx>: Copy {
- fn raw_bytes(self) -> &'a [u8];
- fn cdata(self) -> Option<&'a CrateMetadata> { None }
- fn sess(self) -> Option<&'tcx Session> { None }
- fn tcx(self) -> Option<TyCtxt<'tcx>> { None }
-
- fn decoder(self, pos: usize) -> DecodeContext<'a, 'tcx> {
- let tcx = self.tcx();
- DecodeContext {
- opaque: opaque::Decoder::new(self.raw_bytes(), pos),
- cdata: self.cdata(),
- sess: self.sess().or(tcx.map(|tcx| tcx.sess)),
- tcx,
- last_source_file_index: 0,
- lazy_state: LazyState::NoNode,
- alloc_decoding_session: self.cdata().map(|cdata| {
- cdata.alloc_decoding_state.new_decoding_session()
- }),
- }
- }
-}
-
-impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a MetadataBlob {
- fn raw_bytes(self) -> &'a [u8] {
- &self.0
- }
-}
-
-
-impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a MetadataBlob, &'tcx Session) {
- fn raw_bytes(self) -> &'a [u8] {
- let (blob, _) = self;
- &blob.0
- }
-
- fn sess(self) -> Option<&'tcx Session> {
- let (_, sess) = self;
- Some(sess)
- }
-}
-
-
-impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a CrateMetadata {
- fn raw_bytes(self) -> &'a [u8] {
- self.blob.raw_bytes()
- }
- fn cdata(self) -> Option<&'a CrateMetadata> {
- Some(self)
- }
-}
-
-impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, &'tcx Session) {
- fn raw_bytes(self) -> &'a [u8] {
- self.0.raw_bytes()
- }
- fn cdata(self) -> Option<&'a CrateMetadata> {
- Some(self.0)
- }
- fn sess(self) -> Option<&'tcx Session> {
- Some(&self.1)
- }
-}
-
-impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, TyCtxt<'tcx>) {
- fn raw_bytes(self) -> &'a [u8] {
- self.0.raw_bytes()
- }
- fn cdata(self) -> Option<&'a CrateMetadata> {
- Some(self.0)
- }
- fn tcx(self) -> Option<TyCtxt<'tcx>> {
- Some(self.1)
- }
-}
-
-impl<'a, 'tcx, T: Encodable + Decodable> Lazy<T> {
- crate fn decode<M: Metadata<'a, 'tcx>>(self, metadata: M) -> T {
- let mut dcx = metadata.decoder(self.position.get());
- dcx.lazy_state = LazyState::NodeStart(self.position);
- T::decode(&mut dcx).unwrap()
- }
-}
-
-impl<'a: 'x, 'tcx: 'x, 'x, T: Encodable + Decodable> Lazy<[T]> {
- crate fn decode<M: Metadata<'a, 'tcx>>(
- self,
- metadata: M,
- ) -> impl ExactSizeIterator<Item = T> + Captures<'a> + Captures<'tcx> + 'x {
- let mut dcx = metadata.decoder(self.position.get());
- dcx.lazy_state = LazyState::NodeStart(self.position);
- (0..self.meta).map(move |_| T::decode(&mut dcx).unwrap())
- }
-}
-
-impl<'a, 'tcx> DecodeContext<'a, 'tcx> {
- fn tcx(&self) -> TyCtxt<'tcx> {
- self.tcx.expect("missing TyCtxt in DecodeContext")
- }
-
- fn cdata(&self) -> &'a CrateMetadata {
- self.cdata.expect("missing CrateMetadata in DecodeContext")
- }
-
- fn read_lazy_with_meta<T: ?Sized + LazyMeta>(
- &mut self,
- meta: T::Meta,
- ) -> Result<Lazy<T>, <Self as Decoder>::Error> {
- let min_size = T::min_size(meta);
- let distance = self.read_usize()?;
- let position = match self.lazy_state {
- LazyState::NoNode => bug!("read_lazy_with_meta: outside of a metadata node"),
- LazyState::NodeStart(start) => {
- let start = start.get();
- assert!(distance + min_size <= start);
- start - distance - min_size
- }
- LazyState::Previous(last_min_end) => last_min_end.get() + distance,
- };
- self.lazy_state = LazyState::Previous(NonZeroUsize::new(position + min_size).unwrap());
- Ok(Lazy::from_position_and_meta(NonZeroUsize::new(position).unwrap(), meta))
- }
-}
-
-impl<'a, 'tcx> TyDecoder<'tcx> for DecodeContext<'a, 'tcx> {
- #[inline]
- fn tcx(&self) -> TyCtxt<'tcx> {
- self.tcx.expect("missing TyCtxt in DecodeContext")
- }
-
- #[inline]
- fn peek_byte(&self) -> u8 {
- self.opaque.data[self.opaque.position()]
- }
-
- #[inline]
- fn position(&self) -> usize {
- self.opaque.position()
- }
-
- fn cached_ty_for_shorthand<F>(&mut self,
- shorthand: usize,
- or_insert_with: F)
- -> Result<Ty<'tcx>, Self::Error>
- where F: FnOnce(&mut Self) -> Result<Ty<'tcx>, Self::Error>
- {
- let tcx = self.tcx();
-
- let key = ty::CReaderCacheKey {
- cnum: self.cdata().cnum,
- pos: shorthand,
- };
-
- if let Some(&ty) = tcx.rcache.borrow().get(&key) {
- return Ok(ty);
- }
-
- let ty = or_insert_with(self)?;
- tcx.rcache.borrow_mut().insert(key, ty);
- Ok(ty)
- }
-
- fn with_position<F, R>(&mut self, pos: usize, f: F) -> R
- where F: FnOnce(&mut Self) -> R
- {
- let new_opaque = opaque::Decoder::new(self.opaque.data, pos);
- let old_opaque = mem::replace(&mut self.opaque, new_opaque);
- let old_state = mem::replace(&mut self.lazy_state, LazyState::NoNode);
- let r = f(self);
- self.opaque = old_opaque;
- self.lazy_state = old_state;
- r
- }
-
- fn map_encoded_cnum_to_current(&self, cnum: CrateNum) -> CrateNum {
- if cnum == LOCAL_CRATE {
- self.cdata().cnum
- } else {
- self.cdata().cnum_map[cnum]
- }
- }
-}
-
-impl<'a, 'tcx, T: Encodable> SpecializedDecoder<Lazy<T>> for DecodeContext<'a, 'tcx> {
- fn specialized_decode(&mut self) -> Result<Lazy<T>, Self::Error> {
- self.read_lazy_with_meta(())
- }
-}
-
-impl<'a, 'tcx, T: Encodable> SpecializedDecoder<Lazy<[T]>> for DecodeContext<'a, 'tcx> {
- fn specialized_decode(&mut self) -> Result<Lazy<[T]>, Self::Error> {
- let len = self.read_usize()?;
- if len == 0 {
- Ok(Lazy::empty())
- } else {
- self.read_lazy_with_meta(len)
- }
- }
-}
-
-impl<'a, 'tcx, T> SpecializedDecoder<Lazy<PerDefTable<T>>> for DecodeContext<'a, 'tcx>
- where Option<T>: FixedSizeEncoding,
-{
- fn specialized_decode(&mut self) -> Result<Lazy<PerDefTable<T>>, Self::Error> {
- let len = self.read_usize()?;
- self.read_lazy_with_meta(len)
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<DefId> for DecodeContext<'a, 'tcx> {
- #[inline]
- fn specialized_decode(&mut self) -> Result<DefId, Self::Error> {
- let krate = CrateNum::decode(self)?;
- let index = DefIndex::decode(self)?;
-
- Ok(DefId {
- krate,
- index,
- })
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<DefIndex> for DecodeContext<'a, 'tcx> {
- #[inline]
- fn specialized_decode(&mut self) -> Result<DefIndex, Self::Error> {
- Ok(DefIndex::from_u32(self.read_u32()?))
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<LocalDefId> for DecodeContext<'a, 'tcx> {
- #[inline]
- fn specialized_decode(&mut self) -> Result<LocalDefId, Self::Error> {
- self.specialized_decode().map(|i| LocalDefId::from_def_id(i))
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<interpret::AllocId> for DecodeContext<'a, 'tcx> {
- fn specialized_decode(&mut self) -> Result<interpret::AllocId, Self::Error> {
- if let Some(alloc_decoding_session) = self.alloc_decoding_session {
- alloc_decoding_session.decode_alloc_id(self)
- } else {
- bug!("Attempting to decode interpret::AllocId without CrateMetadata")
- }
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<Span> for DecodeContext<'a, 'tcx> {
- fn specialized_decode(&mut self) -> Result<Span, Self::Error> {
- let tag = u8::decode(self)?;
-
- if tag == TAG_INVALID_SPAN {
- return Ok(DUMMY_SP)
- }
-
- debug_assert_eq!(tag, TAG_VALID_SPAN);
-
- let lo = BytePos::decode(self)?;
- let len = BytePos::decode(self)?;
- let hi = lo + len;
-
- let sess = if let Some(sess) = self.sess {
- sess
- } else {
- bug!("Cannot decode Span without Session.")
- };
-
- let imported_source_files = self.cdata().imported_source_files(&sess.source_map());
- let source_file = {
- // Optimize for the case that most spans within a translated item
- // originate from the same source_file.
- let last_source_file = &imported_source_files[self.last_source_file_index];
-
- if lo >= last_source_file.original_start_pos &&
- lo <= last_source_file.original_end_pos {
- last_source_file
- } else {
- let mut a = 0;
- let mut b = imported_source_files.len();
-
- while b - a > 1 {
- let m = (a + b) / 2;
- if imported_source_files[m].original_start_pos > lo {
- b = m;
- } else {
- a = m;
- }
- }
-
- self.last_source_file_index = a;
- &imported_source_files[a]
- }
- };
-
- // Make sure our binary search above is correct.
- debug_assert!(lo >= source_file.original_start_pos &&
- lo <= source_file.original_end_pos);
-
- // Make sure we correctly filtered out invalid spans during encoding
- debug_assert!(hi >= source_file.original_start_pos &&
- hi <= source_file.original_end_pos);
-
- let lo = (lo + source_file.translated_source_file.start_pos)
- - source_file.original_start_pos;
- let hi = (hi + source_file.translated_source_file.start_pos)
- - source_file.original_start_pos;
-
- Ok(Span::with_root_ctxt(lo, hi))
- }
-}
-
-impl SpecializedDecoder<Ident> for DecodeContext<'_, '_> {
- fn specialized_decode(&mut self) -> Result<Ident, Self::Error> {
- // FIXME(jseyfried): intercrate hygiene
-
- Ok(Ident::with_dummy_span(Symbol::decode(self)?))
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<Fingerprint> for DecodeContext<'a, 'tcx> {
- fn specialized_decode(&mut self) -> Result<Fingerprint, Self::Error> {
- Fingerprint::decode_opaque(&mut self.opaque)
- }
-}
-
-impl<'a, 'tcx, T: Decodable> SpecializedDecoder<mir::ClearCrossCrate<T>>
-for DecodeContext<'a, 'tcx> {
- #[inline]
- fn specialized_decode(&mut self) -> Result<mir::ClearCrossCrate<T>, Self::Error> {
- Ok(mir::ClearCrossCrate::Clear)
- }
-}
-
-implement_ty_decoder!( DecodeContext<'a, 'tcx> );
-
-impl<'tcx> MetadataBlob {
- crate fn is_compatible(&self) -> bool {
- self.raw_bytes().starts_with(METADATA_HEADER)
- }
-
- crate fn get_rustc_version(&self) -> String {
- Lazy::<String>::from_position(
- NonZeroUsize::new(METADATA_HEADER.len() + 4).unwrap(),
- ).decode(self)
- }
-
- crate fn get_root(&self) -> CrateRoot<'tcx> {
- let slice = self.raw_bytes();
- let offset = METADATA_HEADER.len();
- let pos = (((slice[offset + 0] as u32) << 24) | ((slice[offset + 1] as u32) << 16) |
- ((slice[offset + 2] as u32) << 8) |
- ((slice[offset + 3] as u32) << 0)) as usize;
- Lazy::<CrateRoot<'tcx>>::from_position(
- NonZeroUsize::new(pos).unwrap(),
- ).decode(self)
- }
-
- crate fn list_crate_metadata(&self,
- out: &mut dyn io::Write) -> io::Result<()> {
- write!(out, "=External Dependencies=\n")?;
- let root = self.get_root();
- for (i, dep) in root.crate_deps
- .decode(self)
- .enumerate() {
- write!(out, "{} {}{}\n", i + 1, dep.name, dep.extra_filename)?;
- }
- write!(out, "\n")?;
- Ok(())
- }
-}
-
-impl<'tcx> EntryKind<'tcx> {
- fn def_kind(&self) -> Option<DefKind> {
- Some(match *self {
- EntryKind::Const(..) => DefKind::Const,
- EntryKind::AssocConst(..) => DefKind::AssocConst,
- EntryKind::ImmStatic |
- EntryKind::MutStatic |
- EntryKind::ForeignImmStatic |
- EntryKind::ForeignMutStatic => DefKind::Static,
- EntryKind::Struct(_, _) => DefKind::Struct,
- EntryKind::Union(_, _) => DefKind::Union,
- EntryKind::Fn(_) |
- EntryKind::ForeignFn(_) => DefKind::Fn,
- EntryKind::Method(_) => DefKind::Method,
- EntryKind::Type => DefKind::TyAlias,
- EntryKind::TypeParam => DefKind::TyParam,
- EntryKind::ConstParam => DefKind::ConstParam,
- EntryKind::OpaqueTy => DefKind::OpaqueTy,
- EntryKind::AssocType(_) => DefKind::AssocTy,
- EntryKind::AssocOpaqueTy(_) => DefKind::AssocOpaqueTy,
- EntryKind::Mod(_) => DefKind::Mod,
- EntryKind::Variant(_) => DefKind::Variant,
- EntryKind::Trait(_) => DefKind::Trait,
- EntryKind::TraitAlias => DefKind::TraitAlias,
- EntryKind::Enum(..) => DefKind::Enum,
- EntryKind::MacroDef(_) => DefKind::Macro(MacroKind::Bang),
- EntryKind::ForeignType => DefKind::ForeignTy,
-
- EntryKind::ForeignMod |
- EntryKind::GlobalAsm |
- EntryKind::Impl(_) |
- EntryKind::Field |
- EntryKind::Generator(_) |
- EntryKind::Closure => return None,
- })
- }
-}
-
-impl<'a, 'tcx> CrateMetadata {
- crate fn is_proc_macro_crate(&self) -> bool {
- self.root.proc_macro_decls_static.is_some()
- }
-
- fn is_proc_macro(&self, id: DefIndex) -> bool {
- self.is_proc_macro_crate() &&
- self.root.proc_macro_data.unwrap().decode(self).find(|x| *x == id).is_some()
- }
-
- fn maybe_kind(&self, item_id: DefIndex) -> Option<EntryKind<'tcx>> {
- self.root.per_def.kind.get(self, item_id).map(|k| k.decode(self))
- }
-
- fn kind(&self, item_id: DefIndex) -> EntryKind<'tcx> {
- assert!(!self.is_proc_macro(item_id));
- self.maybe_kind(item_id).unwrap_or_else(|| {
- bug!(
- "CrateMetadata::kind({:?}): id not found, in crate {:?} with number {}",
- item_id,
- self.root.name,
- self.cnum,
- )
- })
- }
-
- fn local_def_id(&self, index: DefIndex) -> DefId {
- DefId {
- krate: self.cnum,
- index,
- }
- }
-
- fn raw_proc_macro(&self, id: DefIndex) -> &ProcMacro {
- // DefIndex's in root.proc_macro_data have a one-to-one correspondence
- // with items in 'raw_proc_macros'.
- // NOTE: If you update the order of macros in 'proc_macro_data' for any reason,
- // you must also update src/libsyntax_ext/proc_macro_harness.rs
- // Failing to do so will result in incorrect data being associated
- // with proc macros when deserialized.
- let pos = self.root.proc_macro_data.unwrap().decode(self).position(|i| i == id).unwrap();
- &self.raw_proc_macros.unwrap()[pos]
- }
-
- crate fn item_name(&self, item_index: DefIndex) -> Symbol {
- if !self.is_proc_macro(item_index) {
- self.def_key(item_index)
- .disambiguated_data
- .data
- .get_opt_name()
- .expect("no name in item_name")
- } else {
- Symbol::intern(self.raw_proc_macro(item_index).name())
- }
- }
-
- crate fn def_kind(&self, index: DefIndex) -> Option<DefKind> {
- if !self.is_proc_macro(index) {
- self.kind(index).def_kind()
- } else {
- Some(DefKind::Macro(
- macro_kind(self.raw_proc_macro(index))
- ))
- }
- }
-
- crate fn get_span(&self, index: DefIndex, sess: &Session) -> Span {
- self.root.per_def.span.get(self, index).unwrap().decode((self, sess))
- }
-
- crate fn load_proc_macro(&self, id: DefIndex, sess: &Session) -> SyntaxExtension {
- let (name, kind, helper_attrs) = match *self.raw_proc_macro(id) {
- ProcMacro::CustomDerive { trait_name, attributes, client } => {
- let helper_attrs =
- attributes.iter().cloned().map(Symbol::intern).collect::<Vec<_>>();
- (
- trait_name,
- SyntaxExtensionKind::Derive(Box::new(ProcMacroDerive { client })),
- helper_attrs,
- )
- }
- ProcMacro::Attr { name, client } => (
- name, SyntaxExtensionKind::Attr(Box::new(AttrProcMacro { client })), Vec::new()
- ),
- ProcMacro::Bang { name, client } => (
- name, SyntaxExtensionKind::Bang(Box::new(BangProcMacro { client })), Vec::new()
- )
- };
-
- SyntaxExtension::new(
- &sess.parse_sess,
- kind,
- self.get_span(id, sess),
- helper_attrs,
- self.root.edition,
- Symbol::intern(name),
- &self.get_item_attrs(id, sess),
- )
- }
-
- crate fn get_trait_def(&self, item_id: DefIndex, sess: &Session) -> ty::TraitDef {
- match self.kind(item_id) {
- EntryKind::Trait(data) => {
- let data = data.decode((self, sess));
- ty::TraitDef::new(self.local_def_id(item_id),
- data.unsafety,
- data.paren_sugar,
- data.has_auto_impl,
- data.is_marker,
- self.def_path_table.def_path_hash(item_id))
- },
- EntryKind::TraitAlias => {
- ty::TraitDef::new(self.local_def_id(item_id),
- hir::Unsafety::Normal,
- false,
- false,
- false,
- self.def_path_table.def_path_hash(item_id))
- },
- _ => bug!("def-index does not refer to trait or trait alias"),
- }
- }
-
- fn get_variant(
- &self,
- tcx: TyCtxt<'tcx>,
- kind: &EntryKind<'_>,
- index: DefIndex,
- parent_did: DefId,
- ) -> ty::VariantDef {
- let data = match kind {
- EntryKind::Variant(data) |
- EntryKind::Struct(data, _) |
- EntryKind::Union(data, _) => data.decode(self),
- _ => bug!(),
- };
-
- let adt_kind = match kind {
- EntryKind::Variant(_) => ty::AdtKind::Enum,
- EntryKind::Struct(..) => ty::AdtKind::Struct,
- EntryKind::Union(..) => ty::AdtKind::Union,
- _ => bug!(),
- };
-
- let variant_did = if adt_kind == ty::AdtKind::Enum {
- Some(self.local_def_id(index))
- } else {
- None
- };
- let ctor_did = data.ctor.map(|index| self.local_def_id(index));
-
- ty::VariantDef::new(
- tcx,
- Ident::with_dummy_span(self.item_name(index)),
- variant_did,
- ctor_did,
- data.discr,
- self.root.per_def.children.get(self, index).unwrap_or(Lazy::empty())
- .decode(self).map(|index| ty::FieldDef {
- did: self.local_def_id(index),
- ident: Ident::with_dummy_span(self.item_name(index)),
- vis: self.get_visibility(index),
- }).collect(),
- data.ctor_kind,
- adt_kind,
- parent_did,
- false,
- )
- }
-
- crate fn get_adt_def(&self, item_id: DefIndex, tcx: TyCtxt<'tcx>) -> &'tcx ty::AdtDef {
- let kind = self.kind(item_id);
- let did = self.local_def_id(item_id);
-
- let (adt_kind, repr) = match kind {
- EntryKind::Enum(repr) => (ty::AdtKind::Enum, repr),
- EntryKind::Struct(_, repr) => (ty::AdtKind::Struct, repr),
- EntryKind::Union(_, repr) => (ty::AdtKind::Union, repr),
- _ => bug!("get_adt_def called on a non-ADT {:?}", did),
- };
-
- let variants = if let ty::AdtKind::Enum = adt_kind {
- self.root.per_def.children.get(self, item_id).unwrap_or(Lazy::empty())
- .decode(self)
- .map(|index| {
- self.get_variant(tcx, &self.kind(index), index, did)
- })
- .collect()
- } else {
- std::iter::once(self.get_variant(tcx, &kind, item_id, did)).collect()
- };
-
- tcx.alloc_adt_def(did, adt_kind, variants, repr)
- }
-
- crate fn get_predicates(
- &self,
- item_id: DefIndex,
- tcx: TyCtxt<'tcx>,
- ) -> ty::GenericPredicates<'tcx> {
- self.root.per_def.predicates.get(self, item_id).unwrap().decode((self, tcx))
- }
-
- crate fn get_predicates_defined_on(
- &self,
- item_id: DefIndex,
- tcx: TyCtxt<'tcx>,
- ) -> ty::GenericPredicates<'tcx> {
- self.root.per_def.predicates_defined_on.get(self, item_id).unwrap().decode((self, tcx))
- }
-
- crate fn get_super_predicates(
- &self,
- item_id: DefIndex,
- tcx: TyCtxt<'tcx>,
- ) -> ty::GenericPredicates<'tcx> {
- self.root.per_def.super_predicates.get(self, item_id).unwrap().decode((self, tcx))
- }
-
- crate fn get_generics(&self, item_id: DefIndex, sess: &Session) -> ty::Generics {
- self.root.per_def.generics.get(self, item_id).unwrap().decode((self, sess))
- }
-
- crate fn get_type(&self, id: DefIndex, tcx: TyCtxt<'tcx>) -> Ty<'tcx> {
- self.root.per_def.ty.get(self, id).unwrap().decode((self, tcx))
- }
-
- crate fn get_stability(&self, id: DefIndex) -> Option<attr::Stability> {
- match self.is_proc_macro(id) {
- true => self.root.proc_macro_stability.clone(),
- false => self.root.per_def.stability.get(self, id).map(|stab| stab.decode(self)),
- }
- }
-
- crate fn get_deprecation(&self, id: DefIndex) -> Option<attr::Deprecation> {
- self.root.per_def.deprecation.get(self, id)
- .filter(|_| !self.is_proc_macro(id))
- .map(|depr| depr.decode(self))
- }
-
- crate fn get_visibility(&self, id: DefIndex) -> ty::Visibility {
- match self.is_proc_macro(id) {
- true => ty::Visibility::Public,
- false => self.root.per_def.visibility.get(self, id).unwrap().decode(self),
- }
- }
-
- fn get_impl_data(&self, id: DefIndex) -> ImplData {
- match self.kind(id) {
- EntryKind::Impl(data) => data.decode(self),
- _ => bug!(),
- }
- }
-
- crate fn get_parent_impl(&self, id: DefIndex) -> Option<DefId> {
- self.get_impl_data(id).parent_impl
- }
-
- crate fn get_impl_polarity(&self, id: DefIndex) -> ty::ImplPolarity {
- self.get_impl_data(id).polarity
- }
-
- crate fn get_impl_defaultness(&self, id: DefIndex) -> hir::Defaultness {
- self.get_impl_data(id).defaultness
- }
-
- crate fn get_coerce_unsized_info(
- &self,
- id: DefIndex,
- ) -> Option<ty::adjustment::CoerceUnsizedInfo> {
- self.get_impl_data(id).coerce_unsized_info
- }
-
- crate fn get_impl_trait(&self, id: DefIndex, tcx: TyCtxt<'tcx>) -> Option<ty::TraitRef<'tcx>> {
- self.root.per_def.impl_trait_ref.get(self, id).map(|tr| tr.decode((self, tcx)))
- }
-
- /// Iterates over all the stability attributes in the given crate.
- crate fn get_lib_features(&self, tcx: TyCtxt<'tcx>) -> &'tcx [(ast::Name, Option<ast::Name>)] {
- // FIXME: For a proc macro crate, not sure whether we should return the "host"
- // features or an empty Vec. Both don't cause ICEs.
- tcx.arena.alloc_from_iter(self.root
- .lib_features
- .decode(self))
- }
-
- /// Iterates over the language items in the given crate.
- crate fn get_lang_items(&self, tcx: TyCtxt<'tcx>) -> &'tcx [(DefId, usize)] {
- if self.is_proc_macro_crate() {
- // Proc macro crates do not export any lang-items to the target.
- &[]
- } else {
- tcx.arena.alloc_from_iter(self.root
- .lang_items
- .decode(self)
- .map(|(def_index, index)| (self.local_def_id(def_index), index)))
- }
- }
-
- /// Iterates over the diagnostic items in the given crate.
- crate fn get_diagnostic_items(
- &self,
- tcx: TyCtxt<'tcx>,
- ) -> &'tcx FxHashMap<Symbol, DefId> {
- tcx.arena.alloc(if self.is_proc_macro_crate() {
- // Proc macro crates do not export any diagnostic-items to the target.
- Default::default()
- } else {
- self.root
- .diagnostic_items
- .decode(self)
- .map(|(name, def_index)| (name, self.local_def_id(def_index)))
- .collect()
- })
- }
-
- /// Iterates over each child of the given item.
- crate fn each_child_of_item<F>(&self, id: DefIndex, mut callback: F, sess: &Session)
- where F: FnMut(def::Export<hir::HirId>)
- {
- if let Some(proc_macros_ids) = self.root.proc_macro_data.map(|d| d.decode(self)) {
- /* If we are loading as a proc macro, we want to return the view of this crate
- * as a proc macro crate.
- */
- if id == CRATE_DEF_INDEX {
- for def_index in proc_macros_ids {
- let raw_macro = self.raw_proc_macro(def_index);
- let res = Res::Def(
- DefKind::Macro(macro_kind(raw_macro)),
- self.local_def_id(def_index),
- );
- let ident = Ident::from_str(raw_macro.name());
- callback(def::Export {
- ident: ident,
- res: res,
- vis: ty::Visibility::Public,
- span: DUMMY_SP,
- });
- }
- }
- return
- }
-
- // Find the item.
- let kind = match self.maybe_kind(id) {
- None => return,
- Some(kind) => kind,
- };
-
- // Iterate over all children.
- let macros_only = self.dep_kind.lock().macros_only();
- let children = self.root.per_def.children.get(self, id).unwrap_or(Lazy::empty());
- for child_index in children.decode((self, sess)) {
- if macros_only {
- continue
- }
-
- // Get the item.
- if let Some(child_kind) = self.maybe_kind(child_index) {
- match child_kind {
- EntryKind::MacroDef(..) => {}
- _ if macros_only => continue,
- _ => {}
- }
-
- // Hand off the item to the callback.
- match child_kind {
- // FIXME(eddyb) Don't encode these in children.
- EntryKind::ForeignMod => {
- let child_children =
- self.root.per_def.children.get(self, child_index)
- .unwrap_or(Lazy::empty());
- for child_index in child_children.decode((self, sess)) {
- if let Some(kind) = self.def_kind(child_index) {
- callback(def::Export {
- res: Res::Def(kind, self.local_def_id(child_index)),
- ident: Ident::with_dummy_span(self.item_name(child_index)),
- vis: self.get_visibility(child_index),
- span: self.root.per_def.span.get(self, child_index).unwrap()
- .decode((self, sess)),
- });
- }
- }
- continue;
- }
- EntryKind::Impl(_) => continue,
-
- _ => {}
- }
-
- let def_key = self.def_key(child_index);
- let span = self.get_span(child_index, sess);
- if let (Some(kind), Some(name)) =
- (self.def_kind(child_index), def_key.disambiguated_data.data.get_opt_name()) {
- let ident = Ident::with_dummy_span(name);
- let vis = self.get_visibility(child_index);
- let def_id = self.local_def_id(child_index);
- let res = Res::Def(kind, def_id);
- callback(def::Export { res, ident, vis, span });
- // For non-re-export structs and variants add their constructors to children.
- // Re-export lists automatically contain constructors when necessary.
- match kind {
- DefKind::Struct => {
- if let Some(ctor_def_id) = self.get_ctor_def_id(child_index) {
- let ctor_kind = self.get_ctor_kind(child_index);
- let ctor_res = Res::Def(
- DefKind::Ctor(CtorOf::Struct, ctor_kind),
- ctor_def_id,
- );
- let vis = self.get_visibility(ctor_def_id.index);
- callback(def::Export { res: ctor_res, vis, ident, span });
- }
- }
- DefKind::Variant => {
- // Braced variants, unlike structs, generate unusable names in
- // value namespace, they are reserved for possible future use.
- // It's ok to use the variant's id as a ctor id since an
- // error will be reported on any use of such resolution anyway.
- let ctor_def_id = self.get_ctor_def_id(child_index).unwrap_or(def_id);
- let ctor_kind = self.get_ctor_kind(child_index);
- let ctor_res = Res::Def(
- DefKind::Ctor(CtorOf::Variant, ctor_kind),
- ctor_def_id,
- );
- let mut vis = self.get_visibility(ctor_def_id.index);
- if ctor_def_id == def_id && vis == ty::Visibility::Public {
- // For non-exhaustive variants lower the constructor visibility to
- // within the crate. We only need this for fictive constructors,
- // for other constructors correct visibilities
- // were already encoded in metadata.
- let attrs = self.get_item_attrs(def_id.index, sess);
- if attr::contains_name(&attrs, sym::non_exhaustive) {
- let crate_def_id = self.local_def_id(CRATE_DEF_INDEX);
- vis = ty::Visibility::Restricted(crate_def_id);
- }
- }
- callback(def::Export { res: ctor_res, ident, vis, span });
- }
- _ => {}
- }
- }
- }
- }
-
- if let EntryKind::Mod(data) = kind {
- for exp in data.decode((self, sess)).reexports.decode((self, sess)) {
- match exp.res {
- Res::Def(DefKind::Macro(..), _) => {}
- _ if macros_only => continue,
- _ => {}
- }
- callback(exp);
- }
- }
- }
-
- crate fn is_item_mir_available(&self, id: DefIndex) -> bool {
- !self.is_proc_macro(id) &&
- self.root.per_def.mir.get(self, id).is_some()
- }
-
- crate fn get_optimized_mir(&self, tcx: TyCtxt<'tcx>, id: DefIndex) -> Body<'tcx> {
- self.root.per_def.mir.get(self, id)
- .filter(|_| !self.is_proc_macro(id))
- .unwrap_or_else(|| {
- bug!("get_optimized_mir: missing MIR for `{:?}`", self.local_def_id(id))
- })
- .decode((self, tcx))
- }
-
- crate fn get_promoted_mir(
- &self,
- tcx: TyCtxt<'tcx>,
- id: DefIndex,
- ) -> IndexVec<Promoted, Body<'tcx>> {
- self.root.per_def.promoted_mir.get(self, id)
- .filter(|_| !self.is_proc_macro(id))
- .unwrap_or_else(|| {
- bug!("get_promoted_mir: missing MIR for `{:?}`", self.local_def_id(id))
- })
- .decode((self, tcx))
- }
-
- crate fn mir_const_qualif(&self, id: DefIndex) -> u8 {
- match self.kind(id) {
- EntryKind::Const(qualif, _) |
- EntryKind::AssocConst(AssocContainer::ImplDefault, qualif, _) |
- EntryKind::AssocConst(AssocContainer::ImplFinal, qualif, _) => {
- qualif.mir
- }
- _ => bug!(),
- }
- }
-
- crate fn get_associated_item(&self, id: DefIndex) -> ty::AssocItem {
- let def_key = self.def_key(id);
- let parent = self.local_def_id(def_key.parent.unwrap());
- let name = def_key.disambiguated_data.data.get_opt_name().unwrap();
-
- let (kind, container, has_self) = match self.kind(id) {
- EntryKind::AssocConst(container, _, _) => {
- (ty::AssocKind::Const, container, false)
- }
- EntryKind::Method(data) => {
- let data = data.decode(self);
- (ty::AssocKind::Method, data.container, data.has_self)
- }
- EntryKind::AssocType(container) => {
- (ty::AssocKind::Type, container, false)
- }
- EntryKind::AssocOpaqueTy(container) => {
- (ty::AssocKind::OpaqueTy, container, false)
- }
- _ => bug!("cannot get associated-item of `{:?}`", def_key)
- };
-
- ty::AssocItem {
- ident: Ident::with_dummy_span(name),
- kind,
- vis: self.get_visibility(id),
- defaultness: container.defaultness(),
- def_id: self.local_def_id(id),
- container: container.with_def_id(parent),
- method_has_self_argument: has_self
- }
- }
-
- crate fn get_item_variances(&self, id: DefIndex) -> Vec<ty::Variance> {
- self.root.per_def.variances.get(self, id).unwrap_or(Lazy::empty())
- .decode(self).collect()
- }
-
- crate fn get_ctor_kind(&self, node_id: DefIndex) -> CtorKind {
- match self.kind(node_id) {
- EntryKind::Struct(data, _) |
- EntryKind::Union(data, _) |
- EntryKind::Variant(data) => data.decode(self).ctor_kind,
- _ => CtorKind::Fictive,
- }
- }
-
- crate fn get_ctor_def_id(&self, node_id: DefIndex) -> Option<DefId> {
- match self.kind(node_id) {
- EntryKind::Struct(data, _) => {
- data.decode(self).ctor.map(|index| self.local_def_id(index))
- }
- EntryKind::Variant(data) => {
- data.decode(self).ctor.map(|index| self.local_def_id(index))
- }
- _ => None,
- }
- }
-
- crate fn get_item_attrs(&self, node_id: DefIndex, sess: &Session) -> Lrc<[ast::Attribute]> {
- // The attributes for a tuple struct/variant are attached to the definition, not the ctor;
- // we assume that someone passing in a tuple struct ctor is actually wanting to
- // look at the definition
- let def_key = self.def_key(node_id);
- let item_id = if def_key.disambiguated_data.data == DefPathData::Ctor {
- def_key.parent.unwrap()
- } else {
- node_id
- };
-
- Lrc::from(self.root.per_def.attributes.get(self, item_id).unwrap_or(Lazy::empty())
- .decode((self, sess))
- .collect::<Vec<_>>())
- }
-
- crate fn get_struct_field_names(
- &self,
- id: DefIndex,
- sess: &Session,
- ) -> Vec<Spanned<ast::Name>> {
- self.root.per_def.children.get(self, id).unwrap_or(Lazy::empty())
- .decode(self)
- .map(|index| respan(self.get_span(index, sess), self.item_name(index)))
- .collect()
- }
-
- // Translate a DefId from the current compilation environment to a DefId
- // for an external crate.
- fn reverse_translate_def_id(&self, did: DefId) -> Option<DefId> {
- for (local, &global) in self.cnum_map.iter_enumerated() {
- if global == did.krate {
- return Some(DefId {
- krate: local,
- index: did.index,
- });
- }
- }
-
- None
- }
-
- crate fn get_inherent_implementations_for_type(
- &self,
- tcx: TyCtxt<'tcx>,
- id: DefIndex,
- ) -> &'tcx [DefId] {
- tcx.arena.alloc_from_iter(
- self.root.per_def.inherent_impls.get(self, id).unwrap_or(Lazy::empty())
- .decode(self)
- .map(|index| self.local_def_id(index))
- )
- }
-
- crate fn get_implementations_for_trait(
- &self,
- tcx: TyCtxt<'tcx>,
- filter: Option<DefId>,
- ) -> &'tcx [DefId] {
- if self.is_proc_macro_crate() {
- // proc-macro crates export no trait impls.
- return &[]
- }
-
- // Do a reverse lookup beforehand to avoid touching the crate_num
- // hash map in the loop below.
- let filter = match filter.map(|def_id| self.reverse_translate_def_id(def_id)) {
- Some(Some(def_id)) => Some((def_id.krate.as_u32(), def_id.index)),
- Some(None) => return &[],
- None => None,
- };
-
- if let Some(filter) = filter {
- if let Some(impls) = self.trait_impls.get(&filter) {
- tcx.arena.alloc_from_iter(impls.decode(self).map(|idx| self.local_def_id(idx)))
- } else {
- &[]
- }
- } else {
- tcx.arena.alloc_from_iter(self.trait_impls.values().flat_map(|impls| {
- impls.decode(self).map(|idx| self.local_def_id(idx))
- }))
- }
- }
-
- crate fn get_trait_of_item(&self, id: DefIndex) -> Option<DefId> {
- let def_key = self.def_key(id);
- match def_key.disambiguated_data.data {
- DefPathData::TypeNs(..) | DefPathData::ValueNs(..) => (),
- // Not an associated item
- _ => return None,
- }
- def_key.parent.and_then(|parent_index| {
- match self.kind(parent_index) {
- EntryKind::Trait(_) |
- EntryKind::TraitAlias => Some(self.local_def_id(parent_index)),
- _ => None,
- }
- })
- }
-
-
- crate fn get_native_libraries(&self, sess: &Session) -> Vec<NativeLibrary> {
- if self.is_proc_macro_crate() {
- // Proc macro crates do not have any *target* native libraries.
- vec![]
- } else {
- self.root.native_libraries.decode((self, sess)).collect()
- }
- }
-
- crate fn get_foreign_modules(&self, tcx: TyCtxt<'tcx>) -> &'tcx [ForeignModule] {
- if self.is_proc_macro_crate() {
- // Proc macro crates do not have any *target* foreign modules.
- &[]
- } else {
- tcx.arena.alloc_from_iter(self.root.foreign_modules.decode((self, tcx.sess)))
- }
- }
-
- crate fn get_dylib_dependency_formats(
- &self,
- tcx: TyCtxt<'tcx>,
- ) -> &'tcx [(CrateNum, LinkagePreference)] {
- tcx.arena.alloc_from_iter(self.root
- .dylib_dependency_formats
- .decode(self)
- .enumerate()
- .flat_map(|(i, link)| {
- let cnum = CrateNum::new(i + 1);
- link.map(|link| (self.cnum_map[cnum], link))
- }))
- }
-
- crate fn get_missing_lang_items(&self, tcx: TyCtxt<'tcx>) -> &'tcx [lang_items::LangItem] {
- if self.is_proc_macro_crate() {
- // Proc macro crates do not depend on any target weak lang-items.
- &[]
- } else {
- tcx.arena.alloc_from_iter(self.root
- .lang_items_missing
- .decode(self))
- }
- }
-
- crate fn get_fn_param_names(&self, id: DefIndex) -> Vec<ast::Name> {
- let param_names = match self.kind(id) {
- EntryKind::Fn(data) |
- EntryKind::ForeignFn(data) => data.decode(self).param_names,
- EntryKind::Method(data) => data.decode(self).fn_data.param_names,
- _ => Lazy::empty(),
- };
- param_names.decode(self).collect()
- }
-
- crate fn exported_symbols(
- &self,
- tcx: TyCtxt<'tcx>,
- ) -> Vec<(ExportedSymbol<'tcx>, SymbolExportLevel)> {
- if self.is_proc_macro_crate() {
- // If this crate is a custom derive crate, then we're not even going to
- // link those in so we skip those crates.
- vec![]
- } else {
- self.root.exported_symbols.decode((self, tcx)).collect()
- }
- }
-
- crate fn get_rendered_const(&self, id: DefIndex) -> String {
- match self.kind(id) {
- EntryKind::Const(_, data) |
- EntryKind::AssocConst(_, _, data) => data.decode(self).0,
- _ => bug!(),
- }
- }
-
- crate fn get_macro(&self, id: DefIndex) -> MacroDef {
- match self.kind(id) {
- EntryKind::MacroDef(macro_def) => macro_def.decode(self),
- _ => bug!(),
- }
- }
-
- crate fn is_const_fn_raw(&self, id: DefIndex) -> bool {
- let constness = match self.kind(id) {
- EntryKind::Method(data) => data.decode(self).fn_data.constness,
- EntryKind::Fn(data) => data.decode(self).constness,
- EntryKind::Variant(..) | EntryKind::Struct(..) => hir::Constness::Const,
- _ => hir::Constness::NotConst,
- };
- constness == hir::Constness::Const
- }
-
- crate fn asyncness(&self, id: DefIndex) -> hir::IsAsync {
- match self.kind(id) {
- EntryKind::Fn(data) => data.decode(self).asyncness,
- EntryKind::Method(data) => data.decode(self).fn_data.asyncness,
- EntryKind::ForeignFn(data) => data.decode(self).asyncness,
- _ => bug!("asyncness: expected function kind"),
- }
- }
-
- crate fn is_foreign_item(&self, id: DefIndex) -> bool {
- match self.kind(id) {
- EntryKind::ForeignImmStatic |
- EntryKind::ForeignMutStatic |
- EntryKind::ForeignFn(_) => true,
- _ => false,
- }
- }
-
- crate fn static_mutability(&self, id: DefIndex) -> Option<hir::Mutability> {
- match self.kind(id) {
- EntryKind::ImmStatic |
- EntryKind::ForeignImmStatic => Some(hir::MutImmutable),
- EntryKind::MutStatic |
- EntryKind::ForeignMutStatic => Some(hir::MutMutable),
- _ => None,
- }
- }
-
- crate fn fn_sig(&self, id: DefIndex, tcx: TyCtxt<'tcx>) -> ty::PolyFnSig<'tcx> {
- self.root.per_def.fn_sig.get(self, id).unwrap().decode((self, tcx))
- }
-
- #[inline]
- crate fn def_key(&self, index: DefIndex) -> DefKey {
- let mut key = self.def_path_table.def_key(index);
- if self.is_proc_macro(index) {
- let name = self.raw_proc_macro(index).name();
- key.disambiguated_data.data = DefPathData::MacroNs(Symbol::intern(name));
- }
- key
- }
-
-    // Returns the path leading to the definition with this `id`.
- crate fn def_path(&self, id: DefIndex) -> DefPath {
- debug!("def_path(cnum={:?}, id={:?})", self.cnum, id);
- DefPath::make(self.cnum, id, |parent| self.def_key(parent))
- }
-
- #[inline]
- crate fn def_path_hash(&self, index: DefIndex) -> DefPathHash {
- self.def_path_table.def_path_hash(index)
- }
-
- /// Imports the source_map from an external crate into the source_map of the crate
- /// currently being compiled (the "local crate").
- ///
-    /// The import algorithm works analogously to how AST items are inlined from an
- /// external crate's metadata:
- /// For every SourceFile in the external source_map an 'inline' copy is created in the
- /// local source_map. The correspondence relation between external and local
- /// SourceFiles is recorded in the `ImportedSourceFile` objects returned from this
- /// function. When an item from an external crate is later inlined into this
- /// crate, this correspondence information is used to translate the span
-    /// information of the inlined item so that it refers to the correct positions in
- /// the local source_map (see `<decoder::DecodeContext as SpecializedDecoder<Span>>`).
- ///
- /// The import algorithm in the function below will reuse SourceFiles already
- /// existing in the local source_map. For example, even if the SourceFile of some
- /// source file of libstd gets imported many times, there will only ever be
- /// one SourceFile object for the corresponding file in the local source_map.
- ///
- /// Note that imported SourceFiles do not actually contain the source code of the
- /// file they represent, just information about length, line breaks, and
- /// multibyte characters. This information is enough to generate valid debuginfo
- /// for items inlined from other crates.
- ///
- /// Proc macro crates don't currently export spans, so this function does not have
- /// to work for them.
- fn imported_source_files(
- &'a self,
- local_source_map: &source_map::SourceMap,
- ) -> &[cstore::ImportedSourceFile] {
- self.source_map_import_info.init_locking(|| {
- let external_source_map = self.root.source_map.decode(self);
-
- external_source_map.map(|source_file_to_import| {
- // We can't reuse an existing SourceFile, so allocate a new one
- // containing the information we need.
- let syntax_pos::SourceFile { name,
- name_was_remapped,
- src_hash,
- start_pos,
- end_pos,
- mut lines,
- mut multibyte_chars,
- mut non_narrow_chars,
- mut normalized_pos,
- name_hash,
- .. } = source_file_to_import;
-
- let source_length = (end_pos - start_pos).to_usize();
-
- // Translate line-start positions and multibyte character
-                // positions into a frame of reference local to the file.
- // `SourceMap::new_imported_source_file()` will then translate those
- // coordinates to their new global frame of reference when the
- // offset of the SourceFile is known.
- for pos in &mut lines {
- *pos = *pos - start_pos;
- }
- for mbc in &mut multibyte_chars {
- mbc.pos = mbc.pos - start_pos;
- }
- for swc in &mut non_narrow_chars {
- *swc = *swc - start_pos;
- }
- for np in &mut normalized_pos {
- np.pos = np.pos - start_pos;
- }
-
- let local_version = local_source_map.new_imported_source_file(name,
- name_was_remapped,
- self.cnum.as_u32(),
- src_hash,
- name_hash,
- source_length,
- lines,
- multibyte_chars,
- non_narrow_chars,
- normalized_pos);
- debug!("CrateMetaData::imported_source_files alloc \
- source_file {:?} original (start_pos {:?} end_pos {:?}) \
- translated (start_pos {:?} end_pos {:?})",
- local_version.name, start_pos, end_pos,
- local_version.start_pos, local_version.end_pos);
-
- cstore::ImportedSourceFile {
- original_start_pos: start_pos,
- original_end_pos: end_pos,
- translated_source_file: local_version,
- }
- }).collect()
- })
- }
-
-    /// Get the `DepNodeIndex` corresponding to this crate. The result of this
- /// method is cached in the `dep_node_index` field.
- pub(super) fn get_crate_dep_node_index(&self, tcx: TyCtxt<'tcx>) -> DepNodeIndex {
- let mut dep_node_index = self.dep_node_index.load();
-
- if unlikely!(dep_node_index == DepNodeIndex::INVALID) {
- // We have not cached the DepNodeIndex for this upstream crate yet,
- // so use the dep-graph to find it out and cache it.
- // Note that multiple threads can enter this block concurrently.
- // That is fine because the DepNodeIndex remains constant
- // throughout the whole compilation session, and multiple stores
- // would always write the same value.
-
- let def_path_hash = self.def_path_hash(CRATE_DEF_INDEX);
- let dep_node = def_path_hash.to_dep_node(DepKind::CrateMetadata);
-
- dep_node_index = tcx.dep_graph.dep_node_index_of(&dep_node);
- assert!(dep_node_index != DepNodeIndex::INVALID);
- self.dep_node_index.store(dep_node_index);
- }
-
- dep_node_index
- }
-}
-
-// Cannot be implemented on 'ProcMacro', as libproc_macro
-// does not depend on libsyntax
-fn macro_kind(raw: &ProcMacro) -> MacroKind {
- match raw {
- ProcMacro::CustomDerive { .. } => MacroKind::Derive,
- ProcMacro::Attr { .. } => MacroKind::Attr,
- ProcMacro::Bang { .. } => MacroKind::Bang
- }
-}
+++ /dev/null
-use crate::schema::*;
-use crate::table::{FixedSizeEncoding, PerDefTable};
-
-use rustc::middle::cstore::{LinkagePreference, NativeLibrary,
- EncodedMetadata, ForeignModule};
-use rustc::hir::def::CtorKind;
-use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefIndex, DefId, LocalDefId, LOCAL_CRATE};
-use rustc::hir::{GenericParamKind, AnonConst};
-use rustc::hir::map::definitions::DefPathTable;
-use rustc_data_structures::fingerprint::Fingerprint;
-use rustc_index::vec::IndexVec;
-use rustc::middle::dependency_format::Linkage;
-use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel,
- metadata_symbol_name};
-use rustc::middle::lang_items;
-use rustc::mir::{self, interpret};
-use rustc::traits::specialization_graph;
-use rustc::ty::{self, Ty, TyCtxt, SymbolName};
-use rustc::ty::codec::{self as ty_codec, TyEncoder};
-use rustc::ty::layout::VariantIdx;
-
-use rustc::session::config::{self, CrateType};
-use rustc::util::nodemap::FxHashMap;
-
-use rustc_data_structures::stable_hasher::StableHasher;
-use rustc_data_structures::sync::Lrc;
-use rustc_serialize::{Encodable, Encoder, SpecializedEncoder, opaque};
-
-use std::hash::Hash;
-use std::num::NonZeroUsize;
-use std::path::Path;
-use std::u32;
-use syntax::ast;
-use syntax::attr;
-use syntax::expand::is_proc_macro_attr;
-use syntax::source_map::Spanned;
-use syntax::symbol::{kw, sym, Ident, Symbol};
-use syntax_pos::{self, FileName, SourceFile, Span};
-use log::{debug, trace};
-
-use rustc::hir::{self, PatKind};
-use rustc::hir::itemlikevisit::ItemLikeVisitor;
-use rustc::hir::intravisit::{Visitor, NestedVisitorMap};
-use rustc::hir::intravisit;
-
-struct EncodeContext<'tcx> {
- opaque: opaque::Encoder,
- tcx: TyCtxt<'tcx>,
-
- per_def: PerDefTables<'tcx>,
-
- lazy_state: LazyState,
- type_shorthands: FxHashMap<Ty<'tcx>, usize>,
- predicate_shorthands: FxHashMap<ty::Predicate<'tcx>, usize>,
-
- interpret_allocs: FxHashMap<interpret::AllocId, usize>,
- interpret_allocs_inverse: Vec<interpret::AllocId>,
-
- // This is used to speed up Span encoding.
- source_file_cache: Lrc<SourceFile>,
-}
-
-#[derive(Default)]
-struct PerDefTables<'tcx> {
- kind: PerDefTable<Lazy<EntryKind<'tcx>>>,
- visibility: PerDefTable<Lazy<ty::Visibility>>,
- span: PerDefTable<Lazy<Span>>,
- attributes: PerDefTable<Lazy<[ast::Attribute]>>,
- children: PerDefTable<Lazy<[DefIndex]>>,
- stability: PerDefTable<Lazy<attr::Stability>>,
- deprecation: PerDefTable<Lazy<attr::Deprecation>>,
-
- ty: PerDefTable<Lazy<Ty<'tcx>>>,
- fn_sig: PerDefTable<Lazy<ty::PolyFnSig<'tcx>>>,
- impl_trait_ref: PerDefTable<Lazy<ty::TraitRef<'tcx>>>,
- inherent_impls: PerDefTable<Lazy<[DefIndex]>>,
- variances: PerDefTable<Lazy<[ty::Variance]>>,
- generics: PerDefTable<Lazy<ty::Generics>>,
- predicates: PerDefTable<Lazy<ty::GenericPredicates<'tcx>>>,
- predicates_defined_on: PerDefTable<Lazy<ty::GenericPredicates<'tcx>>>,
- super_predicates: PerDefTable<Lazy<ty::GenericPredicates<'tcx>>>,
-
- mir: PerDefTable<Lazy<mir::Body<'tcx>>>,
- promoted_mir: PerDefTable<Lazy<IndexVec<mir::Promoted, mir::Body<'tcx>>>>,
-}
-
-macro_rules! encoder_methods {
- ($($name:ident($ty:ty);)*) => {
- $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
- self.opaque.$name(value)
- })*
- }
-}
-
-impl<'tcx> Encoder for EncodeContext<'tcx> {
- type Error = <opaque::Encoder as Encoder>::Error;
-
- fn emit_unit(&mut self) -> Result<(), Self::Error> {
- Ok(())
- }
-
- encoder_methods! {
- emit_usize(usize);
- emit_u128(u128);
- emit_u64(u64);
- emit_u32(u32);
- emit_u16(u16);
- emit_u8(u8);
-
- emit_isize(isize);
- emit_i128(i128);
- emit_i64(i64);
- emit_i32(i32);
- emit_i16(i16);
- emit_i8(i8);
-
- emit_bool(bool);
- emit_f64(f64);
- emit_f32(f32);
- emit_char(char);
- emit_str(&str);
- }
-}
-
-impl<'tcx, T: Encodable> SpecializedEncoder<Lazy<T>> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, lazy: &Lazy<T>) -> Result<(), Self::Error> {
- self.emit_lazy_distance(*lazy)
- }
-}
-
-impl<'tcx, T: Encodable> SpecializedEncoder<Lazy<[T]>> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, lazy: &Lazy<[T]>) -> Result<(), Self::Error> {
- self.emit_usize(lazy.meta)?;
- if lazy.meta == 0 {
- return Ok(());
- }
- self.emit_lazy_distance(*lazy)
- }
-}
-
-impl<'tcx, T> SpecializedEncoder<Lazy<PerDefTable<T>>> for EncodeContext<'tcx>
- where Option<T>: FixedSizeEncoding,
-{
- fn specialized_encode(&mut self, lazy: &Lazy<PerDefTable<T>>) -> Result<(), Self::Error> {
- self.emit_usize(lazy.meta)?;
- self.emit_lazy_distance(*lazy)
- }
-}
-
-impl<'tcx> SpecializedEncoder<CrateNum> for EncodeContext<'tcx> {
- #[inline]
- fn specialized_encode(&mut self, cnum: &CrateNum) -> Result<(), Self::Error> {
- self.emit_u32(cnum.as_u32())
- }
-}
-
-impl<'tcx> SpecializedEncoder<DefId> for EncodeContext<'tcx> {
- #[inline]
- fn specialized_encode(&mut self, def_id: &DefId) -> Result<(), Self::Error> {
- let DefId {
- krate,
- index,
- } = *def_id;
-
- krate.encode(self)?;
- index.encode(self)
- }
-}
-
-impl<'tcx> SpecializedEncoder<DefIndex> for EncodeContext<'tcx> {
- #[inline]
- fn specialized_encode(&mut self, def_index: &DefIndex) -> Result<(), Self::Error> {
- self.emit_u32(def_index.as_u32())
- }
-}
-
-impl<'tcx> SpecializedEncoder<Span> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, span: &Span) -> Result<(), Self::Error> {
- if span.is_dummy() {
- return TAG_INVALID_SPAN.encode(self)
- }
-
- let span = span.data();
-
- // The Span infrastructure should make sure that this invariant holds:
- debug_assert!(span.lo <= span.hi);
-
- if !self.source_file_cache.contains(span.lo) {
- let source_map = self.tcx.sess.source_map();
- let source_file_index = source_map.lookup_source_file_idx(span.lo);
- self.source_file_cache = source_map.files()[source_file_index].clone();
- }
-
- if !self.source_file_cache.contains(span.hi) {
- // Unfortunately, macro expansion still sometimes generates Spans
-            // that are malformed in this way.
- return TAG_INVALID_SPAN.encode(self)
- }
-
- // HACK(eddyb) there's no way to indicate which crate a Span is coming
- // from right now, so decoding would fail to find the SourceFile if
- // it's not local to the crate the Span is found in.
- if self.source_file_cache.is_imported() {
- return TAG_INVALID_SPAN.encode(self)
- }
-
- TAG_VALID_SPAN.encode(self)?;
- span.lo.encode(self)?;
-
-        // Encode the length, which is usually less than span.hi and profits more
- // from the variable-length integer encoding that we use.
- let len = span.hi - span.lo;
- len.encode(self)
-
- // Don't encode the expansion context.
- }
-}
-
-impl SpecializedEncoder<Ident> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, ident: &Ident) -> Result<(), Self::Error> {
- // FIXME(jseyfried): intercrate hygiene
- ident.name.encode(self)
- }
-}
-
-impl<'tcx> SpecializedEncoder<LocalDefId> for EncodeContext<'tcx> {
- #[inline]
- fn specialized_encode(&mut self, def_id: &LocalDefId) -> Result<(), Self::Error> {
- self.specialized_encode(&def_id.to_def_id())
- }
-}
-
-impl<'tcx> SpecializedEncoder<Ty<'tcx>> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, ty: &Ty<'tcx>) -> Result<(), Self::Error> {
- ty_codec::encode_with_shorthand(self, ty, |ecx| &mut ecx.type_shorthands)
- }
-}
-
-impl<'tcx> SpecializedEncoder<interpret::AllocId> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, alloc_id: &interpret::AllocId) -> Result<(), Self::Error> {
- use std::collections::hash_map::Entry;
- let index = match self.interpret_allocs.entry(*alloc_id) {
- Entry::Occupied(e) => *e.get(),
- Entry::Vacant(e) => {
- let idx = self.interpret_allocs_inverse.len();
- self.interpret_allocs_inverse.push(*alloc_id);
- e.insert(idx);
- idx
- },
- };
-
- index.encode(self)
- }
-}
-
-impl<'tcx> SpecializedEncoder<&'tcx [(ty::Predicate<'tcx>, Span)]> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self,
- predicates: &&'tcx [(ty::Predicate<'tcx>, Span)])
- -> Result<(), Self::Error> {
- ty_codec::encode_spanned_predicates(self, predicates, |ecx| &mut ecx.predicate_shorthands)
- }
-}
-
-impl<'tcx> SpecializedEncoder<Fingerprint> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self, f: &Fingerprint) -> Result<(), Self::Error> {
- f.encode_opaque(&mut self.opaque)
- }
-}
-
-impl<'tcx, T: Encodable> SpecializedEncoder<mir::ClearCrossCrate<T>> for EncodeContext<'tcx> {
- fn specialized_encode(&mut self,
- _: &mir::ClearCrossCrate<T>)
- -> Result<(), Self::Error> {
- Ok(())
- }
-}
-
-impl<'tcx> TyEncoder for EncodeContext<'tcx> {
- fn position(&self) -> usize {
- self.opaque.position()
- }
-}
-
-/// Helper trait to allow overloading `EncodeContext::lazy` for iterators.
-trait EncodeContentsForLazy<T: ?Sized + LazyMeta> {
- fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) -> T::Meta;
-}
-
-impl<T: Encodable> EncodeContentsForLazy<T> for &T {
- fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) {
- self.encode(ecx).unwrap()
- }
-}
-
-impl<T: Encodable> EncodeContentsForLazy<T> for T {
- fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) {
- self.encode(ecx).unwrap()
- }
-}
-
-impl<I, T: Encodable> EncodeContentsForLazy<[T]> for I
- where I: IntoIterator,
- I::Item: EncodeContentsForLazy<T>,
-{
- fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) -> usize {
- self.into_iter().map(|value| value.encode_contents_for_lazy(ecx)).count()
- }
-}
-
-// Shorthand for `$self.$tables.$table.set($key, $self.lazy($value))`, which would
-// normally need extra variables to avoid errors about multiple mutable borrows.
-macro_rules! record {
- ($self:ident.$tables:ident.$table:ident[$key:expr] <- $value:expr) => {{
- {
- let value = $value;
- let lazy = $self.lazy(value);
- $self.$tables.$table.set($key, lazy);
- }
- }}
-}
-
-impl<'tcx> EncodeContext<'tcx> {
- fn emit_lazy_distance<T: ?Sized + LazyMeta>(
- &mut self,
- lazy: Lazy<T>,
- ) -> Result<(), <Self as Encoder>::Error> {
- let min_end = lazy.position.get() + T::min_size(lazy.meta);
- let distance = match self.lazy_state {
- LazyState::NoNode => bug!("emit_lazy_distance: outside of a metadata node"),
- LazyState::NodeStart(start) => {
- let start = start.get();
- assert!(min_end <= start);
- start - min_end
- }
- LazyState::Previous(last_min_end) => {
- assert!(
- last_min_end <= lazy.position,
- "make sure that the calls to `lazy*` \
- are in the same order as the metadata fields",
- );
- lazy.position.get() - last_min_end.get()
- }
- };
- self.lazy_state = LazyState::Previous(NonZeroUsize::new(min_end).unwrap());
- self.emit_usize(distance)
- }
-
- fn lazy<T: ?Sized + LazyMeta>(
- &mut self,
- value: impl EncodeContentsForLazy<T>,
- ) -> Lazy<T> {
- let pos = NonZeroUsize::new(self.position()).unwrap();
-
- assert_eq!(self.lazy_state, LazyState::NoNode);
- self.lazy_state = LazyState::NodeStart(pos);
- let meta = value.encode_contents_for_lazy(self);
- self.lazy_state = LazyState::NoNode;
-
- assert!(pos.get() + <T>::min_size(meta) <= self.position());
-
- Lazy::from_position_and_meta(pos, meta)
- }
-
- fn encode_info_for_items(&mut self) {
- let krate = self.tcx.hir().krate();
- let vis = Spanned { span: syntax_pos::DUMMY_SP, node: hir::VisibilityKind::Public };
- self.encode_info_for_mod(hir::CRATE_HIR_ID, &krate.module, &krate.attrs, &vis);
- krate.visit_all_item_likes(&mut self.as_deep_visitor());
- for macro_def in &krate.exported_macros {
- self.visit_macro_def(macro_def);
- }
- }
-
- fn encode_def_path_table(&mut self) -> Lazy<DefPathTable> {
- let definitions = self.tcx.hir().definitions();
- self.lazy(definitions.def_path_table())
- }
-
- fn encode_source_map(&mut self) -> Lazy<[syntax_pos::SourceFile]> {
- let source_map = self.tcx.sess.source_map();
- let all_source_files = source_map.files();
-
- let (working_dir, _cwd_remapped) = self.tcx.sess.working_dir.clone();
-
- let adapted = all_source_files.iter()
- .filter(|source_file| {
- // No need to re-export imported source_files, as any downstream
- // crate will import them from their original source.
- // FIXME(eddyb) the `Span` encoding should take that into account.
- !source_file.is_imported()
- })
- .map(|source_file| {
- match source_file.name {
-                    // The path of this SourceFile has been modified by
- // path-remapping, so we use it verbatim (and avoid
- // cloning the whole map in the process).
- _ if source_file.name_was_remapped => source_file.clone(),
-
- // Otherwise expand all paths to absolute paths because
- // any relative paths are potentially relative to a
- // wrong directory.
- FileName::Real(ref name) => {
- let mut adapted = (**source_file).clone();
- adapted.name = Path::new(&working_dir).join(name).into();
- adapted.name_hash = {
- let mut hasher: StableHasher = StableHasher::new();
- adapted.name.hash(&mut hasher);
- hasher.finish::<u128>()
- };
- Lrc::new(adapted)
- },
-
- // expanded code, not from a file
- _ => source_file.clone(),
- }
- })
- .collect::<Vec<_>>();
-
- self.lazy(adapted.iter().map(|rc| &**rc))
- }
-
- fn encode_crate_root(&mut self) -> Lazy<CrateRoot<'tcx>> {
- let is_proc_macro = self.tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
-
- let mut i = self.position();
-
- let crate_deps = self.encode_crate_deps();
- let dylib_dependency_formats = self.encode_dylib_dependency_formats();
- let dep_bytes = self.position() - i;
-
- // Encode the lib features.
- i = self.position();
- let lib_features = self.encode_lib_features();
- let lib_feature_bytes = self.position() - i;
-
- // Encode the language items.
- i = self.position();
- let lang_items = self.encode_lang_items();
- let lang_items_missing = self.encode_lang_items_missing();
- let lang_item_bytes = self.position() - i;
-
- // Encode the diagnostic items.
- i = self.position();
- let diagnostic_items = self.encode_diagnostic_items();
- let diagnostic_item_bytes = self.position() - i;
-
- // Encode the native libraries used
- i = self.position();
- let native_libraries = self.encode_native_libraries();
- let native_lib_bytes = self.position() - i;
-
- let foreign_modules = self.encode_foreign_modules();
-
- // Encode source_map
- i = self.position();
- let source_map = self.encode_source_map();
- let source_map_bytes = self.position() - i;
-
- // Encode DefPathTable
- i = self.position();
- let def_path_table = self.encode_def_path_table();
- let def_path_table_bytes = self.position() - i;
-
- // Encode the def IDs of impls, for coherence checking.
- i = self.position();
- let impls = self.encode_impls();
- let impl_bytes = self.position() - i;
-
- // Encode exported symbols info.
- i = self.position();
- let exported_symbols = self.tcx.exported_symbols(LOCAL_CRATE);
- let exported_symbols = self.encode_exported_symbols(&exported_symbols);
- let exported_symbols_bytes = self.position() - i;
-
- let tcx = self.tcx;
-
- // Encode the items.
- i = self.position();
- self.encode_info_for_items();
- let item_bytes = self.position() - i;
-
- // Encode the allocation index
- let interpret_alloc_index = {
- let mut interpret_alloc_index = Vec::new();
- let mut n = 0;
- trace!("beginning to encode alloc ids");
- loop {
- let new_n = self.interpret_allocs_inverse.len();
- // if we have found new ids, serialize those, too
- if n == new_n {
- // otherwise, abort
- break;
- }
- trace!("encoding {} further alloc ids", new_n - n);
- for idx in n..new_n {
- let id = self.interpret_allocs_inverse[idx];
- let pos = self.position() as u32;
- interpret_alloc_index.push(pos);
- interpret::specialized_encode_alloc_id(
- self,
- tcx,
- id,
- ).unwrap();
- }
- n = new_n;
- }
- self.lazy(interpret_alloc_index)
- };
-
- i = self.position();
- let per_def = LazyPerDefTables {
- kind: self.per_def.kind.encode(&mut self.opaque),
- visibility: self.per_def.visibility.encode(&mut self.opaque),
- span: self.per_def.span.encode(&mut self.opaque),
- attributes: self.per_def.attributes.encode(&mut self.opaque),
- children: self.per_def.children.encode(&mut self.opaque),
- stability: self.per_def.stability.encode(&mut self.opaque),
- deprecation: self.per_def.deprecation.encode(&mut self.opaque),
-
- ty: self.per_def.ty.encode(&mut self.opaque),
- fn_sig: self.per_def.fn_sig.encode(&mut self.opaque),
- impl_trait_ref: self.per_def.impl_trait_ref.encode(&mut self.opaque),
- inherent_impls: self.per_def.inherent_impls.encode(&mut self.opaque),
- variances: self.per_def.variances.encode(&mut self.opaque),
- generics: self.per_def.generics.encode(&mut self.opaque),
- predicates: self.per_def.predicates.encode(&mut self.opaque),
- predicates_defined_on: self.per_def.predicates_defined_on.encode(&mut self.opaque),
- super_predicates: self.per_def.super_predicates.encode(&mut self.opaque),
-
- mir: self.per_def.mir.encode(&mut self.opaque),
- promoted_mir: self.per_def.promoted_mir.encode(&mut self.opaque),
- };
- let per_def_bytes = self.position() - i;
-
- // Encode the proc macro data
- i = self.position();
- let proc_macro_data = self.encode_proc_macros();
- let proc_macro_data_bytes = self.position() - i;
-
- let attrs = tcx.hir().krate_attrs();
- let has_default_lib_allocator = attr::contains_name(&attrs, sym::default_lib_allocator);
- let has_global_allocator = *tcx.sess.has_global_allocator.get();
-
- let root = self.lazy(CrateRoot {
- name: tcx.crate_name(LOCAL_CRATE),
- extra_filename: tcx.sess.opts.cg.extra_filename.clone(),
- triple: tcx.sess.opts.target_triple.clone(),
- hash: tcx.crate_hash(LOCAL_CRATE),
- disambiguator: tcx.sess.local_crate_disambiguator(),
- panic_strategy: tcx.sess.panic_strategy(),
- edition: tcx.sess.edition(),
- has_global_allocator: has_global_allocator,
- has_panic_handler: tcx.has_panic_handler(LOCAL_CRATE),
- has_default_lib_allocator: has_default_lib_allocator,
- plugin_registrar_fn: tcx.plugin_registrar_fn(LOCAL_CRATE).map(|id| id.index),
- proc_macro_decls_static: if is_proc_macro {
- let id = tcx.proc_macro_decls_static(LOCAL_CRATE).unwrap();
- Some(id.index)
- } else {
- None
- },
- proc_macro_data,
- proc_macro_stability: if is_proc_macro {
- tcx.lookup_stability(DefId::local(CRATE_DEF_INDEX)).map(|stab| stab.clone())
- } else {
- None
- },
- compiler_builtins: attr::contains_name(&attrs, sym::compiler_builtins),
- needs_allocator: attr::contains_name(&attrs, sym::needs_allocator),
- needs_panic_runtime: attr::contains_name(&attrs, sym::needs_panic_runtime),
- no_builtins: attr::contains_name(&attrs, sym::no_builtins),
- panic_runtime: attr::contains_name(&attrs, sym::panic_runtime),
- profiler_runtime: attr::contains_name(&attrs, sym::profiler_runtime),
- sanitizer_runtime: attr::contains_name(&attrs, sym::sanitizer_runtime),
- symbol_mangling_version: tcx.sess.opts.debugging_opts.symbol_mangling_version,
-
- crate_deps,
- dylib_dependency_formats,
- lib_features,
- lang_items,
- diagnostic_items,
- lang_items_missing,
- native_libraries,
- foreign_modules,
- source_map,
- def_path_table,
- impls,
- exported_symbols,
- interpret_alloc_index,
- per_def,
- });
-
- let total_bytes = self.position();
-
- if self.tcx.sess.meta_stats() {
- let mut zero_bytes = 0;
- for e in self.opaque.data.iter() {
- if *e == 0 {
- zero_bytes += 1;
- }
- }
-
- println!("metadata stats:");
- println!(" dep bytes: {}", dep_bytes);
- println!(" lib feature bytes: {}", lib_feature_bytes);
- println!(" lang item bytes: {}", lang_item_bytes);
- println!(" diagnostic item bytes: {}", diagnostic_item_bytes);
- println!(" native bytes: {}", native_lib_bytes);
- println!(" source_map bytes: {}", source_map_bytes);
- println!(" impl bytes: {}", impl_bytes);
- println!(" exp. symbols bytes: {}", exported_symbols_bytes);
- println!(" def-path table bytes: {}", def_path_table_bytes);
- println!(" proc-macro-data-bytes: {}", proc_macro_data_bytes);
- println!(" item bytes: {}", item_bytes);
- println!(" per-def table bytes: {}", per_def_bytes);
- println!(" zero bytes: {}", zero_bytes);
- println!(" total bytes: {}", total_bytes);
- }
-
- root
- }
-}
-
-impl EncodeContext<'tcx> {
- fn encode_variances_of(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_variances_of({:?})", def_id);
- record!(self.per_def.variances[def_id] <- &self.tcx.variances_of(def_id)[..]);
- }
-
- fn encode_item_type(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_item_type({:?})", def_id);
- record!(self.per_def.ty[def_id] <- self.tcx.type_of(def_id));
- }
-
- fn encode_enum_variant_info(
- &mut self,
- enum_did: DefId,
- index: VariantIdx,
- ) {
- let tcx = self.tcx;
- let def = tcx.adt_def(enum_did);
- let variant = &def.variants[index];
- let def_id = variant.def_id;
- debug!("EncodeContext::encode_enum_variant_info({:?})", def_id);
-
- let data = VariantData {
- ctor_kind: variant.ctor_kind,
- discr: variant.discr,
- ctor: variant.ctor_def_id.map(|did| did.index),
- };
-
- let enum_id = tcx.hir().as_local_hir_id(enum_did).unwrap();
- let enum_vis = &tcx.hir().expect_item(enum_id).vis;
-
- record!(self.per_def.kind[def_id] <- EntryKind::Variant(self.lazy(data)));
- record!(self.per_def.visibility[def_id] <-
- ty::Visibility::from_hir(enum_vis, enum_id, self.tcx));
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- record!(self.per_def.attributes[def_id] <- &self.tcx.get_attrs(def_id)[..]);
- record!(self.per_def.children[def_id] <- variant.fields.iter().map(|f| {
- assert!(f.did.is_local());
- f.did.index
- }));
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- self.encode_item_type(def_id);
- if variant.ctor_kind == CtorKind::Fn {
- // FIXME(eddyb) encode signature only in `encode_enum_variant_ctor`.
- if let Some(ctor_def_id) = variant.ctor_def_id {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(ctor_def_id));
- }
- // FIXME(eddyb) is this ever used?
- self.encode_variances_of(def_id);
- }
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
-
- fn encode_enum_variant_ctor(
- &mut self,
- enum_did: DefId,
- index: VariantIdx,
- ) {
- let tcx = self.tcx;
- let def = tcx.adt_def(enum_did);
- let variant = &def.variants[index];
- let def_id = variant.ctor_def_id.unwrap();
- debug!("EncodeContext::encode_enum_variant_ctor({:?})", def_id);
-
- // FIXME(eddyb) encode only the `CtorKind` for constructors.
- let data = VariantData {
- ctor_kind: variant.ctor_kind,
- discr: variant.discr,
- ctor: Some(def_id.index),
- };
-
-        // Variant constructors have the same visibility as the parent enum, unless marked as
- // non-exhaustive, in which case they are lowered to `pub(crate)`.
- let enum_id = tcx.hir().as_local_hir_id(enum_did).unwrap();
- let enum_vis = &tcx.hir().expect_item(enum_id).vis;
- let mut ctor_vis = ty::Visibility::from_hir(enum_vis, enum_id, tcx);
- if variant.is_field_list_non_exhaustive() && ctor_vis == ty::Visibility::Public {
- ctor_vis = ty::Visibility::Restricted(DefId::local(CRATE_DEF_INDEX));
- }
-
- record!(self.per_def.kind[def_id] <- EntryKind::Variant(self.lazy(data)));
- record!(self.per_def.visibility[def_id] <- ctor_vis);
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- self.encode_item_type(def_id);
- if variant.ctor_kind == CtorKind::Fn {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
- self.encode_variances_of(def_id);
- }
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
-
- fn encode_info_for_mod(
- &mut self,
- id: hir::HirId,
- md: &hir::Mod,
- attrs: &[ast::Attribute],
- vis: &hir::Visibility,
- ) {
- let tcx = self.tcx;
- let def_id = tcx.hir().local_def_id(id);
- debug!("EncodeContext::encode_info_for_mod({:?})", def_id);
-
- let data = ModData {
- reexports: match tcx.module_exports(def_id) {
- Some(exports) => self.lazy(exports),
- _ => Lazy::empty(),
- },
- };
-
- record!(self.per_def.kind[def_id] <- EntryKind::Mod(self.lazy(data)));
- record!(self.per_def.visibility[def_id] <- ty::Visibility::from_hir(vis, id, self.tcx));
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- record!(self.per_def.attributes[def_id] <- attrs);
- record!(self.per_def.children[def_id] <- md.item_ids.iter().map(|item_id| {
- tcx.hir().local_def_id(item_id.id).index
- }));
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- }
-
- fn encode_field(
- &mut self,
- adt_def_id: DefId,
- variant_index: VariantIdx,
- field_index: usize,
- ) {
- let tcx = self.tcx;
- let variant = &tcx.adt_def(adt_def_id).variants[variant_index];
- let field = &variant.fields[field_index];
-
- let def_id = field.did;
- debug!("EncodeContext::encode_field({:?})", def_id);
-
- let variant_id = tcx.hir().as_local_hir_id(variant.def_id).unwrap();
- let variant_data = tcx.hir().expect_variant_data(variant_id);
-
- record!(self.per_def.kind[def_id] <- EntryKind::Field);
- record!(self.per_def.visibility[def_id] <- field.vis);
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- record!(self.per_def.attributes[def_id] <- &variant_data.fields()[field_index].attrs);
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- self.encode_item_type(def_id);
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- }
-
- fn encode_struct_ctor(&mut self, adt_def_id: DefId, def_id: DefId) {
- debug!("EncodeContext::encode_struct_ctor({:?})", def_id);
- let tcx = self.tcx;
- let adt_def = tcx.adt_def(adt_def_id);
- let variant = adt_def.non_enum_variant();
-
- let data = VariantData {
- ctor_kind: variant.ctor_kind,
- discr: variant.discr,
- ctor: Some(def_id.index),
- };
-
- let struct_id = tcx.hir().as_local_hir_id(adt_def_id).unwrap();
- let struct_vis = &tcx.hir().expect_item(struct_id).vis;
- let mut ctor_vis = ty::Visibility::from_hir(struct_vis, struct_id, tcx);
- for field in &variant.fields {
- if ctor_vis.is_at_least(field.vis, tcx) {
- ctor_vis = field.vis;
- }
- }
-
- // If the structure is marked as non_exhaustive then lower the visibility
- // to within the crate.
- if adt_def.non_enum_variant().is_field_list_non_exhaustive() &&
- ctor_vis == ty::Visibility::Public
- {
- ctor_vis = ty::Visibility::Restricted(DefId::local(CRATE_DEF_INDEX));
- }
-
- record!(self.per_def.kind[def_id] <- EntryKind::Struct(self.lazy(data), adt_def.repr));
- record!(self.per_def.visibility[def_id] <- ctor_vis);
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- self.encode_item_type(def_id);
- if variant.ctor_kind == CtorKind::Fn {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
- self.encode_variances_of(def_id);
- }
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
-
- fn encode_generics(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_generics({:?})", def_id);
- record!(self.per_def.generics[def_id] <- self.tcx.generics_of(def_id));
- }
-
- fn encode_predicates(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_predicates({:?})", def_id);
- record!(self.per_def.predicates[def_id] <- self.tcx.predicates_of(def_id));
- }
-
- fn encode_predicates_defined_on(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_predicates_defined_on({:?})", def_id);
- record!(self.per_def.predicates_defined_on[def_id] <-
- self.tcx.predicates_defined_on(def_id))
- }
-
- fn encode_super_predicates(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_super_predicates({:?})", def_id);
- record!(self.per_def.super_predicates[def_id] <- self.tcx.super_predicates_of(def_id));
- }
-
- fn encode_info_for_trait_item(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_info_for_trait_item({:?})", def_id);
- let tcx = self.tcx;
-
- let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
- let ast_item = tcx.hir().expect_trait_item(hir_id);
- let trait_item = tcx.associated_item(def_id);
-
- let container = match trait_item.defaultness {
- hir::Defaultness::Default { has_value: true } =>
- AssocContainer::TraitWithDefault,
- hir::Defaultness::Default { has_value: false } =>
- AssocContainer::TraitRequired,
- hir::Defaultness::Final =>
- span_bug!(ast_item.span, "traits cannot have final items"),
- };
-
- record!(self.per_def.kind[def_id] <- match trait_item.kind {
- ty::AssocKind::Const => {
- let rendered =
- hir::print::to_string(self.tcx.hir(), |s| s.print_trait_item(ast_item));
- let rendered_const = self.lazy(RenderedConst(rendered));
-
- EntryKind::AssocConst(container, ConstQualif { mir: 0 }, rendered_const)
- }
- ty::AssocKind::Method => {
- let fn_data = if let hir::TraitItemKind::Method(m_sig, m) = &ast_item.kind {
- let param_names = match *m {
- hir::TraitMethod::Required(ref names) => {
- self.encode_fn_param_names(names)
- }
- hir::TraitMethod::Provided(body) => {
- self.encode_fn_param_names_for_body(body)
- }
- };
- FnData {
- asyncness: m_sig.header.asyncness,
- constness: hir::Constness::NotConst,
- param_names,
- }
- } else {
- bug!()
- };
- EntryKind::Method(self.lazy(MethodData {
- fn_data,
- container,
- has_self: trait_item.method_has_self_argument,
- }))
- }
- ty::AssocKind::Type => EntryKind::AssocType(container),
- ty::AssocKind::OpaqueTy => span_bug!(ast_item.span, "opaque type in trait"),
- });
- record!(self.per_def.visibility[def_id] <- trait_item.vis);
- record!(self.per_def.span[def_id] <- ast_item.span);
- record!(self.per_def.attributes[def_id] <- &ast_item.attrs);
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- match trait_item.kind {
- ty::AssocKind::Const |
- ty::AssocKind::Method => {
- self.encode_item_type(def_id);
- }
- ty::AssocKind::Type => {
- if trait_item.defaultness.has_value() {
- self.encode_item_type(def_id);
- }
- }
- ty::AssocKind::OpaqueTy => unreachable!(),
- }
- if trait_item.kind == ty::AssocKind::Method {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
- self.encode_variances_of(def_id);
- }
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
-
- fn metadata_output_only(&self) -> bool {
- // MIR optimisation can be skipped when we're just interested in the metadata.
- !self.tcx.sess.opts.output_types.should_codegen()
- }
-
- fn encode_info_for_impl_item(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_info_for_impl_item({:?})", def_id);
- let tcx = self.tcx;
-
- let hir_id = self.tcx.hir().as_local_hir_id(def_id).unwrap();
- let ast_item = self.tcx.hir().expect_impl_item(hir_id);
- let impl_item = self.tcx.associated_item(def_id);
-
- let container = match impl_item.defaultness {
- hir::Defaultness::Default { has_value: true } => AssocContainer::ImplDefault,
- hir::Defaultness::Final => AssocContainer::ImplFinal,
- hir::Defaultness::Default { has_value: false } =>
- span_bug!(ast_item.span, "impl items always have values (currently)"),
- };
-
- record!(self.per_def.kind[def_id] <- match impl_item.kind {
- ty::AssocKind::Const => {
- if let hir::ImplItemKind::Const(_, body_id) = ast_item.kind {
- let mir = self.tcx.at(ast_item.span).mir_const_qualif(def_id).0;
-
- EntryKind::AssocConst(container,
- ConstQualif { mir },
- self.encode_rendered_const_for_body(body_id))
- } else {
- bug!()
- }
- }
- ty::AssocKind::Method => {
- let fn_data = if let hir::ImplItemKind::Method(ref sig, body) = ast_item.kind {
- FnData {
- asyncness: sig.header.asyncness,
- constness: sig.header.constness,
- param_names: self.encode_fn_param_names_for_body(body),
- }
- } else {
- bug!()
- };
- EntryKind::Method(self.lazy(MethodData {
- fn_data,
- container,
- has_self: impl_item.method_has_self_argument,
- }))
- }
- ty::AssocKind::OpaqueTy => EntryKind::AssocOpaqueTy(container),
- ty::AssocKind::Type => EntryKind::AssocType(container)
- });
- record!(self.per_def.visibility[def_id] <- impl_item.vis);
- record!(self.per_def.span[def_id] <- ast_item.span);
- record!(self.per_def.attributes[def_id] <- &ast_item.attrs);
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- self.encode_item_type(def_id);
- if impl_item.kind == ty::AssocKind::Method {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
- self.encode_variances_of(def_id);
- }
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- let mir = match ast_item.kind {
- hir::ImplItemKind::Const(..) => true,
- hir::ImplItemKind::Method(ref sig, _) => {
- let generics = self.tcx.generics_of(def_id);
- let needs_inline = (generics.requires_monomorphization(self.tcx) ||
- tcx.codegen_fn_attrs(def_id).requests_inline()) &&
- !self.metadata_output_only();
- let is_const_fn = sig.header.constness == hir::Constness::Const;
- let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir;
- needs_inline || is_const_fn || always_encode_mir
- },
- hir::ImplItemKind::OpaqueTy(..) |
- hir::ImplItemKind::TyAlias(..) => false,
- };
- if mir {
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
- }
-
- fn encode_fn_param_names_for_body(&mut self, body_id: hir::BodyId)
- -> Lazy<[ast::Name]> {
- self.tcx.dep_graph.with_ignore(|| {
- let body = self.tcx.hir().body(body_id);
- self.lazy(body.params.iter().map(|arg| {
- match arg.pat.kind {
- PatKind::Binding(_, _, ident, _) => ident.name,
- _ => kw::Invalid,
- }
- }))
- })
- }
-
- fn encode_fn_param_names(&mut self, param_names: &[ast::Ident]) -> Lazy<[ast::Name]> {
- self.lazy(param_names.iter().map(|ident| ident.name))
- }
-
- fn encode_optimized_mir(&mut self, def_id: DefId) {
- debug!("EntryBuilder::encode_mir({:?})", def_id);
- if self.tcx.mir_keys(LOCAL_CRATE).contains(&def_id) {
- record!(self.per_def.mir[def_id] <- self.tcx.optimized_mir(def_id));
- }
- }
-
- fn encode_promoted_mir(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_promoted_mir({:?})", def_id);
- if self.tcx.mir_keys(LOCAL_CRATE).contains(&def_id) {
- record!(self.per_def.promoted_mir[def_id] <- self.tcx.promoted_mir(def_id));
- }
- }
-
- // Encodes the inherent implementations of a structure, enumeration, or trait.
- fn encode_inherent_implementations(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_inherent_implementations({:?})", def_id);
- let implementations = self.tcx.inherent_impls(def_id);
- if !implementations.is_empty() {
- record!(self.per_def.inherent_impls[def_id] <- implementations.iter().map(|&def_id| {
- assert!(def_id.is_local());
- def_id.index
- }));
- }
- }
-
- fn encode_stability(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_stability({:?})", def_id);
- if let Some(stab) = self.tcx.lookup_stability(def_id) {
- record!(self.per_def.stability[def_id] <- stab)
- }
- }
-
- fn encode_deprecation(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_deprecation({:?})", def_id);
- if let Some(depr) = self.tcx.lookup_deprecation(def_id) {
- record!(self.per_def.deprecation[def_id] <- depr);
- }
- }
-
- fn encode_rendered_const_for_body(&mut self, body_id: hir::BodyId) -> Lazy<RenderedConst> {
- let body = self.tcx.hir().body(body_id);
- let rendered = hir::print::to_string(self.tcx.hir(), |s| s.print_expr(&body.value));
- let rendered_const = &RenderedConst(rendered);
- self.lazy(rendered_const)
- }
-
- fn encode_info_for_item(&mut self, def_id: DefId, item: &'tcx hir::Item) {
- let tcx = self.tcx;
-
- debug!("EncodeContext::encode_info_for_item({:?})", def_id);
-
- record!(self.per_def.kind[def_id] <- match item.kind {
- hir::ItemKind::Static(_, hir::MutMutable, _) => EntryKind::MutStatic,
- hir::ItemKind::Static(_, hir::MutImmutable, _) => EntryKind::ImmStatic,
- hir::ItemKind::Const(_, body_id) => {
- let mir = self.tcx.at(item.span).mir_const_qualif(def_id).0;
- EntryKind::Const(
- ConstQualif { mir },
- self.encode_rendered_const_for_body(body_id)
- )
- }
- hir::ItemKind::Fn(_, header, .., body) => {
- let data = FnData {
- asyncness: header.asyncness,
- constness: header.constness,
- param_names: self.encode_fn_param_names_for_body(body),
- };
-
- EntryKind::Fn(self.lazy(data))
- }
- hir::ItemKind::Mod(ref m) => {
- return self.encode_info_for_mod(item.hir_id, m, &item.attrs, &item.vis);
- }
- hir::ItemKind::ForeignMod(_) => EntryKind::ForeignMod,
- hir::ItemKind::GlobalAsm(..) => EntryKind::GlobalAsm,
- hir::ItemKind::TyAlias(..) => EntryKind::Type,
- hir::ItemKind::OpaqueTy(..) => EntryKind::OpaqueTy,
- hir::ItemKind::Enum(..) => EntryKind::Enum(self.tcx.adt_def(def_id).repr),
- hir::ItemKind::Struct(ref struct_def, _) => {
- let adt_def = self.tcx.adt_def(def_id);
- let variant = adt_def.non_enum_variant();
-
- // Encode def_ids for each field and method
- // for methods, write all the stuff get_trait_method
- // needs to know
- let ctor = struct_def.ctor_hir_id().map(|ctor_hir_id| {
- self.tcx.hir().local_def_id(ctor_hir_id).index
- });
-
- EntryKind::Struct(self.lazy(VariantData {
- ctor_kind: variant.ctor_kind,
- discr: variant.discr,
- ctor,
- }), adt_def.repr)
- }
- hir::ItemKind::Union(..) => {
- let adt_def = self.tcx.adt_def(def_id);
- let variant = adt_def.non_enum_variant();
-
- EntryKind::Union(self.lazy(VariantData {
- ctor_kind: variant.ctor_kind,
- discr: variant.discr,
- ctor: None,
- }), adt_def.repr)
- }
- hir::ItemKind::Impl(_, _, defaultness, ..) => {
- let trait_ref = self.tcx.impl_trait_ref(def_id);
- let polarity = self.tcx.impl_polarity(def_id);
- let parent = if let Some(trait_ref) = trait_ref {
- let trait_def = self.tcx.trait_def(trait_ref.def_id);
- trait_def.ancestors(self.tcx, def_id).nth(1).and_then(|node| {
- match node {
- specialization_graph::Node::Impl(parent) => Some(parent),
- _ => None,
- }
- })
- } else {
- None
- };
-
- // if this is an impl of `CoerceUnsized`, create its
- // "unsized info", else just store None
- let coerce_unsized_info =
- trait_ref.and_then(|t| {
- if Some(t.def_id) == self.tcx.lang_items().coerce_unsized_trait() {
- Some(self.tcx.at(item.span).coerce_unsized_info(def_id))
- } else {
- None
- }
- });
-
- let data = ImplData {
- polarity,
- defaultness,
- parent_impl: parent,
- coerce_unsized_info,
- };
-
- EntryKind::Impl(self.lazy(data))
- }
- hir::ItemKind::Trait(..) => {
- let trait_def = self.tcx.trait_def(def_id);
- let data = TraitData {
- unsafety: trait_def.unsafety,
- paren_sugar: trait_def.paren_sugar,
- has_auto_impl: self.tcx.trait_is_auto(def_id),
- is_marker: trait_def.is_marker,
- };
-
- EntryKind::Trait(self.lazy(data))
- }
- hir::ItemKind::TraitAlias(..) => EntryKind::TraitAlias,
- hir::ItemKind::ExternCrate(_) |
- hir::ItemKind::Use(..) => bug!("cannot encode info for item {:?}", item),
- });
- record!(self.per_def.visibility[def_id] <-
- ty::Visibility::from_hir(&item.vis, item.hir_id, tcx));
- record!(self.per_def.span[def_id] <- item.span);
- record!(self.per_def.attributes[def_id] <- &item.attrs);
- // FIXME(eddyb) there should be a nicer way to do this.
- match item.kind {
- hir::ItemKind::ForeignMod(ref fm) => record!(self.per_def.children[def_id] <-
- fm.items
- .iter()
- .map(|foreign_item| tcx.hir().local_def_id(
- foreign_item.hir_id).index)
- ),
- hir::ItemKind::Enum(..) => record!(self.per_def.children[def_id] <-
- self.tcx.adt_def(def_id).variants.iter().map(|v| {
- assert!(v.def_id.is_local());
- v.def_id.index
- })
- ),
- hir::ItemKind::Struct(..) |
- hir::ItemKind::Union(..) => record!(self.per_def.children[def_id] <-
- self.tcx.adt_def(def_id).non_enum_variant().fields.iter().map(|f| {
- assert!(f.did.is_local());
- f.did.index
- })
- ),
- hir::ItemKind::Impl(..) |
- hir::ItemKind::Trait(..) => {
- let associated_item_def_ids = self.tcx.associated_item_def_ids(def_id);
- record!(self.per_def.children[def_id] <-
- associated_item_def_ids.iter().map(|&def_id| {
- assert!(def_id.is_local());
- def_id.index
- })
- );
- }
- _ => {}
- }
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- match item.kind {
- hir::ItemKind::Static(..) |
- hir::ItemKind::Const(..) |
- hir::ItemKind::Fn(..) |
- hir::ItemKind::TyAlias(..) |
- hir::ItemKind::OpaqueTy(..) |
- hir::ItemKind::Enum(..) |
- hir::ItemKind::Struct(..) |
- hir::ItemKind::Union(..) |
- hir::ItemKind::Impl(..) => self.encode_item_type(def_id),
- _ => {}
- }
- if let hir::ItemKind::Fn(..) = item.kind {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
- }
- if let hir::ItemKind::Impl(..) = item.kind {
- if let Some(trait_ref) = self.tcx.impl_trait_ref(def_id) {
- record!(self.per_def.impl_trait_ref[def_id] <- trait_ref);
- }
- }
- self.encode_inherent_implementations(def_id);
- match item.kind {
- hir::ItemKind::Enum(..) |
- hir::ItemKind::Struct(..) |
- hir::ItemKind::Union(..) |
- hir::ItemKind::Fn(..) => self.encode_variances_of(def_id),
- _ => {}
- }
- match item.kind {
- hir::ItemKind::Static(..) |
- hir::ItemKind::Const(..) |
- hir::ItemKind::Fn(..) |
- hir::ItemKind::TyAlias(..) |
- hir::ItemKind::Enum(..) |
- hir::ItemKind::Struct(..) |
- hir::ItemKind::Union(..) |
- hir::ItemKind::Impl(..) |
- hir::ItemKind::OpaqueTy(..) |
- hir::ItemKind::Trait(..) |
- hir::ItemKind::TraitAlias(..) => {
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- }
- _ => {}
- }
- // The only time that `predicates_defined_on` is used (on
- // an external item) is for traits, during chalk lowering,
- // so only encode it in that case as an efficiency
- // hack. (No reason not to expand it in the future if
- // necessary.)
- match item.kind {
- hir::ItemKind::Trait(..) |
- hir::ItemKind::TraitAlias(..) => {
- self.encode_predicates_defined_on(def_id);
- }
- _ => {} // not *wrong* for other kinds of items, but not needed
- }
- match item.kind {
- hir::ItemKind::Trait(..) |
- hir::ItemKind::TraitAlias(..) => {
- self.encode_super_predicates(def_id);
- }
- _ => {}
- }
-
- let mir = match item.kind {
- hir::ItemKind::Static(..) | hir::ItemKind::Const(..) => true,
- hir::ItemKind::Fn(_, header, ..) => {
- let generics = tcx.generics_of(def_id);
- let needs_inline =
- (generics.requires_monomorphization(tcx) ||
- tcx.codegen_fn_attrs(def_id).requests_inline()) &&
- !self.metadata_output_only();
- let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir;
- needs_inline || header.constness == hir::Constness::Const || always_encode_mir
- }
- _ => false,
- };
- if mir {
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
- }
-
- /// Serialize the text of exported macros
- fn encode_info_for_macro_def(&mut self, macro_def: &hir::MacroDef) {
- use syntax::print::pprust;
- let def_id = self.tcx.hir().local_def_id(macro_def.hir_id);
- record!(self.per_def.kind[def_id] <- EntryKind::MacroDef(self.lazy(MacroDef {
- body: pprust::tts_to_string(macro_def.body.clone()),
- legacy: macro_def.legacy,
- })));
- record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
- record!(self.per_def.span[def_id] <- macro_def.span);
- record!(self.per_def.attributes[def_id] <- &macro_def.attrs);
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- }
-
- fn encode_info_for_generic_param(
- &mut self,
- def_id: DefId,
- kind: EntryKind<'tcx>,
- encode_type: bool,
- ) {
- record!(self.per_def.kind[def_id] <- kind);
- record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- if encode_type {
- self.encode_item_type(def_id);
- }
- }
-
- fn encode_info_for_closure(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_info_for_closure({:?})", def_id);
-
- // NOTE(eddyb) `tcx.type_of(def_id)` isn't used because it's fully generic,
- // including on the signature, which is inferred in `typeck_tables_of`.
- let hir_id = self.tcx.hir().as_local_hir_id(def_id).unwrap();
- let ty = self.tcx.typeck_tables_of(def_id).node_type(hir_id);
-
- record!(self.per_def.kind[def_id] <- match ty.kind {
- ty::Generator(def_id, ..) => {
- let layout = self.tcx.generator_layout(def_id);
- let data = GeneratorData {
- layout: layout.clone(),
- };
- EntryKind::Generator(self.lazy(data))
- }
-
- ty::Closure(..) => EntryKind::Closure,
-
- _ => bug!("closure that is neither generator nor closure"),
- });
- record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- record!(self.per_def.attributes[def_id] <- &self.tcx.get_attrs(def_id)[..]);
- self.encode_item_type(def_id);
- if let ty::Closure(def_id, substs) = ty.kind {
- record!(self.per_def.fn_sig[def_id] <- substs.as_closure().sig(def_id, self.tcx));
- }
- self.encode_generics(def_id);
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
-
- fn encode_info_for_anon_const(&mut self, def_id: DefId) {
- debug!("EncodeContext::encode_info_for_anon_const({:?})", def_id);
- let id = self.tcx.hir().as_local_hir_id(def_id).unwrap();
- let body_id = self.tcx.hir().body_owned_by(id);
- let const_data = self.encode_rendered_const_for_body(body_id);
- let mir = self.tcx.mir_const_qualif(def_id).0;
-
- record!(self.per_def.kind[def_id] <- EntryKind::Const(ConstQualif { mir }, const_data));
- record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
- record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
- self.encode_item_type(def_id);
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- self.encode_optimized_mir(def_id);
- self.encode_promoted_mir(def_id);
- }
-
- fn encode_native_libraries(&mut self) -> Lazy<[NativeLibrary]> {
- let used_libraries = self.tcx.native_libraries(LOCAL_CRATE);
- self.lazy(used_libraries.iter().cloned())
- }
-
- fn encode_foreign_modules(&mut self) -> Lazy<[ForeignModule]> {
- let foreign_modules = self.tcx.foreign_modules(LOCAL_CRATE);
- self.lazy(foreign_modules.iter().cloned())
- }
-
- fn encode_proc_macros(&mut self) -> Option<Lazy<[DefIndex]>> {
- let is_proc_macro = self.tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
- if is_proc_macro {
- let tcx = self.tcx;
- Some(self.lazy(tcx.hir().krate().items.values().filter_map(|item| {
- if item.attrs.iter().any(|attr| is_proc_macro_attr(attr)) {
- Some(item.hir_id.owner)
- } else {
- None
- }
- })))
- } else {
- None
- }
- }
-
- fn encode_crate_deps(&mut self) -> Lazy<[CrateDep]> {
- let crates = self.tcx.crates();
-
- let mut deps = crates
- .iter()
- .map(|&cnum| {
- let dep = CrateDep {
- name: self.tcx.original_crate_name(cnum),
- hash: self.tcx.crate_hash(cnum),
- host_hash: self.tcx.crate_host_hash(cnum),
- kind: self.tcx.dep_kind(cnum),
- extra_filename: self.tcx.extra_filename(cnum),
- };
- (cnum, dep)
- })
- .collect::<Vec<_>>();
-
- deps.sort_by_key(|&(cnum, _)| cnum);
-
- {
- // Sanity-check the crate numbers
- let mut expected_cnum = 1;
- for &(n, _) in &deps {
- assert_eq!(n, CrateNum::new(expected_cnum));
- expected_cnum += 1;
- }
- }
-
- // We're just going to write a list of crate 'name-hash-version's, with
- // the assumption that they are numbered 1 to n.
- // FIXME (#2166): This is not nearly enough to support correct versioning
- // but is enough to get transitive crate dependencies working.
- self.lazy(deps.iter().map(|&(_, ref dep)| dep))
- }
-
- fn encode_lib_features(&mut self) -> Lazy<[(ast::Name, Option<ast::Name>)]> {
- let tcx = self.tcx;
- let lib_features = tcx.lib_features();
- self.lazy(lib_features.to_vec())
- }
-
- fn encode_diagnostic_items(&mut self) -> Lazy<[(Symbol, DefIndex)]> {
- let tcx = self.tcx;
- let diagnostic_items = tcx.diagnostic_items(LOCAL_CRATE);
- self.lazy(diagnostic_items.iter().map(|(&name, def_id)| (name, def_id.index)))
- }
-
- fn encode_lang_items(&mut self) -> Lazy<[(DefIndex, usize)]> {
- let tcx = self.tcx;
- let lang_items = tcx.lang_items();
- let lang_items = lang_items.items().iter();
- self.lazy(lang_items.enumerate().filter_map(|(i, &opt_def_id)| {
- if let Some(def_id) = opt_def_id {
- if def_id.is_local() {
- return Some((def_id.index, i));
- }
- }
- None
- }))
- }
-
- fn encode_lang_items_missing(&mut self) -> Lazy<[lang_items::LangItem]> {
- let tcx = self.tcx;
- self.lazy(&tcx.lang_items().missing)
- }
-
- /// Encodes an index, mapping each trait to its (local) implementations.
- fn encode_impls(&mut self) -> Lazy<[TraitImpls]> {
- debug!("EncodeContext::encode_impls()");
- let tcx = self.tcx;
- let mut visitor = ImplVisitor {
- tcx,
- impls: FxHashMap::default(),
- };
- tcx.hir().krate().visit_all_item_likes(&mut visitor);
-
- let mut all_impls: Vec<_> = visitor.impls.into_iter().collect();
-
- // Bring everything into deterministic order for hashing
- all_impls.sort_by_cached_key(|&(trait_def_id, _)| {
- tcx.def_path_hash(trait_def_id)
- });
-
- let all_impls: Vec<_> = all_impls
- .into_iter()
- .map(|(trait_def_id, mut impls)| {
- // Bring everything into deterministic order for hashing
- impls.sort_by_cached_key(|&def_index| {
- tcx.hir().definitions().def_path_hash(def_index)
- });
-
- TraitImpls {
- trait_id: (trait_def_id.krate.as_u32(), trait_def_id.index),
- impls: self.lazy(&impls),
- }
- })
- .collect();
-
- self.lazy(&all_impls)
- }
-
- // Encodes all symbols exported from this crate into the metadata.
- //
- // This pass is seeded off the reachability list calculated in the
- // middle::reachable module but filters out items that either don't have a
- // symbol associated with them (they weren't translated) or if they're an FFI
- // definition (as that's not defined in this crate).
- fn encode_exported_symbols(&mut self,
- exported_symbols: &[(ExportedSymbol<'tcx>, SymbolExportLevel)])
- -> Lazy<[(ExportedSymbol<'tcx>, SymbolExportLevel)]> {
- // The metadata symbol name is special. It should not show up in
- // downstream crates.
- let metadata_symbol_name = SymbolName::new(&metadata_symbol_name(self.tcx));
-
- self.lazy(exported_symbols
- .iter()
- .filter(|&&(ref exported_symbol, _)| {
- match *exported_symbol {
- ExportedSymbol::NoDefId(symbol_name) => {
- symbol_name != metadata_symbol_name
- },
- _ => true,
- }
- })
- .cloned())
- }
-
- fn encode_dylib_dependency_formats(&mut self) -> Lazy<[Option<LinkagePreference>]> {
- let formats = self.tcx.dependency_formats(LOCAL_CRATE);
- for (ty, arr) in formats.iter() {
- if *ty != config::CrateType::Dylib {
- continue;
- }
- return self.lazy(arr.iter().map(|slot| {
- match *slot {
- Linkage::NotLinked |
- Linkage::IncludedFromDylib => None,
-
- Linkage::Dynamic => Some(LinkagePreference::RequireDynamic),
- Linkage::Static => Some(LinkagePreference::RequireStatic),
- }
- }));
- }
- Lazy::empty()
- }
-
- fn encode_info_for_foreign_item(
- &mut self,
- def_id: DefId,
- nitem: &hir::ForeignItem,
- ) {
- let tcx = self.tcx;
-
- debug!("EncodeContext::encode_info_for_foreign_item({:?})", def_id);
-
- record!(self.per_def.kind[def_id] <- match nitem.kind {
- hir::ForeignItemKind::Fn(_, ref names, _) => {
- let data = FnData {
- asyncness: hir::IsAsync::NotAsync,
- constness: hir::Constness::NotConst,
- param_names: self.encode_fn_param_names(names),
- };
- EntryKind::ForeignFn(self.lazy(data))
- }
- hir::ForeignItemKind::Static(_, hir::MutMutable) => EntryKind::ForeignMutStatic,
- hir::ForeignItemKind::Static(_, hir::MutImmutable) => EntryKind::ForeignImmStatic,
- hir::ForeignItemKind::Type => EntryKind::ForeignType,
- });
- record!(self.per_def.visibility[def_id] <-
- ty::Visibility::from_hir(&nitem.vis, nitem.hir_id, self.tcx));
- record!(self.per_def.span[def_id] <- nitem.span);
- record!(self.per_def.attributes[def_id] <- &nitem.attrs);
- self.encode_stability(def_id);
- self.encode_deprecation(def_id);
- self.encode_item_type(def_id);
- if let hir::ForeignItemKind::Fn(..) = nitem.kind {
- record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
- self.encode_variances_of(def_id);
- }
- self.encode_generics(def_id);
- self.encode_predicates(def_id);
- }
-}
-
-// FIXME(eddyb) make metadata encoding walk over all definitions, instead of HIR.
-impl Visitor<'tcx> for EncodeContext<'tcx> {
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
- }
- fn visit_expr(&mut self, ex: &'tcx hir::Expr) {
- intravisit::walk_expr(self, ex);
- self.encode_info_for_expr(ex);
- }
- fn visit_anon_const(&mut self, c: &'tcx AnonConst) {
- intravisit::walk_anon_const(self, c);
- let def_id = self.tcx.hir().local_def_id(c.hir_id);
- self.encode_info_for_anon_const(def_id);
- }
- fn visit_item(&mut self, item: &'tcx hir::Item) {
- intravisit::walk_item(self, item);
- let def_id = self.tcx.hir().local_def_id(item.hir_id);
- match item.kind {
- hir::ItemKind::ExternCrate(_) |
- hir::ItemKind::Use(..) => {} // ignore these
- _ => self.encode_info_for_item(def_id, item),
- }
- self.encode_addl_info_for_item(item);
- }
- fn visit_foreign_item(&mut self, ni: &'tcx hir::ForeignItem) {
- intravisit::walk_foreign_item(self, ni);
- let def_id = self.tcx.hir().local_def_id(ni.hir_id);
- self.encode_info_for_foreign_item(def_id, ni);
- }
- fn visit_generics(&mut self, generics: &'tcx hir::Generics) {
- intravisit::walk_generics(self, generics);
- self.encode_info_for_generics(generics);
- }
- fn visit_macro_def(&mut self, macro_def: &'tcx hir::MacroDef) {
- self.encode_info_for_macro_def(macro_def);
- }
-}
-
-impl EncodeContext<'tcx> {
- fn encode_fields(&mut self, adt_def_id: DefId) {
- let def = self.tcx.adt_def(adt_def_id);
- for (variant_index, variant) in def.variants.iter_enumerated() {
- for (field_index, _field) in variant.fields.iter().enumerate() {
- // FIXME(eddyb) `adt_def_id` is leftover from incremental isolation,
- // pass `def`, `variant` or `field` instead.
- self.encode_field(adt_def_id, variant_index, field_index);
- }
- }
- }
-
- fn encode_info_for_generics(&mut self, generics: &hir::Generics) {
- for param in &generics.params {
- let def_id = self.tcx.hir().local_def_id(param.hir_id);
- match param.kind {
- GenericParamKind::Lifetime { .. } => continue,
- GenericParamKind::Type { ref default, .. } => {
- self.encode_info_for_generic_param(
- def_id,
- EntryKind::TypeParam,
- default.is_some(),
- );
- }
- GenericParamKind::Const { .. } => {
- self.encode_info_for_generic_param(def_id, EntryKind::ConstParam, true);
- }
- }
- }
- }
-
- fn encode_info_for_expr(&mut self, expr: &hir::Expr) {
- match expr.kind {
- hir::ExprKind::Closure(..) => {
- let def_id = self.tcx.hir().local_def_id(expr.hir_id);
- self.encode_info_for_closure(def_id);
- }
- _ => {}
- }
- }
-
- /// In some cases, along with the item itself, we also
- /// encode some sub-items. Usually we want some info from the item
- /// so it's easier to do that here than to wait until we encounter them
- /// normally in the visitor walk.
- fn encode_addl_info_for_item(&mut self, item: &hir::Item) {
- let def_id = self.tcx.hir().local_def_id(item.hir_id);
- match item.kind {
- hir::ItemKind::Static(..) |
- hir::ItemKind::Const(..) |
- hir::ItemKind::Fn(..) |
- hir::ItemKind::Mod(..) |
- hir::ItemKind::ForeignMod(..) |
- hir::ItemKind::GlobalAsm(..) |
- hir::ItemKind::ExternCrate(..) |
- hir::ItemKind::Use(..) |
- hir::ItemKind::TyAlias(..) |
- hir::ItemKind::OpaqueTy(..) |
- hir::ItemKind::TraitAlias(..) => {
- // no sub-item recording needed in these cases
- }
- hir::ItemKind::Enum(..) => {
- self.encode_fields(def_id);
-
- let def = self.tcx.adt_def(def_id);
- for (i, variant) in def.variants.iter_enumerated() {
- // FIXME(eddyb) `def_id` is leftover from incremental isolation,
- // pass `def` or `variant` instead.
- self.encode_enum_variant_info(def_id, i);
-
- // FIXME(eddyb) `def_id` is leftover from incremental isolation,
- // pass `def`, `variant` or `ctor_def_id` instead.
- if let Some(_ctor_def_id) = variant.ctor_def_id {
- self.encode_enum_variant_ctor(def_id, i);
- }
- }
- }
- hir::ItemKind::Struct(ref struct_def, _) => {
- self.encode_fields(def_id);
-
- // If the struct has a constructor, encode it.
- if let Some(ctor_hir_id) = struct_def.ctor_hir_id() {
- let ctor_def_id = self.tcx.hir().local_def_id(ctor_hir_id);
- self.encode_struct_ctor(def_id, ctor_def_id);
- }
- }
- hir::ItemKind::Union(..) => {
- self.encode_fields(def_id);
- }
- hir::ItemKind::Impl(..) => {
- for &trait_item_def_id in self.tcx.associated_item_def_ids(def_id).iter() {
- self.encode_info_for_impl_item(trait_item_def_id);
- }
- }
- hir::ItemKind::Trait(..) => {
- for &item_def_id in self.tcx.associated_item_def_ids(def_id).iter() {
- self.encode_info_for_trait_item(item_def_id);
- }
- }
- }
- }
-}
-
-struct ImplVisitor<'tcx> {
- tcx: TyCtxt<'tcx>,
- impls: FxHashMap<DefId, Vec<DefIndex>>,
-}
-
-impl<'tcx, 'v> ItemLikeVisitor<'v> for ImplVisitor<'tcx> {
- fn visit_item(&mut self, item: &hir::Item) {
- if let hir::ItemKind::Impl(..) = item.kind {
- let impl_id = self.tcx.hir().local_def_id(item.hir_id);
- if let Some(trait_ref) = self.tcx.impl_trait_ref(impl_id) {
- self.impls
- .entry(trait_ref.def_id)
- .or_default()
- .push(impl_id.index);
- }
- }
- }
-
- fn visit_trait_item(&mut self, _trait_item: &'v hir::TraitItem) {}
-
- fn visit_impl_item(&mut self, _impl_item: &'v hir::ImplItem) {
- // handled in `visit_item` above
- }
-}
-
-// NOTE(eddyb) The following comment was preserved for posterity, even
-// though it's no longer relevant as EBML (which uses nested & tagged
-// "documents") was replaced with a scheme that can't go out of bounds.
-//
-// And here we run into yet another obscure archive bug: in which metadata
-// loaded from archives may have trailing garbage bytes. Awhile back one of
-// our tests was failing sporadically on the macOS 64-bit builders (both nopt
-// and opt) by having ebml generate an out-of-bounds panic when looking at
-// metadata.
-//
-// Upon investigation it turned out that the metadata file inside of an rlib
-// (and ar archive) was being corrupted. Some compilations would generate a
-// metadata file which would end in a few extra bytes, while other
-// compilations would not have these extra bytes appended to the end. These
-// extra bytes were interpreted by ebml as an extra tag, so they ended up
-// being interpreted causing the out-of-bounds.
-//
-// The root cause of why these extra bytes were appearing was never
-// discovered, and in the meantime the solution we're employing is to insert
-// the length of the metadata to the start of the metadata. Later on this
-// will allow us to slice the metadata to the precise length that we just
-// generated regardless of trailing bytes that end up in it.
-
-crate fn encode_metadata(tcx: TyCtxt<'_>) -> EncodedMetadata {
- let mut encoder = opaque::Encoder::new(vec![]);
- encoder.emit_raw_bytes(METADATA_HEADER);
-
- // Will be filled with the root position after encoding everything.
- encoder.emit_raw_bytes(&[0, 0, 0, 0]);
-
- // Since encoding metadata is not in a query, and nothing is cached,
- // there's no need to do dep-graph tracking for any of it.
- let (root, mut result) = tcx.dep_graph.with_ignore(move || {
- let mut ecx = EncodeContext {
- opaque: encoder,
- tcx,
- per_def: Default::default(),
- lazy_state: LazyState::NoNode,
- type_shorthands: Default::default(),
- predicate_shorthands: Default::default(),
- source_file_cache: tcx.sess.source_map().files()[0].clone(),
- interpret_allocs: Default::default(),
- interpret_allocs_inverse: Default::default(),
- };
-
- // Encode the rustc version string in a predictable location.
- rustc_version().encode(&mut ecx).unwrap();
-
- // Encode all the entries and extra information in the crate,
- // culminating in the `CrateRoot` which points to all of it.
- let root = ecx.encode_crate_root();
- (root, ecx.opaque.into_inner())
- });
-
- // Encode the root position.
- let header = METADATA_HEADER.len();
- let pos = root.position.get();
- result[header + 0] = (pos >> 24) as u8;
- result[header + 1] = (pos >> 16) as u8;
- result[header + 2] = (pos >> 8) as u8;
- result[header + 3] = (pos >> 0) as u8;
-
- EncodedMetadata { raw_data: result }
-}
pub mod error_codes;
-mod encoder;
-mod decoder;
mod dependency_format;
-mod cstore_impl;
mod foreign_modules;
mod link_args;
mod native_libs;
-mod schema;
-mod table;
+mod rmeta;
pub mod creader;
pub mod cstore;
tcx.hir().krate().visit_all_item_likes(&mut collector);
for attr in tcx.hir().krate().attrs.iter() {
- if attr.path == sym::link_args {
+ if attr.has_name(sym::link_args) {
if let Some(linkarg) = attr.value_str() {
collector.add_link_args(&linkarg.as_str());
}
use crate::cstore::MetadataBlob;
use crate::creader::Library;
-use crate::schema::{METADATA_HEADER, rustc_version};
+use crate::rmeta::{METADATA_HEADER, rustc_version};
use rustc_data_structures::fx::FxHashSet;
use rustc_data_structures::svh::Svh;
--- /dev/null
+// Decoding metadata from a single crate's metadata
+
+use crate::cstore::{self, CrateMetadata, MetadataBlob};
+use crate::rmeta::*;
+use crate::rmeta::table::{FixedSizeEncoding, PerDefTable};
+
+use rustc_index::vec::IndexVec;
+use rustc_data_structures::sync::Lrc;
+use rustc::hir::map::{DefKey, DefPath, DefPathData, DefPathHash};
+use rustc::hir;
+use rustc::middle::cstore::{LinkagePreference, NativeLibrary, ForeignModule};
+use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel};
+use rustc::hir::def::{self, Res, DefKind, CtorOf, CtorKind};
+use rustc::hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX, LOCAL_CRATE};
+use rustc_data_structures::fingerprint::Fingerprint;
+use rustc_data_structures::fx::FxHashMap;
+use rustc::dep_graph::{DepNodeIndex, DepKind};
+use rustc::middle::lang_items;
+use rustc::mir::{self, interpret};
+use rustc::mir::interpret::AllocDecodingSession;
+use rustc::session::Session;
+use rustc::ty::{self, Ty, TyCtxt};
+use rustc::ty::codec::TyDecoder;
+use rustc::mir::{Body, Promoted};
+use rustc::util::captures::Captures;
+
+use std::io;
+use std::mem;
+use std::num::NonZeroUsize;
+use std::u32;
+
+use rustc_serialize::{Decodable, Decoder, Encodable, SpecializedDecoder, opaque};
+use syntax::attr;
+use syntax::ast::{self, Ident};
+use syntax::source_map::{self, respan, Spanned};
+use syntax_expand::base::{SyntaxExtensionKind, SyntaxExtension};
+use syntax_expand::proc_macro::{AttrProcMacro, ProcMacroDerive, BangProcMacro};
+use syntax_pos::{self, Span, BytePos, Pos, DUMMY_SP, hygiene::MacroKind};
+use syntax_pos::symbol::{Symbol, sym};
+use log::debug;
+use proc_macro::bridge::client::ProcMacro;
+
+pub use cstore_impl::{provide, provide_extern};
+
+mod cstore_impl;
+
+crate struct DecodeContext<'a, 'tcx> {
+ opaque: opaque::Decoder<'a>,
+ cdata: Option<&'a CrateMetadata>,
+ sess: Option<&'tcx Session>,
+ tcx: Option<TyCtxt<'tcx>>,
+
+ // Cache the last used source_file for translating spans as an optimization.
+ last_source_file_index: usize,
+
+ lazy_state: LazyState,
+
+ // Used for decoding interpret::AllocIds in a cached & thread-safe manner.
+ alloc_decoding_session: Option<AllocDecodingSession<'a>>,
+}
+
+/// Abstract over the various ways one can create metadata decoders.
+crate trait Metadata<'a, 'tcx>: Copy {
+ fn raw_bytes(self) -> &'a [u8];
+ fn cdata(self) -> Option<&'a CrateMetadata> { None }
+ fn sess(self) -> Option<&'tcx Session> { None }
+ fn tcx(self) -> Option<TyCtxt<'tcx>> { None }
+
+ fn decoder(self, pos: usize) -> DecodeContext<'a, 'tcx> {
+ let tcx = self.tcx();
+ DecodeContext {
+ opaque: opaque::Decoder::new(self.raw_bytes(), pos),
+ cdata: self.cdata(),
+ sess: self.sess().or(tcx.map(|tcx| tcx.sess)),
+ tcx,
+ last_source_file_index: 0,
+ lazy_state: LazyState::NoNode,
+ alloc_decoding_session: self.cdata().map(|cdata| {
+ cdata.alloc_decoding_state.new_decoding_session()
+ }),
+ }
+ }
+}
+
+impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a MetadataBlob {
+ fn raw_bytes(self) -> &'a [u8] {
+ &self.0
+ }
+}
+
+impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a MetadataBlob, &'tcx Session) {
+ fn raw_bytes(self) -> &'a [u8] {
+ let (blob, _) = self;
+ &blob.0
+ }
+
+ fn sess(self) -> Option<&'tcx Session> {
+ let (_, sess) = self;
+ Some(sess)
+ }
+}
+
+impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a CrateMetadata {
+ fn raw_bytes(self) -> &'a [u8] {
+ self.blob.raw_bytes()
+ }
+ fn cdata(self) -> Option<&'a CrateMetadata> {
+ Some(self)
+ }
+}
+
+impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, &'tcx Session) {
+ fn raw_bytes(self) -> &'a [u8] {
+ self.0.raw_bytes()
+ }
+ fn cdata(self) -> Option<&'a CrateMetadata> {
+ Some(self.0)
+ }
+ fn sess(self) -> Option<&'tcx Session> {
+ Some(&self.1)
+ }
+}
+
+impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, TyCtxt<'tcx>) {
+ fn raw_bytes(self) -> &'a [u8] {
+ self.0.raw_bytes()
+ }
+ fn cdata(self) -> Option<&'a CrateMetadata> {
+ Some(self.0)
+ }
+ fn tcx(self) -> Option<TyCtxt<'tcx>> {
+ Some(self.1)
+ }
+}
+
+impl<'a, 'tcx, T: Encodable + Decodable> Lazy<T> {
+ crate fn decode<M: Metadata<'a, 'tcx>>(self, metadata: M) -> T {
+ let mut dcx = metadata.decoder(self.position.get());
+ dcx.lazy_state = LazyState::NodeStart(self.position);
+ T::decode(&mut dcx).unwrap()
+ }
+}
+
+impl<'a: 'x, 'tcx: 'x, 'x, T: Encodable + Decodable> Lazy<[T]> {
+ crate fn decode<M: Metadata<'a, 'tcx>>(
+ self,
+ metadata: M,
+ ) -> impl ExactSizeIterator<Item = T> + Captures<'a> + Captures<'tcx> + 'x {
+ let mut dcx = metadata.decoder(self.position.get());
+ dcx.lazy_state = LazyState::NodeStart(self.position);
+ (0..self.meta).map(move |_| T::decode(&mut dcx).unwrap())
+ }
+}
+
+impl<'a, 'tcx> DecodeContext<'a, 'tcx> {
+ fn tcx(&self) -> TyCtxt<'tcx> {
+ self.tcx.expect("missing TyCtxt in DecodeContext")
+ }
+
+ fn cdata(&self) -> &'a CrateMetadata {
+ self.cdata.expect("missing CrateMetadata in DecodeContext")
+ }
+
+ fn read_lazy_with_meta<T: ?Sized + LazyMeta>(
+ &mut self,
+ meta: T::Meta,
+ ) -> Result<Lazy<T>, <Self as Decoder>::Error> {
+ let min_size = T::min_size(meta);
+ let distance = self.read_usize()?;
+ let position = match self.lazy_state {
+ LazyState::NoNode => bug!("read_lazy_with_meta: outside of a metadata node"),
+ LazyState::NodeStart(start) => {
+ let start = start.get();
+ assert!(distance + min_size <= start);
+ start - distance - min_size
+ }
+ LazyState::Previous(last_min_end) => last_min_end.get() + distance,
+ };
+ self.lazy_state = LazyState::Previous(NonZeroUsize::new(position + min_size).unwrap());
+ Ok(Lazy::from_position_and_meta(NonZeroUsize::new(position).unwrap(), meta))
+ }
+}
+
+impl<'a, 'tcx> TyDecoder<'tcx> for DecodeContext<'a, 'tcx> {
+ #[inline]
+ fn tcx(&self) -> TyCtxt<'tcx> {
+ self.tcx.expect("missing TyCtxt in DecodeContext")
+ }
+
+ #[inline]
+ fn peek_byte(&self) -> u8 {
+ self.opaque.data[self.opaque.position()]
+ }
+
+ #[inline]
+ fn position(&self) -> usize {
+ self.opaque.position()
+ }
+
+ fn cached_ty_for_shorthand<F>(&mut self,
+ shorthand: usize,
+ or_insert_with: F)
+ -> Result<Ty<'tcx>, Self::Error>
+ where F: FnOnce(&mut Self) -> Result<Ty<'tcx>, Self::Error>
+ {
+ let tcx = self.tcx();
+
+ let key = ty::CReaderCacheKey {
+ cnum: self.cdata().cnum,
+ pos: shorthand,
+ };
+
+ if let Some(&ty) = tcx.rcache.borrow().get(&key) {
+ return Ok(ty);
+ }
+
+ let ty = or_insert_with(self)?;
+ tcx.rcache.borrow_mut().insert(key, ty);
+ Ok(ty)
+ }
+
+ fn with_position<F, R>(&mut self, pos: usize, f: F) -> R
+ where F: FnOnce(&mut Self) -> R
+ {
+ let new_opaque = opaque::Decoder::new(self.opaque.data, pos);
+ let old_opaque = mem::replace(&mut self.opaque, new_opaque);
+ let old_state = mem::replace(&mut self.lazy_state, LazyState::NoNode);
+ let r = f(self);
+ self.opaque = old_opaque;
+ self.lazy_state = old_state;
+ r
+ }
+
+ fn map_encoded_cnum_to_current(&self, cnum: CrateNum) -> CrateNum {
+ if cnum == LOCAL_CRATE {
+ self.cdata().cnum
+ } else {
+ self.cdata().cnum_map[cnum]
+ }
+ }
+}
+
+impl<'a, 'tcx, T: Encodable> SpecializedDecoder<Lazy<T>> for DecodeContext<'a, 'tcx> {
+ fn specialized_decode(&mut self) -> Result<Lazy<T>, Self::Error> {
+ self.read_lazy_with_meta(())
+ }
+}
+
+impl<'a, 'tcx, T: Encodable> SpecializedDecoder<Lazy<[T]>> for DecodeContext<'a, 'tcx> {
+ fn specialized_decode(&mut self) -> Result<Lazy<[T]>, Self::Error> {
+ let len = self.read_usize()?;
+ if len == 0 {
+ Ok(Lazy::empty())
+ } else {
+ self.read_lazy_with_meta(len)
+ }
+ }
+}
+
+impl<'a, 'tcx, T> SpecializedDecoder<Lazy<PerDefTable<T>>> for DecodeContext<'a, 'tcx>
+ where Option<T>: FixedSizeEncoding,
+{
+ fn specialized_decode(&mut self) -> Result<Lazy<PerDefTable<T>>, Self::Error> {
+ let len = self.read_usize()?;
+ self.read_lazy_with_meta(len)
+ }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<DefId> for DecodeContext<'a, 'tcx> {
+ #[inline]
+ fn specialized_decode(&mut self) -> Result<DefId, Self::Error> {
+ let krate = CrateNum::decode(self)?;
+ let index = DefIndex::decode(self)?;
+
+ Ok(DefId {
+ krate,
+ index,
+ })
+ }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<DefIndex> for DecodeContext<'a, 'tcx> {
+ #[inline]
+ fn specialized_decode(&mut self) -> Result<DefIndex, Self::Error> {
+ Ok(DefIndex::from_u32(self.read_u32()?))
+ }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<LocalDefId> for DecodeContext<'a, 'tcx> {
+ #[inline]
+ fn specialized_decode(&mut self) -> Result<LocalDefId, Self::Error> {
+ self.specialized_decode().map(|i| LocalDefId::from_def_id(i))
+ }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<interpret::AllocId> for DecodeContext<'a, 'tcx> {
+ fn specialized_decode(&mut self) -> Result<interpret::AllocId, Self::Error> {
+ if let Some(alloc_decoding_session) = self.alloc_decoding_session {
+ alloc_decoding_session.decode_alloc_id(self)
+ } else {
+ bug!("Attempting to decode interpret::AllocId without CrateMetadata")
+ }
+ }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<Span> for DecodeContext<'a, 'tcx> {
+ fn specialized_decode(&mut self) -> Result<Span, Self::Error> {
+ let tag = u8::decode(self)?;
+
+ if tag == TAG_INVALID_SPAN {
+ return Ok(DUMMY_SP)
+ }
+
+ debug_assert_eq!(tag, TAG_VALID_SPAN);
+
+ let lo = BytePos::decode(self)?;
+ let len = BytePos::decode(self)?;
+ let hi = lo + len;
+
+ let sess = if let Some(sess) = self.sess {
+ sess
+ } else {
+ bug!("Cannot decode Span without Session.")
+ };
+
+ let imported_source_files = self.cdata().imported_source_files(&sess.source_map());
+ let source_file = {
+ // Optimize for the case that most spans within a translated item
+ // originate from the same source_file.
+ let last_source_file = &imported_source_files[self.last_source_file_index];
+
+ if lo >= last_source_file.original_start_pos &&
+ lo <= last_source_file.original_end_pos {
+ last_source_file
+ } else {
+ let mut a = 0;
+ let mut b = imported_source_files.len();
+
+ while b - a > 1 {
+ let m = (a + b) / 2;
+ if imported_source_files[m].original_start_pos > lo {
+ b = m;
+ } else {
+ a = m;
+ }
+ }
+
+ self.last_source_file_index = a;
+ &imported_source_files[a]
+ }
+ };
+
+ // Make sure our binary search above is correct.
+ debug_assert!(lo >= source_file.original_start_pos &&
+ lo <= source_file.original_end_pos);
+
+ // Make sure we correctly filtered out invalid spans during encoding
+ debug_assert!(hi >= source_file.original_start_pos &&
+ hi <= source_file.original_end_pos);
+
+ let lo = (lo + source_file.translated_source_file.start_pos)
+ - source_file.original_start_pos;
+ let hi = (hi + source_file.translated_source_file.start_pos)
+ - source_file.original_start_pos;
+
+ Ok(Span::with_root_ctxt(lo, hi))
+ }
+}
+
+impl SpecializedDecoder<Ident> for DecodeContext<'_, '_> {
+ fn specialized_decode(&mut self) -> Result<Ident, Self::Error> {
+ // FIXME(jseyfried): intercrate hygiene
+
+ Ok(Ident::with_dummy_span(Symbol::decode(self)?))
+ }
+}
+
+impl<'a, 'tcx> SpecializedDecoder<Fingerprint> for DecodeContext<'a, 'tcx> {
+ fn specialized_decode(&mut self) -> Result<Fingerprint, Self::Error> {
+ Fingerprint::decode_opaque(&mut self.opaque)
+ }
+}
+
+impl<'a, 'tcx, T: Decodable> SpecializedDecoder<mir::ClearCrossCrate<T>>
+for DecodeContext<'a, 'tcx> {
+ #[inline]
+ fn specialized_decode(&mut self) -> Result<mir::ClearCrossCrate<T>, Self::Error> {
+ Ok(mir::ClearCrossCrate::Clear)
+ }
+}
+
+implement_ty_decoder!( DecodeContext<'a, 'tcx> );
+
+impl<'tcx> MetadataBlob {
+ crate fn is_compatible(&self) -> bool {
+ self.raw_bytes().starts_with(METADATA_HEADER)
+ }
+
+ crate fn get_rustc_version(&self) -> String {
+ Lazy::<String>::from_position(
+ NonZeroUsize::new(METADATA_HEADER.len() + 4).unwrap(),
+ ).decode(self)
+ }
+
+ crate fn get_root(&self) -> CrateRoot<'tcx> {
+ let slice = self.raw_bytes();
+ let offset = METADATA_HEADER.len();
+ let pos = (((slice[offset + 0] as u32) << 24) | ((slice[offset + 1] as u32) << 16) |
+ ((slice[offset + 2] as u32) << 8) |
+ ((slice[offset + 3] as u32) << 0)) as usize;
+ Lazy::<CrateRoot<'tcx>>::from_position(
+ NonZeroUsize::new(pos).unwrap(),
+ ).decode(self)
+ }
+
+    crate fn list_crate_metadata(&self,
+                                 out: &mut dyn io::Write) -> io::Result<()> {
+        writeln!(out, "=External Dependencies=")?;
+        let root = self.get_root();
+        for (i, dep) in root.crate_deps
+                            .decode(self)
+                            .enumerate() {
+            writeln!(out, "{} {}{}", i + 1, dep.name, dep.extra_filename)?;
+        }
+        writeln!(out)?;
+        Ok(())
+    }
+}
+
+impl<'tcx> EntryKind<'tcx> {
+ fn def_kind(&self) -> Option<DefKind> {
+ Some(match *self {
+ EntryKind::Const(..) => DefKind::Const,
+ EntryKind::AssocConst(..) => DefKind::AssocConst,
+ EntryKind::ImmStatic |
+ EntryKind::MutStatic |
+ EntryKind::ForeignImmStatic |
+ EntryKind::ForeignMutStatic => DefKind::Static,
+ EntryKind::Struct(_, _) => DefKind::Struct,
+ EntryKind::Union(_, _) => DefKind::Union,
+ EntryKind::Fn(_) |
+ EntryKind::ForeignFn(_) => DefKind::Fn,
+ EntryKind::Method(_) => DefKind::Method,
+ EntryKind::Type => DefKind::TyAlias,
+ EntryKind::TypeParam => DefKind::TyParam,
+ EntryKind::ConstParam => DefKind::ConstParam,
+ EntryKind::OpaqueTy => DefKind::OpaqueTy,
+ EntryKind::AssocType(_) => DefKind::AssocTy,
+ EntryKind::AssocOpaqueTy(_) => DefKind::AssocOpaqueTy,
+ EntryKind::Mod(_) => DefKind::Mod,
+ EntryKind::Variant(_) => DefKind::Variant,
+ EntryKind::Trait(_) => DefKind::Trait,
+ EntryKind::TraitAlias => DefKind::TraitAlias,
+ EntryKind::Enum(..) => DefKind::Enum,
+ EntryKind::MacroDef(_) => DefKind::Macro(MacroKind::Bang),
+ EntryKind::ForeignType => DefKind::ForeignTy,
+
+ EntryKind::ForeignMod |
+ EntryKind::GlobalAsm |
+ EntryKind::Impl(_) |
+ EntryKind::Field |
+ EntryKind::Generator(_) |
+ EntryKind::Closure => return None,
+ })
+ }
+}
+
+impl<'a, 'tcx> CrateMetadata {
+ fn is_proc_macro_crate(&self) -> bool {
+ self.root.proc_macro_decls_static.is_some()
+ }
+
+ fn is_proc_macro(&self, id: DefIndex) -> bool {
+ self.is_proc_macro_crate() &&
+            self.root.proc_macro_data.unwrap().decode(self).any(|x| x == id)
+ }
+
+ fn maybe_kind(&self, item_id: DefIndex) -> Option<EntryKind<'tcx>> {
+ self.root.per_def.kind.get(self, item_id).map(|k| k.decode(self))
+ }
+
+ fn kind(&self, item_id: DefIndex) -> EntryKind<'tcx> {
+ assert!(!self.is_proc_macro(item_id));
+ self.maybe_kind(item_id).unwrap_or_else(|| {
+ bug!(
+ "CrateMetadata::kind({:?}): id not found, in crate {:?} with number {}",
+ item_id,
+ self.root.name,
+ self.cnum,
+ )
+ })
+ }
+
+ fn local_def_id(&self, index: DefIndex) -> DefId {
+ DefId {
+ krate: self.cnum,
+ index,
+ }
+ }
+
+ fn raw_proc_macro(&self, id: DefIndex) -> &ProcMacro {
+        // `DefIndex`es in `root.proc_macro_data` have a one-to-one correspondence
+        // with items in `raw_proc_macros`.
+        // NOTE: If you update the order of macros in `proc_macro_data` for any reason,
+        // you must also update src/libsyntax_ext/proc_macro_harness.rs.
+        // Failing to do so will result in incorrect data being associated
+        // with proc macros when deserialized.
+ let pos = self.root.proc_macro_data.unwrap().decode(self).position(|i| i == id).unwrap();
+ &self.raw_proc_macros.unwrap()[pos]
+ }
+
+ fn item_name(&self, item_index: DefIndex) -> Symbol {
+ if !self.is_proc_macro(item_index) {
+ self.def_key(item_index)
+ .disambiguated_data
+ .data
+ .get_opt_name()
+ .expect("no name in item_name")
+ } else {
+ Symbol::intern(self.raw_proc_macro(item_index).name())
+ }
+ }
+
+ fn def_kind(&self, index: DefIndex) -> Option<DefKind> {
+ if !self.is_proc_macro(index) {
+ self.kind(index).def_kind()
+ } else {
+ Some(DefKind::Macro(
+ macro_kind(self.raw_proc_macro(index))
+ ))
+ }
+ }
+
+ fn get_span(&self, index: DefIndex, sess: &Session) -> Span {
+ self.root.per_def.span.get(self, index).unwrap().decode((self, sess))
+ }
+
+ fn load_proc_macro(&self, id: DefIndex, sess: &Session) -> SyntaxExtension {
+ let (name, kind, helper_attrs) = match *self.raw_proc_macro(id) {
+ ProcMacro::CustomDerive { trait_name, attributes, client } => {
+ let helper_attrs =
+ attributes.iter().cloned().map(Symbol::intern).collect::<Vec<_>>();
+ (
+ trait_name,
+ SyntaxExtensionKind::Derive(Box::new(ProcMacroDerive { client })),
+ helper_attrs,
+ )
+ }
+ ProcMacro::Attr { name, client } => (
+ name, SyntaxExtensionKind::Attr(Box::new(AttrProcMacro { client })), Vec::new()
+ ),
+ ProcMacro::Bang { name, client } => (
+ name, SyntaxExtensionKind::Bang(Box::new(BangProcMacro { client })), Vec::new()
+ )
+ };
+
+ SyntaxExtension::new(
+ &sess.parse_sess,
+ kind,
+ self.get_span(id, sess),
+ helper_attrs,
+ self.root.edition,
+ Symbol::intern(name),
+ &self.get_item_attrs(id, sess),
+ )
+ }
+
+ fn get_trait_def(&self, item_id: DefIndex, sess: &Session) -> ty::TraitDef {
+ match self.kind(item_id) {
+ EntryKind::Trait(data) => {
+ let data = data.decode((self, sess));
+ ty::TraitDef::new(self.local_def_id(item_id),
+ data.unsafety,
+ data.paren_sugar,
+ data.has_auto_impl,
+ data.is_marker,
+ self.def_path_table.def_path_hash(item_id))
+ },
+ EntryKind::TraitAlias => {
+ ty::TraitDef::new(self.local_def_id(item_id),
+ hir::Unsafety::Normal,
+ false,
+ false,
+ false,
+ self.def_path_table.def_path_hash(item_id))
+ },
+ _ => bug!("def-index does not refer to trait or trait alias"),
+ }
+ }
+
+ fn get_variant(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ kind: &EntryKind<'_>,
+ index: DefIndex,
+ parent_did: DefId,
+ ) -> ty::VariantDef {
+ let data = match kind {
+ EntryKind::Variant(data) |
+ EntryKind::Struct(data, _) |
+ EntryKind::Union(data, _) => data.decode(self),
+ _ => bug!(),
+ };
+
+ let adt_kind = match kind {
+ EntryKind::Variant(_) => ty::AdtKind::Enum,
+ EntryKind::Struct(..) => ty::AdtKind::Struct,
+ EntryKind::Union(..) => ty::AdtKind::Union,
+ _ => bug!(),
+ };
+
+ let variant_did = if adt_kind == ty::AdtKind::Enum {
+ Some(self.local_def_id(index))
+ } else {
+ None
+ };
+ let ctor_did = data.ctor.map(|index| self.local_def_id(index));
+
+ ty::VariantDef::new(
+ tcx,
+ Ident::with_dummy_span(self.item_name(index)),
+ variant_did,
+ ctor_did,
+ data.discr,
+ self.root.per_def.children.get(self, index).unwrap_or(Lazy::empty())
+ .decode(self).map(|index| ty::FieldDef {
+ did: self.local_def_id(index),
+ ident: Ident::with_dummy_span(self.item_name(index)),
+ vis: self.get_visibility(index),
+ }).collect(),
+ data.ctor_kind,
+ adt_kind,
+ parent_did,
+ false,
+ )
+ }
+
+ fn get_adt_def(&self, item_id: DefIndex, tcx: TyCtxt<'tcx>) -> &'tcx ty::AdtDef {
+ let kind = self.kind(item_id);
+ let did = self.local_def_id(item_id);
+
+ let (adt_kind, repr) = match kind {
+ EntryKind::Enum(repr) => (ty::AdtKind::Enum, repr),
+ EntryKind::Struct(_, repr) => (ty::AdtKind::Struct, repr),
+ EntryKind::Union(_, repr) => (ty::AdtKind::Union, repr),
+ _ => bug!("get_adt_def called on a non-ADT {:?}", did),
+ };
+
+ let variants = if let ty::AdtKind::Enum = adt_kind {
+ self.root.per_def.children.get(self, item_id).unwrap_or(Lazy::empty())
+ .decode(self)
+ .map(|index| {
+ self.get_variant(tcx, &self.kind(index), index, did)
+ })
+ .collect()
+ } else {
+ std::iter::once(self.get_variant(tcx, &kind, item_id, did)).collect()
+ };
+
+ tcx.alloc_adt_def(did, adt_kind, variants, repr)
+ }
+
+ fn get_explicit_predicates(
+ &self,
+ item_id: DefIndex,
+ tcx: TyCtxt<'tcx>,
+ ) -> ty::GenericPredicates<'tcx> {
+ self.root.per_def.explicit_predicates.get(self, item_id).unwrap().decode((self, tcx))
+ }
+
+ fn get_inferred_outlives(
+ &self,
+ item_id: DefIndex,
+ tcx: TyCtxt<'tcx>,
+ ) -> &'tcx [(ty::Predicate<'tcx>, Span)] {
+ self.root.per_def.inferred_outlives.get(self, item_id).map(|predicates| {
+ predicates.decode((self, tcx))
+ }).unwrap_or_default()
+ }
+
+ fn get_super_predicates(
+ &self,
+ item_id: DefIndex,
+ tcx: TyCtxt<'tcx>,
+ ) -> ty::GenericPredicates<'tcx> {
+ self.root.per_def.super_predicates.get(self, item_id).unwrap().decode((self, tcx))
+ }
+
+ fn get_generics(&self, item_id: DefIndex, sess: &Session) -> ty::Generics {
+ self.root.per_def.generics.get(self, item_id).unwrap().decode((self, sess))
+ }
+
+ fn get_type(&self, id: DefIndex, tcx: TyCtxt<'tcx>) -> Ty<'tcx> {
+ self.root.per_def.ty.get(self, id).unwrap().decode((self, tcx))
+ }
+
+ fn get_stability(&self, id: DefIndex) -> Option<attr::Stability> {
+        if self.is_proc_macro(id) {
+            self.root.proc_macro_stability.clone()
+        } else {
+            self.root.per_def.stability.get(self, id).map(|stab| stab.decode(self))
+        }
+ }
+
+ fn get_deprecation(&self, id: DefIndex) -> Option<attr::Deprecation> {
+ self.root.per_def.deprecation.get(self, id)
+ .filter(|_| !self.is_proc_macro(id))
+ .map(|depr| depr.decode(self))
+ }
+
+ fn get_visibility(&self, id: DefIndex) -> ty::Visibility {
+        if self.is_proc_macro(id) {
+            ty::Visibility::Public
+        } else {
+            self.root.per_def.visibility.get(self, id).unwrap().decode(self)
+        }
+ }
+
+ fn get_impl_data(&self, id: DefIndex) -> ImplData {
+ match self.kind(id) {
+ EntryKind::Impl(data) => data.decode(self),
+ _ => bug!(),
+ }
+ }
+
+ fn get_parent_impl(&self, id: DefIndex) -> Option<DefId> {
+ self.get_impl_data(id).parent_impl
+ }
+
+ fn get_impl_polarity(&self, id: DefIndex) -> ty::ImplPolarity {
+ self.get_impl_data(id).polarity
+ }
+
+ fn get_impl_defaultness(&self, id: DefIndex) -> hir::Defaultness {
+ self.get_impl_data(id).defaultness
+ }
+
+ fn get_coerce_unsized_info(
+ &self,
+ id: DefIndex,
+ ) -> Option<ty::adjustment::CoerceUnsizedInfo> {
+ self.get_impl_data(id).coerce_unsized_info
+ }
+
+ fn get_impl_trait(&self, id: DefIndex, tcx: TyCtxt<'tcx>) -> Option<ty::TraitRef<'tcx>> {
+ self.root.per_def.impl_trait_ref.get(self, id).map(|tr| tr.decode((self, tcx)))
+ }
+
+ /// Iterates over all the stability attributes in the given crate.
+ fn get_lib_features(&self, tcx: TyCtxt<'tcx>) -> &'tcx [(ast::Name, Option<ast::Name>)] {
+        // FIXME: For a proc macro crate, it is unclear whether we should return the
+        // "host" features or an empty Vec. Neither causes an ICE.
+ tcx.arena.alloc_from_iter(self.root
+ .lib_features
+ .decode(self))
+ }
+
+ /// Iterates over the language items in the given crate.
+ fn get_lang_items(&self, tcx: TyCtxt<'tcx>) -> &'tcx [(DefId, usize)] {
+ if self.is_proc_macro_crate() {
+ // Proc macro crates do not export any lang-items to the target.
+ &[]
+ } else {
+ tcx.arena.alloc_from_iter(self.root
+ .lang_items
+ .decode(self)
+ .map(|(def_index, index)| (self.local_def_id(def_index), index)))
+ }
+ }
+
+ /// Iterates over the diagnostic items in the given crate.
+ fn get_diagnostic_items(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ ) -> &'tcx FxHashMap<Symbol, DefId> {
+ tcx.arena.alloc(if self.is_proc_macro_crate() {
+ // Proc macro crates do not export any diagnostic-items to the target.
+ Default::default()
+ } else {
+ self.root
+ .diagnostic_items
+ .decode(self)
+ .map(|(name, def_index)| (name, self.local_def_id(def_index)))
+ .collect()
+ })
+ }
+
+ /// Iterates over each child of the given item.
+ fn each_child_of_item<F>(&self, id: DefIndex, mut callback: F, sess: &Session)
+ where F: FnMut(def::Export<hir::HirId>)
+ {
+ if let Some(proc_macros_ids) = self.root.proc_macro_data.map(|d| d.decode(self)) {
+            // If we are loading as a proc macro, we want to return the view of this
+            // crate as a proc macro crate.
+ if id == CRATE_DEF_INDEX {
+ for def_index in proc_macros_ids {
+ let raw_macro = self.raw_proc_macro(def_index);
+ let res = Res::Def(
+ DefKind::Macro(macro_kind(raw_macro)),
+ self.local_def_id(def_index),
+ );
+ let ident = Ident::from_str(raw_macro.name());
+ callback(def::Export {
+                        ident,
+                        res,
+ vis: ty::Visibility::Public,
+ span: DUMMY_SP,
+ });
+ }
+ }
+ return
+ }
+
+ // Find the item.
+ let kind = match self.maybe_kind(id) {
+ None => return,
+ Some(kind) => kind,
+ };
+
+ // Iterate over all children.
+ let macros_only = self.dep_kind.lock().macros_only();
+ let children = self.root.per_def.children.get(self, id).unwrap_or(Lazy::empty());
+ for child_index in children.decode((self, sess)) {
+ if macros_only {
+ continue
+ }
+
+ // Get the item.
+ if let Some(child_kind) = self.maybe_kind(child_index) {
+ match child_kind {
+ EntryKind::MacroDef(..) => {}
+ _ if macros_only => continue,
+ _ => {}
+ }
+
+ // Hand off the item to the callback.
+ match child_kind {
+ // FIXME(eddyb) Don't encode these in children.
+ EntryKind::ForeignMod => {
+ let child_children =
+ self.root.per_def.children.get(self, child_index)
+ .unwrap_or(Lazy::empty());
+ for child_index in child_children.decode((self, sess)) {
+ if let Some(kind) = self.def_kind(child_index) {
+ callback(def::Export {
+ res: Res::Def(kind, self.local_def_id(child_index)),
+ ident: Ident::with_dummy_span(self.item_name(child_index)),
+ vis: self.get_visibility(child_index),
+ span: self.root.per_def.span.get(self, child_index).unwrap()
+ .decode((self, sess)),
+ });
+ }
+ }
+ continue;
+ }
+ EntryKind::Impl(_) => continue,
+
+ _ => {}
+ }
+
+ let def_key = self.def_key(child_index);
+ let span = self.get_span(child_index, sess);
+ if let (Some(kind), Some(name)) =
+ (self.def_kind(child_index), def_key.disambiguated_data.data.get_opt_name()) {
+ let ident = Ident::with_dummy_span(name);
+ let vis = self.get_visibility(child_index);
+ let def_id = self.local_def_id(child_index);
+ let res = Res::Def(kind, def_id);
+ callback(def::Export { res, ident, vis, span });
+ // For non-re-export structs and variants add their constructors to children.
+ // Re-export lists automatically contain constructors when necessary.
+ match kind {
+ DefKind::Struct => {
+ if let Some(ctor_def_id) = self.get_ctor_def_id(child_index) {
+ let ctor_kind = self.get_ctor_kind(child_index);
+ let ctor_res = Res::Def(
+ DefKind::Ctor(CtorOf::Struct, ctor_kind),
+ ctor_def_id,
+ );
+ let vis = self.get_visibility(ctor_def_id.index);
+ callback(def::Export { res: ctor_res, vis, ident, span });
+ }
+ }
+ DefKind::Variant => {
+                            // Braced variants, unlike structs, generate unusable names in
+                            // the value namespace; they are reserved for possible future use.
+                            // It's ok to use the variant's id as a ctor id since an
+                            // error will be reported on any use of such a resolution anyway.
+ let ctor_def_id = self.get_ctor_def_id(child_index).unwrap_or(def_id);
+ let ctor_kind = self.get_ctor_kind(child_index);
+ let ctor_res = Res::Def(
+ DefKind::Ctor(CtorOf::Variant, ctor_kind),
+ ctor_def_id,
+ );
+ let mut vis = self.get_visibility(ctor_def_id.index);
+ if ctor_def_id == def_id && vis == ty::Visibility::Public {
+                                // For non-exhaustive variants, lower the constructor's
+                                // visibility to within the crate. We only need this for
+                                // fictive constructors; for other constructors the correct
+                                // visibilities were already encoded in metadata.
+ let attrs = self.get_item_attrs(def_id.index, sess);
+ if attr::contains_name(&attrs, sym::non_exhaustive) {
+ let crate_def_id = self.local_def_id(CRATE_DEF_INDEX);
+ vis = ty::Visibility::Restricted(crate_def_id);
+ }
+ }
+ callback(def::Export { res: ctor_res, ident, vis, span });
+ }
+ _ => {}
+ }
+ }
+ }
+ }
+
+ if let EntryKind::Mod(data) = kind {
+ for exp in data.decode((self, sess)).reexports.decode((self, sess)) {
+ match exp.res {
+ Res::Def(DefKind::Macro(..), _) => {}
+ _ if macros_only => continue,
+ _ => {}
+ }
+ callback(exp);
+ }
+ }
+ }
+
+ fn is_item_mir_available(&self, id: DefIndex) -> bool {
+ !self.is_proc_macro(id) &&
+ self.root.per_def.mir.get(self, id).is_some()
+ }
+
+ fn get_optimized_mir(&self, tcx: TyCtxt<'tcx>, id: DefIndex) -> Body<'tcx> {
+ self.root.per_def.mir.get(self, id)
+ .filter(|_| !self.is_proc_macro(id))
+ .unwrap_or_else(|| {
+ bug!("get_optimized_mir: missing MIR for `{:?}`", self.local_def_id(id))
+ })
+ .decode((self, tcx))
+ }
+
+ fn get_promoted_mir(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ id: DefIndex,
+ ) -> IndexVec<Promoted, Body<'tcx>> {
+ self.root.per_def.promoted_mir.get(self, id)
+ .filter(|_| !self.is_proc_macro(id))
+ .unwrap_or_else(|| {
+ bug!("get_promoted_mir: missing MIR for `{:?}`", self.local_def_id(id))
+ })
+ .decode((self, tcx))
+ }
+
+ fn mir_const_qualif(&self, id: DefIndex) -> u8 {
+ match self.kind(id) {
+ EntryKind::Const(qualif, _) |
+ EntryKind::AssocConst(AssocContainer::ImplDefault, qualif, _) |
+ EntryKind::AssocConst(AssocContainer::ImplFinal, qualif, _) => {
+ qualif.mir
+ }
+ _ => bug!(),
+ }
+ }
+
+ fn get_associated_item(&self, id: DefIndex) -> ty::AssocItem {
+ let def_key = self.def_key(id);
+ let parent = self.local_def_id(def_key.parent.unwrap());
+ let name = def_key.disambiguated_data.data.get_opt_name().unwrap();
+
+ let (kind, container, has_self) = match self.kind(id) {
+ EntryKind::AssocConst(container, _, _) => {
+ (ty::AssocKind::Const, container, false)
+ }
+ EntryKind::Method(data) => {
+ let data = data.decode(self);
+ (ty::AssocKind::Method, data.container, data.has_self)
+ }
+ EntryKind::AssocType(container) => {
+ (ty::AssocKind::Type, container, false)
+ }
+ EntryKind::AssocOpaqueTy(container) => {
+ (ty::AssocKind::OpaqueTy, container, false)
+ }
+ _ => bug!("cannot get associated-item of `{:?}`", def_key)
+ };
+
+ ty::AssocItem {
+ ident: Ident::with_dummy_span(name),
+ kind,
+ vis: self.get_visibility(id),
+ defaultness: container.defaultness(),
+ def_id: self.local_def_id(id),
+ container: container.with_def_id(parent),
+ method_has_self_argument: has_self
+ }
+ }
+
+ fn get_item_variances(&self, id: DefIndex) -> Vec<ty::Variance> {
+ self.root.per_def.variances.get(self, id).unwrap_or(Lazy::empty())
+ .decode(self).collect()
+ }
+
+ fn get_ctor_kind(&self, node_id: DefIndex) -> CtorKind {
+ match self.kind(node_id) {
+ EntryKind::Struct(data, _) |
+ EntryKind::Union(data, _) |
+ EntryKind::Variant(data) => data.decode(self).ctor_kind,
+ _ => CtorKind::Fictive,
+ }
+ }
+
+ fn get_ctor_def_id(&self, node_id: DefIndex) -> Option<DefId> {
+ match self.kind(node_id) {
+ EntryKind::Struct(data, _) => {
+ data.decode(self).ctor.map(|index| self.local_def_id(index))
+ }
+ EntryKind::Variant(data) => {
+ data.decode(self).ctor.map(|index| self.local_def_id(index))
+ }
+ _ => None,
+ }
+ }
+
+ fn get_item_attrs(&self, node_id: DefIndex, sess: &Session) -> Lrc<[ast::Attribute]> {
+        // The attributes for a tuple struct/variant are attached to the definition, not the ctor;
+        // we assume that someone passing in a tuple struct ctor actually wants to
+        // look at the definition.
+ let def_key = self.def_key(node_id);
+ let item_id = if def_key.disambiguated_data.data == DefPathData::Ctor {
+ def_key.parent.unwrap()
+ } else {
+ node_id
+ };
+
+ Lrc::from(self.root.per_def.attributes.get(self, item_id).unwrap_or(Lazy::empty())
+ .decode((self, sess))
+ .collect::<Vec<_>>())
+ }
+
+ fn get_struct_field_names(
+ &self,
+ id: DefIndex,
+ sess: &Session,
+ ) -> Vec<Spanned<ast::Name>> {
+ self.root.per_def.children.get(self, id).unwrap_or(Lazy::empty())
+ .decode(self)
+ .map(|index| respan(self.get_span(index, sess), self.item_name(index)))
+ .collect()
+ }
+
+ // Translate a DefId from the current compilation environment to a DefId
+ // for an external crate.
+ fn reverse_translate_def_id(&self, did: DefId) -> Option<DefId> {
+ for (local, &global) in self.cnum_map.iter_enumerated() {
+ if global == did.krate {
+ return Some(DefId {
+ krate: local,
+ index: did.index,
+ });
+ }
+ }
+
+ None
+ }
+
+ fn get_inherent_implementations_for_type(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ id: DefIndex,
+ ) -> &'tcx [DefId] {
+ tcx.arena.alloc_from_iter(
+ self.root.per_def.inherent_impls.get(self, id).unwrap_or(Lazy::empty())
+ .decode(self)
+ .map(|index| self.local_def_id(index))
+ )
+ }
+
+ fn get_implementations_for_trait(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ filter: Option<DefId>,
+ ) -> &'tcx [DefId] {
+ if self.is_proc_macro_crate() {
+            // Proc-macro crates export no trait impls.
+ return &[]
+ }
+
+ // Do a reverse lookup beforehand to avoid touching the crate_num
+ // hash map in the loop below.
+ let filter = match filter.map(|def_id| self.reverse_translate_def_id(def_id)) {
+ Some(Some(def_id)) => Some((def_id.krate.as_u32(), def_id.index)),
+ Some(None) => return &[],
+ None => None,
+ };
+
+ if let Some(filter) = filter {
+ if let Some(impls) = self.trait_impls.get(&filter) {
+ tcx.arena.alloc_from_iter(impls.decode(self).map(|idx| self.local_def_id(idx)))
+ } else {
+ &[]
+ }
+ } else {
+ tcx.arena.alloc_from_iter(self.trait_impls.values().flat_map(|impls| {
+ impls.decode(self).map(|idx| self.local_def_id(idx))
+ }))
+ }
+ }
+
+ fn get_trait_of_item(&self, id: DefIndex) -> Option<DefId> {
+ let def_key = self.def_key(id);
+ match def_key.disambiguated_data.data {
+ DefPathData::TypeNs(..) | DefPathData::ValueNs(..) => (),
+ // Not an associated item
+ _ => return None,
+ }
+ def_key.parent.and_then(|parent_index| {
+ match self.kind(parent_index) {
+ EntryKind::Trait(_) |
+ EntryKind::TraitAlias => Some(self.local_def_id(parent_index)),
+ _ => None,
+ }
+ })
+ }
+
+ fn get_native_libraries(&self, sess: &Session) -> Vec<NativeLibrary> {
+ if self.is_proc_macro_crate() {
+ // Proc macro crates do not have any *target* native libraries.
+ vec![]
+ } else {
+ self.root.native_libraries.decode((self, sess)).collect()
+ }
+ }
+
+ fn get_foreign_modules(&self, tcx: TyCtxt<'tcx>) -> &'tcx [ForeignModule] {
+ if self.is_proc_macro_crate() {
+ // Proc macro crates do not have any *target* foreign modules.
+ &[]
+ } else {
+ tcx.arena.alloc_from_iter(self.root.foreign_modules.decode((self, tcx.sess)))
+ }
+ }
+
+ fn get_dylib_dependency_formats(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ ) -> &'tcx [(CrateNum, LinkagePreference)] {
+ tcx.arena.alloc_from_iter(self.root
+ .dylib_dependency_formats
+ .decode(self)
+ .enumerate()
+ .flat_map(|(i, link)| {
+ let cnum = CrateNum::new(i + 1);
+ link.map(|link| (self.cnum_map[cnum], link))
+ }))
+ }
+
+ fn get_missing_lang_items(&self, tcx: TyCtxt<'tcx>) -> &'tcx [lang_items::LangItem] {
+ if self.is_proc_macro_crate() {
+ // Proc macro crates do not depend on any target weak lang-items.
+ &[]
+ } else {
+ tcx.arena.alloc_from_iter(self.root
+ .lang_items_missing
+ .decode(self))
+ }
+ }
+
+ fn get_fn_param_names(&self, id: DefIndex) -> Vec<ast::Name> {
+ let param_names = match self.kind(id) {
+ EntryKind::Fn(data) |
+ EntryKind::ForeignFn(data) => data.decode(self).param_names,
+ EntryKind::Method(data) => data.decode(self).fn_data.param_names,
+ _ => Lazy::empty(),
+ };
+ param_names.decode(self).collect()
+ }
+
+ fn exported_symbols(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ ) -> Vec<(ExportedSymbol<'tcx>, SymbolExportLevel)> {
+ if self.is_proc_macro_crate() {
+            // If this crate is a custom derive crate, then we're not even going to
+            // link it in, so we skip exporting its symbols.
+ vec![]
+ } else {
+ self.root.exported_symbols.decode((self, tcx)).collect()
+ }
+ }
+
+ fn get_rendered_const(&self, id: DefIndex) -> String {
+ match self.kind(id) {
+ EntryKind::Const(_, data) |
+ EntryKind::AssocConst(_, _, data) => data.decode(self).0,
+ _ => bug!(),
+ }
+ }
+
+ fn get_macro(&self, id: DefIndex) -> MacroDef {
+ match self.kind(id) {
+ EntryKind::MacroDef(macro_def) => macro_def.decode(self),
+ _ => bug!(),
+ }
+ }
+
+ fn is_const_fn_raw(&self, id: DefIndex) -> bool {
+ let constness = match self.kind(id) {
+ EntryKind::Method(data) => data.decode(self).fn_data.constness,
+ EntryKind::Fn(data) => data.decode(self).constness,
+ EntryKind::Variant(..) | EntryKind::Struct(..) => hir::Constness::Const,
+ _ => hir::Constness::NotConst,
+ };
+ constness == hir::Constness::Const
+ }
+
+ fn asyncness(&self, id: DefIndex) -> hir::IsAsync {
+ match self.kind(id) {
+ EntryKind::Fn(data) => data.decode(self).asyncness,
+ EntryKind::Method(data) => data.decode(self).fn_data.asyncness,
+ EntryKind::ForeignFn(data) => data.decode(self).asyncness,
+ _ => bug!("asyncness: expected function kind"),
+ }
+ }
+
+ fn is_foreign_item(&self, id: DefIndex) -> bool {
+ match self.kind(id) {
+ EntryKind::ForeignImmStatic |
+ EntryKind::ForeignMutStatic |
+ EntryKind::ForeignFn(_) => true,
+ _ => false,
+ }
+ }
+
+ fn static_mutability(&self, id: DefIndex) -> Option<hir::Mutability> {
+ match self.kind(id) {
+ EntryKind::ImmStatic |
+ EntryKind::ForeignImmStatic => Some(hir::MutImmutable),
+ EntryKind::MutStatic |
+ EntryKind::ForeignMutStatic => Some(hir::MutMutable),
+ _ => None,
+ }
+ }
+
+ fn fn_sig(&self, id: DefIndex, tcx: TyCtxt<'tcx>) -> ty::PolyFnSig<'tcx> {
+ self.root.per_def.fn_sig.get(self, id).unwrap().decode((self, tcx))
+ }
+
+ #[inline]
+ fn def_key(&self, index: DefIndex) -> DefKey {
+ let mut key = self.def_path_table.def_key(index);
+ if self.is_proc_macro(index) {
+ let name = self.raw_proc_macro(index).name();
+ key.disambiguated_data.data = DefPathData::MacroNs(Symbol::intern(name));
+ }
+ key
+ }
+
+    // Returns the path leading to the item with this `id`.
+ fn def_path(&self, id: DefIndex) -> DefPath {
+ debug!("def_path(cnum={:?}, id={:?})", self.cnum, id);
+ DefPath::make(self.cnum, id, |parent| self.def_key(parent))
+ }
+
+ #[inline]
+ fn def_path_hash(&self, index: DefIndex) -> DefPathHash {
+ self.def_path_table.def_path_hash(index)
+ }
+
+ /// Imports the source_map from an external crate into the source_map of the crate
+ /// currently being compiled (the "local crate").
+ ///
+    /// The import algorithm works analogously to how AST items are inlined from an
+    /// external crate's metadata:
+    /// For every SourceFile in the external source_map, an 'inline' copy is created in the
+    /// local source_map. The correspondence relation between external and local
+    /// SourceFiles is recorded in the `ImportedSourceFile` objects returned from this
+    /// function. When an item from an external crate is later inlined into this
+    /// crate, this correspondence information is used to translate the span
+    /// information of the inlined item so that it refers to the correct positions in
+    /// the local source_map (see `<decoder::DecodeContext as SpecializedDecoder<Span>>`).
+ ///
+ /// The import algorithm in the function below will reuse SourceFiles already
+ /// existing in the local source_map. For example, even if the SourceFile of some
+ /// source file of libstd gets imported many times, there will only ever be
+ /// one SourceFile object for the corresponding file in the local source_map.
+ ///
+ /// Note that imported SourceFiles do not actually contain the source code of the
+ /// file they represent, just information about length, line breaks, and
+ /// multibyte characters. This information is enough to generate valid debuginfo
+ /// for items inlined from other crates.
+ ///
+ /// Proc macro crates don't currently export spans, so this function does not have
+ /// to work for them.
+ fn imported_source_files(
+ &'a self,
+ local_source_map: &source_map::SourceMap,
+ ) -> &[cstore::ImportedSourceFile] {
+ self.source_map_import_info.init_locking(|| {
+ let external_source_map = self.root.source_map.decode(self);
+
+ external_source_map.map(|source_file_to_import| {
+ // We can't reuse an existing SourceFile, so allocate a new one
+ // containing the information we need.
+ let syntax_pos::SourceFile { name,
+ name_was_remapped,
+ src_hash,
+ start_pos,
+ end_pos,
+ mut lines,
+ mut multibyte_chars,
+ mut non_narrow_chars,
+ mut normalized_pos,
+ name_hash,
+ .. } = source_file_to_import;
+
+ let source_length = (end_pos - start_pos).to_usize();
+
+                // Translate line-start positions and multibyte character
+                // positions into a frame of reference local to the file.
+ // `SourceMap::new_imported_source_file()` will then translate those
+ // coordinates to their new global frame of reference when the
+ // offset of the SourceFile is known.
+ for pos in &mut lines {
+ *pos = *pos - start_pos;
+ }
+ for mbc in &mut multibyte_chars {
+ mbc.pos = mbc.pos - start_pos;
+ }
+ for swc in &mut non_narrow_chars {
+ *swc = *swc - start_pos;
+ }
+ for np in &mut normalized_pos {
+ np.pos = np.pos - start_pos;
+ }
+
+ let local_version = local_source_map.new_imported_source_file(name,
+ name_was_remapped,
+ self.cnum.as_u32(),
+ src_hash,
+ name_hash,
+ source_length,
+ lines,
+ multibyte_chars,
+ non_narrow_chars,
+ normalized_pos);
+            debug!("CrateMetadata::imported_source_files alloc \
+ source_file {:?} original (start_pos {:?} end_pos {:?}) \
+ translated (start_pos {:?} end_pos {:?})",
+ local_version.name, start_pos, end_pos,
+ local_version.start_pos, local_version.end_pos);
+
+ cstore::ImportedSourceFile {
+ original_start_pos: start_pos,
+ original_end_pos: end_pos,
+ translated_source_file: local_version,
+ }
+ }).collect()
+ })
+ }
+
+    /// Get the `DepNodeIndex` corresponding to this crate. The result of this
+ /// method is cached in the `dep_node_index` field.
+ fn get_crate_dep_node_index(&self, tcx: TyCtxt<'tcx>) -> DepNodeIndex {
+ let mut dep_node_index = self.dep_node_index.load();
+
+ if unlikely!(dep_node_index == DepNodeIndex::INVALID) {
+ // We have not cached the DepNodeIndex for this upstream crate yet,
+ // so use the dep-graph to find it out and cache it.
+ // Note that multiple threads can enter this block concurrently.
+ // That is fine because the DepNodeIndex remains constant
+ // throughout the whole compilation session, and multiple stores
+ // would always write the same value.
+
+ let def_path_hash = self.def_path_hash(CRATE_DEF_INDEX);
+ let dep_node = def_path_hash.to_dep_node(DepKind::CrateMetadata);
+
+ dep_node_index = tcx.dep_graph.dep_node_index_of(&dep_node);
+ assert!(dep_node_index != DepNodeIndex::INVALID);
+ self.dep_node_index.store(dep_node_index);
+ }
+
+ dep_node_index
+ }
+}
+
+// Cannot be implemented on `ProcMacro`, as libproc_macro
+// does not depend on libsyntax.
+fn macro_kind(raw: &ProcMacro) -> MacroKind {
+ match raw {
+ ProcMacro::CustomDerive { .. } => MacroKind::Derive,
+ ProcMacro::Attr { .. } => MacroKind::Attr,
+ ProcMacro::Bang { .. } => MacroKind::Bang
+ }
+}
--- /dev/null
+use crate::cstore::{self, LoadedMacro};
+use crate::link_args;
+use crate::native_libs;
+use crate::foreign_modules;
+use crate::rmeta::{self, encoder};
+
+use rustc::ty::query::QueryConfig;
+use rustc::middle::cstore::{CrateSource, CrateStore, DepKind, EncodedMetadata, NativeLibraryKind};
+use rustc::middle::exported_symbols::ExportedSymbol;
+use rustc::middle::stability::DeprecationEntry;
+use rustc::hir::def;
+use rustc::hir;
+use rustc::session::{CrateDisambiguator, Session};
+use rustc::ty::{self, TyCtxt};
+use rustc::ty::query::Providers;
+use rustc::hir::def_id::{CrateNum, DefId, LOCAL_CRATE, CRATE_DEF_INDEX};
+use rustc::hir::map::{DefKey, DefPath, DefPathHash};
+use rustc::hir::map::definitions::DefPathTable;
+use rustc::util::nodemap::DefIdMap;
+use rustc_data_structures::svh::Svh;
+
+use smallvec::SmallVec;
+use std::any::Any;
+use rustc_data_structures::sync::Lrc;
+use std::sync::Arc;
+
+use syntax::ast;
+use syntax::attr;
+use syntax::source_map;
+use syntax::parse::source_file_to_stream;
+use syntax::parse::parser::emit_unclosed_delims;
+use syntax::source_map::Spanned;
+use syntax::symbol::Symbol;
+use syntax_pos::{Span, FileName};
+use rustc_index::bit_set::BitSet;
+
+macro_rules! provide {
+ (<$lt:tt> $tcx:ident, $def_id:ident, $other:ident, $cdata:ident,
+ $($name:ident => $compute:block)*) => {
+ pub fn provide_extern<$lt>(providers: &mut Providers<$lt>) {
+ // HACK(eddyb) `$lt: $lt` forces `$lt` to be early-bound, which
+ // allows the associated type in the return type to be normalized.
+ $(fn $name<$lt: $lt, T: IntoArgs>(
+ $tcx: TyCtxt<$lt>,
+ def_id_arg: T,
+ ) -> <ty::queries::$name<$lt> as QueryConfig<$lt>>::Value {
+ let _prof_timer =
+ $tcx.prof.generic_activity("metadata_decode_entry");
+
+ #[allow(unused_variables)]
+ let ($def_id, $other) = def_id_arg.into_args();
+ assert!(!$def_id.is_local());
+
+ let $cdata = $tcx.crate_data_as_any($def_id.krate);
+ let $cdata = $cdata.downcast_ref::<cstore::CrateMetadata>()
+ .expect("CrateStore created data is not a CrateMetadata");
+
+ if $tcx.dep_graph.is_fully_enabled() {
+ let crate_dep_node_index = $cdata.get_crate_dep_node_index($tcx);
+ $tcx.dep_graph.read_index(crate_dep_node_index);
+ }
+
+ $compute
+ })*
+
+ *providers = Providers {
+ $($name,)*
+ ..*providers
+ };
+ }
+ }
+}
+
+// Small trait to work around queries with different signatures all being
+// defined via the macro above.
+trait IntoArgs {
+ fn into_args(self) -> (DefId, DefId);
+}
+
+impl IntoArgs for DefId {
+ fn into_args(self) -> (DefId, DefId) { (self, self) }
+}
+
+impl IntoArgs for CrateNum {
+ fn into_args(self) -> (DefId, DefId) { (self.as_def_id(), self.as_def_id()) }
+}
+
+impl IntoArgs for (CrateNum, DefId) {
+ fn into_args(self) -> (DefId, DefId) { (self.0.as_def_id(), self.1) }
+}
+
+provide! { <'tcx> tcx, def_id, other, cdata,
+ type_of => { cdata.get_type(def_id.index, tcx) }
+ generics_of => {
+ tcx.arena.alloc(cdata.get_generics(def_id.index, tcx.sess))
+ }
+ explicit_predicates_of => { cdata.get_explicit_predicates(def_id.index, tcx) }
+ inferred_outlives_of => { cdata.get_inferred_outlives(def_id.index, tcx) }
+ super_predicates_of => { cdata.get_super_predicates(def_id.index, tcx) }
+ trait_def => {
+ tcx.arena.alloc(cdata.get_trait_def(def_id.index, tcx.sess))
+ }
+ adt_def => { cdata.get_adt_def(def_id.index, tcx) }
+ adt_destructor => {
+ let _ = cdata;
+ tcx.calculate_dtor(def_id, &mut |_,_| Ok(()))
+ }
+ variances_of => { tcx.arena.alloc_from_iter(cdata.get_item_variances(def_id.index)) }
+ associated_item_def_ids => {
+ let mut result = SmallVec::<[_; 8]>::new();
+ cdata.each_child_of_item(def_id.index,
+ |child| result.push(child.res.def_id()), tcx.sess);
+ tcx.arena.alloc_slice(&result)
+ }
+ associated_item => { cdata.get_associated_item(def_id.index) }
+ impl_trait_ref => { cdata.get_impl_trait(def_id.index, tcx) }
+ impl_polarity => { cdata.get_impl_polarity(def_id.index) }
+ coerce_unsized_info => {
+ cdata.get_coerce_unsized_info(def_id.index).unwrap_or_else(|| {
+ bug!("coerce_unsized_info: `{:?}` is missing its info", def_id);
+ })
+ }
+ optimized_mir => { tcx.arena.alloc(cdata.get_optimized_mir(tcx, def_id.index)) }
+ promoted_mir => { tcx.arena.alloc(cdata.get_promoted_mir(tcx, def_id.index)) }
+ mir_const_qualif => {
+ (cdata.mir_const_qualif(def_id.index), tcx.arena.alloc(BitSet::new_empty(0)))
+ }
+ fn_sig => { cdata.fn_sig(def_id.index, tcx) }
+ inherent_impls => { cdata.get_inherent_implementations_for_type(tcx, def_id.index) }
+ is_const_fn_raw => { cdata.is_const_fn_raw(def_id.index) }
+ asyncness => { cdata.asyncness(def_id.index) }
+ is_foreign_item => { cdata.is_foreign_item(def_id.index) }
+ static_mutability => { cdata.static_mutability(def_id.index) }
+ def_kind => { cdata.def_kind(def_id.index) }
+ def_span => { cdata.get_span(def_id.index, &tcx.sess) }
+ lookup_stability => {
+ cdata.get_stability(def_id.index).map(|s| tcx.intern_stability(s))
+ }
+ lookup_deprecation_entry => {
+ cdata.get_deprecation(def_id.index).map(DeprecationEntry::external)
+ }
+ item_attrs => { cdata.get_item_attrs(def_id.index, tcx.sess) }
+    // FIXME(#38501) We've skipped a `read` on the `HirBody` of
+    // a `fn` when encoding, so the dep-tracking wouldn't work.
+    // This is only used by rustdoc anyway, which should never have
+    // incremental recompilation enabled.
+ fn_arg_names => { cdata.get_fn_param_names(def_id.index) }
+ rendered_const => { cdata.get_rendered_const(def_id.index) }
+ impl_parent => { cdata.get_parent_impl(def_id.index) }
+ trait_of_item => { cdata.get_trait_of_item(def_id.index) }
+ is_mir_available => { cdata.is_item_mir_available(def_id.index) }
+
+ dylib_dependency_formats => { cdata.get_dylib_dependency_formats(tcx) }
+ is_panic_runtime => { cdata.root.panic_runtime }
+ is_compiler_builtins => { cdata.root.compiler_builtins }
+ has_global_allocator => { cdata.root.has_global_allocator }
+ has_panic_handler => { cdata.root.has_panic_handler }
+ is_sanitizer_runtime => { cdata.root.sanitizer_runtime }
+ is_profiler_runtime => { cdata.root.profiler_runtime }
+ panic_strategy => { cdata.root.panic_strategy }
+ extern_crate => {
+ let r = *cdata.extern_crate.lock();
+ r.map(|c| &*tcx.arena.alloc(c))
+ }
+ is_no_builtins => { cdata.root.no_builtins }
+ symbol_mangling_version => { cdata.root.symbol_mangling_version }
+ impl_defaultness => { cdata.get_impl_defaultness(def_id.index) }
+ reachable_non_generics => {
+ let reachable_non_generics = tcx
+ .exported_symbols(cdata.cnum)
+ .iter()
+ .filter_map(|&(exported_symbol, export_level)| {
+                if let ExportedSymbol::NonGeneric(def_id) = exported_symbol {
+                    Some((def_id, export_level))
+                } else {
+                    None
+                }
+ })
+ .collect();
+
+ tcx.arena.alloc(reachable_non_generics)
+ }
+ native_libraries => { Lrc::new(cdata.get_native_libraries(tcx.sess)) }
+ foreign_modules => { cdata.get_foreign_modules(tcx) }
+ plugin_registrar_fn => {
+ cdata.root.plugin_registrar_fn.map(|index| {
+ DefId { krate: def_id.krate, index }
+ })
+ }
+ proc_macro_decls_static => {
+ cdata.root.proc_macro_decls_static.map(|index| {
+ DefId { krate: def_id.krate, index }
+ })
+ }
+ crate_disambiguator => { cdata.root.disambiguator }
+ crate_hash => { cdata.root.hash }
+ original_crate_name => { cdata.root.name }
+
+ extra_filename => { cdata.root.extra_filename.clone() }
+
+ implementations_of_trait => {
+ cdata.get_implementations_for_trait(tcx, Some(other))
+ }
+
+ all_trait_implementations => {
+ cdata.get_implementations_for_trait(tcx, None)
+ }
+
+ visibility => { cdata.get_visibility(def_id.index) }
+ dep_kind => {
+ let r = *cdata.dep_kind.lock();
+ r
+ }
+ crate_name => { cdata.root.name }
+ item_children => {
+ let mut result = SmallVec::<[_; 8]>::new();
+ cdata.each_child_of_item(def_id.index, |child| result.push(child), tcx.sess);
+ tcx.arena.alloc_slice(&result)
+ }
+ defined_lib_features => { cdata.get_lib_features(tcx) }
+ defined_lang_items => { cdata.get_lang_items(tcx) }
+ diagnostic_items => { cdata.get_diagnostic_items(tcx) }
+ missing_lang_items => { cdata.get_missing_lang_items(tcx) }
+
+ missing_extern_crate_item => {
+ let r = match *cdata.extern_crate.borrow() {
+ Some(extern_crate) if !extern_crate.is_direct() => true,
+ _ => false,
+ };
+ r
+ }
+
+ used_crate_source => { Lrc::new(cdata.source.clone()) }
+
+ exported_symbols => {
+ let syms = cdata.exported_symbols(tcx);
+
+ // FIXME rust-lang/rust#64319, rust-lang/rust#64872: We want
+ // to block export of generics from dylibs, but we must fix
+ // rust-lang/rust#65890 before we can do that robustly.
+
+ Arc::new(syms)
+ }
+}
+
+pub fn provide(providers: &mut Providers<'_>) {
+    // FIXME(#44234) - almost all of these queries have no sub-queries and
+    // therefore no actual inputs; they're just reading tables calculated in
+    // resolve! Does this work? Unsure! That's what the issue is about.
+ *providers = Providers {
+ is_dllimport_foreign_item: |tcx, id| {
+ match tcx.native_library_kind(id) {
+ Some(NativeLibraryKind::NativeUnknown) |
+ Some(NativeLibraryKind::NativeRawDylib) => true,
+ _ => false,
+ }
+ },
+ is_statically_included_foreign_item: |tcx, id| {
+ match tcx.native_library_kind(id) {
+ Some(NativeLibraryKind::NativeStatic) |
+ Some(NativeLibraryKind::NativeStaticNobundle) => true,
+ _ => false,
+ }
+ },
+ native_library_kind: |tcx, id| {
+ tcx.native_libraries(id.krate)
+ .iter()
+ .filter(|lib| native_libs::relevant_lib(&tcx.sess, lib))
+ .find(|lib| {
+ let fm_id = match lib.foreign_module {
+ Some(id) => id,
+ None => return false,
+ };
+ tcx.foreign_modules(id.krate)
+ .iter()
+ .find(|m| m.def_id == fm_id)
+ .expect("failed to find foreign module")
+ .foreign_items
+ .contains(&id)
+ })
+ .map(|l| l.kind)
+ },
+ native_libraries: |tcx, cnum| {
+ assert_eq!(cnum, LOCAL_CRATE);
+ Lrc::new(native_libs::collect(tcx))
+ },
+ foreign_modules: |tcx, cnum| {
+ assert_eq!(cnum, LOCAL_CRATE);
+ &tcx.arena.alloc(foreign_modules::collect(tcx))[..]
+ },
+ link_args: |tcx, cnum| {
+ assert_eq!(cnum, LOCAL_CRATE);
+ Lrc::new(link_args::collect(tcx))
+ },
+
+ // Returns a map from a sufficiently visible external item (i.e., an
+ // external item that is visible from at least one local module) to a
+ // sufficiently visible parent (considering modules that re-export the
+ // external item to be parents).
+ visible_parent_map: |tcx, cnum| {
+ use std::collections::vec_deque::VecDeque;
+ use std::collections::hash_map::Entry;
+
+ assert_eq!(cnum, LOCAL_CRATE);
+ let mut visible_parent_map: DefIdMap<DefId> = Default::default();
+
+ // Issue 46112: We want the map to prefer the shortest
+ // paths when reporting the path to an item. Therefore we
+ // build up the map via a breadth-first search (BFS),
+ // which naturally yields minimal-length paths.
+ //
+ // Note that it needs to be a BFS over the whole forest of
+ // crates, not just each individual crate; otherwise you
+ // only get paths that are locally minimal with respect to
+ // whatever crate we happened to encounter first in this
+ // traversal, but not globally minimal across all crates.
+ let bfs_queue = &mut VecDeque::new();
+
+            // Preferring shortest paths alone does not guarantee a
+            // deterministic result, so sort by crate num to avoid
+            // hashtable iteration non-determinism. This only makes
+            // things as deterministic as crate-num assignment is,
+            // which is to say, it's not deterministic in general. But
+            // we believe that libstd is consistently assigned crate
+            // num 1, so it should be enough to resolve #46112.
+ let mut crates: Vec<CrateNum> = (*tcx.crates()).to_owned();
+ crates.sort();
+
+ for &cnum in crates.iter() {
+ // Ignore crates without a corresponding local `extern crate` item.
+ if tcx.missing_extern_crate_item(cnum) {
+ continue
+ }
+
+ bfs_queue.push_back(DefId {
+ krate: cnum,
+ index: CRATE_DEF_INDEX
+ });
+ }
+
+ // (restrict scope of mutable-borrow of `visible_parent_map`)
+ {
+ let visible_parent_map = &mut visible_parent_map;
+ let mut add_child = |bfs_queue: &mut VecDeque<_>,
+ child: &def::Export<hir::HirId>,
+ parent: DefId| {
+ if child.vis != ty::Visibility::Public {
+ return;
+ }
+
+ if let Some(child) = child.res.opt_def_id() {
+ match visible_parent_map.entry(child) {
+ Entry::Occupied(mut entry) => {
+ // If `child` is defined in crate `cnum`, ensure
+ // that it is mapped to a parent in `cnum`.
+ if child.krate == cnum && entry.get().krate != cnum {
+ entry.insert(parent);
+ }
+ }
+ Entry::Vacant(entry) => {
+ entry.insert(parent);
+ bfs_queue.push_back(child);
+ }
+ }
+ }
+ };
+
+ while let Some(def) = bfs_queue.pop_front() {
+ for child in tcx.item_children(def).iter() {
+ add_child(bfs_queue, child, def);
+ }
+ }
+ }
+
+ tcx.arena.alloc(visible_parent_map)
+ },
+
+ dependency_formats: |tcx, cnum| {
+ assert_eq!(cnum, LOCAL_CRATE);
+ Lrc::new(crate::dependency_format::calculate(tcx))
+ },
+
+ ..*providers
+ };
+}
+
+impl cstore::CStore {
+ pub fn export_macros_untracked(&self, cnum: CrateNum) {
+ let data = self.get_crate_data(cnum);
+ let mut dep_kind = data.dep_kind.lock();
+ if *dep_kind == DepKind::UnexportedMacrosOnly {
+ *dep_kind = DepKind::MacrosOnly;
+ }
+ }
+
+ pub fn struct_field_names_untracked(&self, def: DefId, sess: &Session) -> Vec<Spanned<Symbol>> {
+ self.get_crate_data(def.krate).get_struct_field_names(def.index, sess)
+ }
+
+ pub fn item_children_untracked(
+ &self,
+ def_id: DefId,
+ sess: &Session
+ ) -> Vec<def::Export<hir::HirId>> {
+ let mut result = vec![];
+ self.get_crate_data(def_id.krate)
+ .each_child_of_item(def_id.index, |child| result.push(child), sess);
+ result
+ }
+
+ pub fn load_macro_untracked(&self, id: DefId, sess: &Session) -> LoadedMacro {
+ let _prof_timer = sess.prof.generic_activity("metadata_load_macro");
+
+ let data = self.get_crate_data(id.krate);
+ if data.is_proc_macro_crate() {
+ return LoadedMacro::ProcMacro(data.load_proc_macro(id.index, sess));
+ }
+
+ let def = data.get_macro(id.index);
+ let macro_full_name = data.def_path(id.index).to_string_friendly(|_| data.root.name);
+ let source_name = FileName::Macros(macro_full_name);
+
+ let source_file = sess.parse_sess.source_map().new_source_file(source_name, def.body);
+ let local_span = Span::with_root_ctxt(source_file.start_pos, source_file.end_pos);
+ let (body, mut errors) = source_file_to_stream(&sess.parse_sess, source_file, None);
+ emit_unclosed_delims(&mut errors, &sess.parse_sess);
+
+ // Mark the attrs as used
+ let attrs = data.get_item_attrs(id.index, sess);
+ for attr in attrs.iter() {
+ attr::mark_used(attr);
+ }
+
+ let name = data.def_key(id.index).disambiguated_data.data
+ .get_opt_name().expect("no name in load_macro");
+ sess.imported_macro_spans.borrow_mut()
+ .insert(local_span, (name.to_string(), data.get_span(id.index, sess)));
+
+ LoadedMacro::MacroDef(ast::Item {
+ // FIXME: cross-crate hygiene
+ ident: ast::Ident::with_dummy_span(name),
+ id: ast::DUMMY_NODE_ID,
+ span: local_span,
+ attrs: attrs.iter().cloned().collect(),
+ kind: ast::ItemKind::MacroDef(ast::MacroDef {
+ tokens: body.into(),
+ legacy: def.legacy,
+ }),
+ vis: source_map::respan(local_span.shrink_to_lo(), ast::VisibilityKind::Inherited),
+ tokens: None,
+ }, data.root.edition)
+ }
+
+ pub fn associated_item_cloned_untracked(&self, def: DefId) -> ty::AssocItem {
+ self.get_crate_data(def.krate).get_associated_item(def.index)
+ }
+
+ pub fn crate_source_untracked(&self, cnum: CrateNum) -> CrateSource {
+ self.get_crate_data(cnum).source.clone()
+ }
+}
+
+impl CrateStore for cstore::CStore {
+ fn crate_data_as_any(&self, cnum: CrateNum) -> &dyn Any {
+ self.get_crate_data(cnum)
+ }
+
+ fn item_generics_cloned_untracked(&self, def: DefId, sess: &Session) -> ty::Generics {
+ self.get_crate_data(def.krate).get_generics(def.index, sess)
+ }
+
+    fn crate_name_untracked(&self, cnum: CrateNum) -> Symbol {
+ self.get_crate_data(cnum).root.name
+ }
+
+ fn crate_is_private_dep_untracked(&self, cnum: CrateNum) -> bool {
+ self.get_crate_data(cnum).private_dep
+ }
+
+    fn crate_disambiguator_untracked(&self, cnum: CrateNum) -> CrateDisambiguator {
+ self.get_crate_data(cnum).root.disambiguator
+ }
+
+    fn crate_hash_untracked(&self, cnum: CrateNum) -> Svh {
+ self.get_crate_data(cnum).root.hash
+ }
+
+ fn crate_host_hash_untracked(&self, cnum: CrateNum) -> Option<Svh> {
+ self.get_crate_data(cnum).host_hash
+ }
+
+ /// Returns the `DefKey` for a given `DefId`. This indicates the
+ /// parent `DefId` as well as some idea of what kind of data the
+ /// `DefId` refers to.
+ fn def_key(&self, def: DefId) -> DefKey {
+ self.get_crate_data(def.krate).def_key(def.index)
+ }
+
+ fn def_path(&self, def: DefId) -> DefPath {
+ self.get_crate_data(def.krate).def_path(def.index)
+ }
+
+ fn def_path_hash(&self, def: DefId) -> DefPathHash {
+ self.get_crate_data(def.krate).def_path_hash(def.index)
+ }
+
+ fn def_path_table(&self, cnum: CrateNum) -> &DefPathTable {
+ &self.get_crate_data(cnum).def_path_table
+ }
+
+    fn crates_untracked(&self) -> Vec<CrateNum> {
+ let mut result = vec![];
+ self.iter_crate_data(|cnum, _| result.push(cnum));
+ result
+ }
+
+ fn postorder_cnums_untracked(&self) -> Vec<CrateNum> {
+ self.do_postorder_cnums_untracked()
+ }
+
+ fn encode_metadata(&self, tcx: TyCtxt<'_>) -> EncodedMetadata {
+ encoder::encode_metadata(tcx)
+ }
+
+ fn metadata_encoding_version(&self) -> &[u8]
+ {
+ rmeta::METADATA_HEADER
+ }
+}
--- /dev/null
+use crate::rmeta::*;
+use crate::rmeta::table::{FixedSizeEncoding, PerDefTable};
+
+use rustc::middle::cstore::{LinkagePreference, NativeLibrary,
+ EncodedMetadata, ForeignModule};
+use rustc::hir::def::CtorKind;
+use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefIndex, DefId, LocalDefId, LOCAL_CRATE};
+use rustc::hir::{GenericParamKind, AnonConst};
+use rustc::hir::map::definitions::DefPathTable;
+use rustc_data_structures::fingerprint::Fingerprint;
+use rustc_index::vec::IndexVec;
+use rustc::middle::dependency_format::Linkage;
+use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel,
+ metadata_symbol_name};
+use rustc::middle::lang_items;
+use rustc::mir::{self, interpret};
+use rustc::traits::specialization_graph;
+use rustc::ty::{self, Ty, TyCtxt, SymbolName};
+use rustc::ty::codec::{self as ty_codec, TyEncoder};
+use rustc::ty::layout::VariantIdx;
+
+use rustc::session::config::{self, CrateType};
+use rustc::util::nodemap::FxHashMap;
+
+use rustc_data_structures::stable_hasher::StableHasher;
+use rustc_data_structures::sync::Lrc;
+use rustc_serialize::{Encodable, Encoder, SpecializedEncoder, opaque};
+
+use std::hash::Hash;
+use std::num::NonZeroUsize;
+use std::path::Path;
+use std::u32;
+use syntax::ast;
+use syntax::attr;
+use syntax::expand::is_proc_macro_attr;
+use syntax::source_map::Spanned;
+use syntax::symbol::{kw, sym, Ident, Symbol};
+use syntax_pos::{self, FileName, SourceFile, Span};
+use log::{debug, trace};
+
+use rustc::hir::{self, PatKind};
+use rustc::hir::itemlikevisit::ItemLikeVisitor;
+use rustc::hir::intravisit::{Visitor, NestedVisitorMap};
+use rustc::hir::intravisit;
+
+struct EncodeContext<'tcx> {
+ opaque: opaque::Encoder,
+ tcx: TyCtxt<'tcx>,
+
+ per_def: PerDefTables<'tcx>,
+
+ lazy_state: LazyState,
+ type_shorthands: FxHashMap<Ty<'tcx>, usize>,
+ predicate_shorthands: FxHashMap<ty::Predicate<'tcx>, usize>,
+
+ interpret_allocs: FxHashMap<interpret::AllocId, usize>,
+ interpret_allocs_inverse: Vec<interpret::AllocId>,
+
+ // This is used to speed up Span encoding.
+ source_file_cache: Lrc<SourceFile>,
+}
+
+#[derive(Default)]
+struct PerDefTables<'tcx> {
+ kind: PerDefTable<Lazy<EntryKind<'tcx>>>,
+ visibility: PerDefTable<Lazy<ty::Visibility>>,
+ span: PerDefTable<Lazy<Span>>,
+ attributes: PerDefTable<Lazy<[ast::Attribute]>>,
+ children: PerDefTable<Lazy<[DefIndex]>>,
+ stability: PerDefTable<Lazy<attr::Stability>>,
+ deprecation: PerDefTable<Lazy<attr::Deprecation>>,
+
+ ty: PerDefTable<Lazy<Ty<'tcx>>>,
+ fn_sig: PerDefTable<Lazy<ty::PolyFnSig<'tcx>>>,
+ impl_trait_ref: PerDefTable<Lazy<ty::TraitRef<'tcx>>>,
+ inherent_impls: PerDefTable<Lazy<[DefIndex]>>,
+ variances: PerDefTable<Lazy<[ty::Variance]>>,
+ generics: PerDefTable<Lazy<ty::Generics>>,
+ explicit_predicates: PerDefTable<Lazy<ty::GenericPredicates<'tcx>>>,
+ inferred_outlives: PerDefTable<Lazy<&'tcx [(ty::Predicate<'tcx>, Span)]>>,
+ super_predicates: PerDefTable<Lazy<ty::GenericPredicates<'tcx>>>,
+
+ mir: PerDefTable<Lazy<mir::Body<'tcx>>>,
+ promoted_mir: PerDefTable<Lazy<IndexVec<mir::Promoted, mir::Body<'tcx>>>>,
+}
+
+macro_rules! encoder_methods {
+ ($($name:ident($ty:ty);)*) => {
+ $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
+ self.opaque.$name(value)
+ })*
+ }
+}
+
+impl<'tcx> Encoder for EncodeContext<'tcx> {
+ type Error = <opaque::Encoder as Encoder>::Error;
+
+ fn emit_unit(&mut self) -> Result<(), Self::Error> {
+ Ok(())
+ }
+
+ encoder_methods! {
+ emit_usize(usize);
+ emit_u128(u128);
+ emit_u64(u64);
+ emit_u32(u32);
+ emit_u16(u16);
+ emit_u8(u8);
+
+ emit_isize(isize);
+ emit_i128(i128);
+ emit_i64(i64);
+ emit_i32(i32);
+ emit_i16(i16);
+ emit_i8(i8);
+
+ emit_bool(bool);
+ emit_f64(f64);
+ emit_f32(f32);
+ emit_char(char);
+ emit_str(&str);
+ }
+}
+
+impl<'tcx, T: Encodable> SpecializedEncoder<Lazy<T>> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, lazy: &Lazy<T>) -> Result<(), Self::Error> {
+ self.emit_lazy_distance(*lazy)
+ }
+}
+
+impl<'tcx, T: Encodable> SpecializedEncoder<Lazy<[T]>> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, lazy: &Lazy<[T]>) -> Result<(), Self::Error> {
+ self.emit_usize(lazy.meta)?;
+ if lazy.meta == 0 {
+ return Ok(());
+ }
+ self.emit_lazy_distance(*lazy)
+ }
+}
+
+impl<'tcx, T> SpecializedEncoder<Lazy<PerDefTable<T>>> for EncodeContext<'tcx>
+ where Option<T>: FixedSizeEncoding,
+{
+ fn specialized_encode(&mut self, lazy: &Lazy<PerDefTable<T>>) -> Result<(), Self::Error> {
+ self.emit_usize(lazy.meta)?;
+ self.emit_lazy_distance(*lazy)
+ }
+}
+
+impl<'tcx> SpecializedEncoder<CrateNum> for EncodeContext<'tcx> {
+ #[inline]
+ fn specialized_encode(&mut self, cnum: &CrateNum) -> Result<(), Self::Error> {
+ self.emit_u32(cnum.as_u32())
+ }
+}
+
+impl<'tcx> SpecializedEncoder<DefId> for EncodeContext<'tcx> {
+ #[inline]
+ fn specialized_encode(&mut self, def_id: &DefId) -> Result<(), Self::Error> {
+ let DefId {
+ krate,
+ index,
+ } = *def_id;
+
+ krate.encode(self)?;
+ index.encode(self)
+ }
+}
+
+impl<'tcx> SpecializedEncoder<DefIndex> for EncodeContext<'tcx> {
+ #[inline]
+ fn specialized_encode(&mut self, def_index: &DefIndex) -> Result<(), Self::Error> {
+ self.emit_u32(def_index.as_u32())
+ }
+}
+
+impl<'tcx> SpecializedEncoder<Span> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, span: &Span) -> Result<(), Self::Error> {
+ if span.is_dummy() {
+ return TAG_INVALID_SPAN.encode(self)
+ }
+
+ let span = span.data();
+
+ // The Span infrastructure should make sure that this invariant holds:
+ debug_assert!(span.lo <= span.hi);
+
+ if !self.source_file_cache.contains(span.lo) {
+ let source_map = self.tcx.sess.source_map();
+ let source_file_index = source_map.lookup_source_file_idx(span.lo);
+ self.source_file_cache = source_map.files()[source_file_index].clone();
+ }
+
+ if !self.source_file_cache.contains(span.hi) {
+ // Unfortunately, macro expansion still sometimes generates Spans
+ // that are malformed in this way.
+ return TAG_INVALID_SPAN.encode(self)
+ }
+
+ // HACK(eddyb) there's no way to indicate which crate a Span is coming
+ // from right now, so decoding would fail to find the SourceFile if
+ // it's not local to the crate the Span is found in.
+ if self.source_file_cache.is_imported() {
+ return TAG_INVALID_SPAN.encode(self)
+ }
+
+ TAG_VALID_SPAN.encode(self)?;
+ span.lo.encode(self)?;
+
+ // Encode the length, which is usually less than span.hi and thus profits
+ // more from the variable-length integer encoding that we use.
+ let len = span.hi - span.lo;
+ len.encode(self)
+
+ // Don't encode the expansion context.
+ }
+}
+
+impl SpecializedEncoder<Ident> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, ident: &Ident) -> Result<(), Self::Error> {
+ // FIXME(jseyfried): intercrate hygiene
+ ident.name.encode(self)
+ }
+}
+
+impl<'tcx> SpecializedEncoder<LocalDefId> for EncodeContext<'tcx> {
+ #[inline]
+ fn specialized_encode(&mut self, def_id: &LocalDefId) -> Result<(), Self::Error> {
+ self.specialized_encode(&def_id.to_def_id())
+ }
+}
+
+impl<'tcx> SpecializedEncoder<Ty<'tcx>> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, ty: &Ty<'tcx>) -> Result<(), Self::Error> {
+ ty_codec::encode_with_shorthand(self, ty, |ecx| &mut ecx.type_shorthands)
+ }
+}
+
+impl<'tcx> SpecializedEncoder<interpret::AllocId> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, alloc_id: &interpret::AllocId) -> Result<(), Self::Error> {
+ use std::collections::hash_map::Entry;
+ let index = match self.interpret_allocs.entry(*alloc_id) {
+ Entry::Occupied(e) => *e.get(),
+ Entry::Vacant(e) => {
+ let idx = self.interpret_allocs_inverse.len();
+ self.interpret_allocs_inverse.push(*alloc_id);
+ e.insert(idx);
+ idx
+ },
+ };
+
+ index.encode(self)
+ }
+}
+
+impl<'tcx> SpecializedEncoder<&'tcx [(ty::Predicate<'tcx>, Span)]> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self,
+ predicates: &&'tcx [(ty::Predicate<'tcx>, Span)])
+ -> Result<(), Self::Error> {
+ ty_codec::encode_spanned_predicates(self, predicates, |ecx| &mut ecx.predicate_shorthands)
+ }
+}
+
+impl<'tcx> SpecializedEncoder<Fingerprint> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self, f: &Fingerprint) -> Result<(), Self::Error> {
+ f.encode_opaque(&mut self.opaque)
+ }
+}
+
+impl<'tcx, T: Encodable> SpecializedEncoder<mir::ClearCrossCrate<T>> for EncodeContext<'tcx> {
+ fn specialized_encode(&mut self,
+ _: &mir::ClearCrossCrate<T>)
+ -> Result<(), Self::Error> {
+ Ok(())
+ }
+}
+
+impl<'tcx> TyEncoder for EncodeContext<'tcx> {
+ fn position(&self) -> usize {
+ self.opaque.position()
+ }
+}
+
+/// Helper trait to allow overloading `EncodeContext::lazy` for iterators.
+trait EncodeContentsForLazy<T: ?Sized + LazyMeta> {
+ fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) -> T::Meta;
+}
+
+impl<T: Encodable> EncodeContentsForLazy<T> for &T {
+ fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) {
+ self.encode(ecx).unwrap()
+ }
+}
+
+impl<T: Encodable> EncodeContentsForLazy<T> for T {
+ fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) {
+ self.encode(ecx).unwrap()
+ }
+}
+
+impl<I, T: Encodable> EncodeContentsForLazy<[T]> for I
+ where I: IntoIterator,
+ I::Item: EncodeContentsForLazy<T>,
+{
+ fn encode_contents_for_lazy(self, ecx: &mut EncodeContext<'tcx>) -> usize {
+ self.into_iter().map(|value| value.encode_contents_for_lazy(ecx)).count()
+ }
+}
+
+// Shorthand for `$self.$tables.$table.set($key, $self.lazy($value))`, which would
+// normally need extra variables to avoid errors about multiple mutable borrows.
+macro_rules! record {
+ ($self:ident.$tables:ident.$table:ident[$key:expr] <- $value:expr) => {{
+ {
+ let value = $value;
+ let lazy = $self.lazy(value);
+ $self.$tables.$table.set($key, lazy);
+ }
+ }}
+}
+
+impl<'tcx> EncodeContext<'tcx> {
+ fn emit_lazy_distance<T: ?Sized + LazyMeta>(
+ &mut self,
+ lazy: Lazy<T>,
+ ) -> Result<(), <Self as Encoder>::Error> {
+ let min_end = lazy.position.get() + T::min_size(lazy.meta);
+ let distance = match self.lazy_state {
+ LazyState::NoNode => bug!("emit_lazy_distance: outside of a metadata node"),
+ LazyState::NodeStart(start) => {
+ let start = start.get();
+ assert!(min_end <= start);
+ start - min_end
+ }
+ LazyState::Previous(last_min_end) => {
+ assert!(
+ last_min_end <= lazy.position,
+ "make sure that the calls to `lazy*` \
+ are in the same order as the metadata fields",
+ );
+ lazy.position.get() - last_min_end.get()
+ }
+ };
+ self.lazy_state = LazyState::Previous(NonZeroUsize::new(min_end).unwrap());
+ self.emit_usize(distance)
+ }
+
+ fn lazy<T: ?Sized + LazyMeta>(
+ &mut self,
+ value: impl EncodeContentsForLazy<T>,
+ ) -> Lazy<T> {
+ let pos = NonZeroUsize::new(self.position()).unwrap();
+
+ assert_eq!(self.lazy_state, LazyState::NoNode);
+ self.lazy_state = LazyState::NodeStart(pos);
+ let meta = value.encode_contents_for_lazy(self);
+ self.lazy_state = LazyState::NoNode;
+
+ assert!(pos.get() + <T>::min_size(meta) <= self.position());
+
+ Lazy::from_position_and_meta(pos, meta)
+ }
+
+ fn encode_info_for_items(&mut self) {
+ let krate = self.tcx.hir().krate();
+ let vis = Spanned { span: syntax_pos::DUMMY_SP, node: hir::VisibilityKind::Public };
+ self.encode_info_for_mod(hir::CRATE_HIR_ID, &krate.module, &krate.attrs, &vis);
+ krate.visit_all_item_likes(&mut self.as_deep_visitor());
+ for macro_def in &krate.exported_macros {
+ self.visit_macro_def(macro_def);
+ }
+ }
+
+ fn encode_def_path_table(&mut self) -> Lazy<DefPathTable> {
+ let definitions = self.tcx.hir().definitions();
+ self.lazy(definitions.def_path_table())
+ }
+
+ fn encode_source_map(&mut self) -> Lazy<[syntax_pos::SourceFile]> {
+ let source_map = self.tcx.sess.source_map();
+ let all_source_files = source_map.files();
+
+ let (working_dir, _cwd_remapped) = self.tcx.sess.working_dir.clone();
+
+ let adapted = all_source_files.iter()
+ .filter(|source_file| {
+ // No need to re-export imported source_files, as any downstream
+ // crate will import them from their original source.
+ // FIXME(eddyb) the `Span` encoding should take that into account.
+ !source_file.is_imported()
+ })
+ .map(|source_file| {
+ match source_file.name {
+ // The path of this SourceFile has been modified by
+ // path-remapping, so we use it verbatim (and avoid
+ // cloning the whole map in the process).
+ _ if source_file.name_was_remapped => source_file.clone(),
+
+ // Otherwise expand all paths to absolute paths because
+ // any relative paths are potentially relative to a
+ // wrong directory.
+ FileName::Real(ref name) => {
+ let mut adapted = (**source_file).clone();
+ adapted.name = Path::new(&working_dir).join(name).into();
+ adapted.name_hash = {
+ let mut hasher: StableHasher = StableHasher::new();
+ adapted.name.hash(&mut hasher);
+ hasher.finish::<u128>()
+ };
+ Lrc::new(adapted)
+ },
+
+ // expanded code, not from a file
+ _ => source_file.clone(),
+ }
+ })
+ .collect::<Vec<_>>();
+
+ self.lazy(adapted.iter().map(|rc| &**rc))
+ }
+
+ fn encode_crate_root(&mut self) -> Lazy<CrateRoot<'tcx>> {
+ let is_proc_macro = self.tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
+
+ let mut i = self.position();
+
+ let crate_deps = self.encode_crate_deps();
+ let dylib_dependency_formats = self.encode_dylib_dependency_formats();
+ let dep_bytes = self.position() - i;
+
+ // Encode the lib features.
+ i = self.position();
+ let lib_features = self.encode_lib_features();
+ let lib_feature_bytes = self.position() - i;
+
+ // Encode the language items.
+ i = self.position();
+ let lang_items = self.encode_lang_items();
+ let lang_items_missing = self.encode_lang_items_missing();
+ let lang_item_bytes = self.position() - i;
+
+ // Encode the diagnostic items.
+ i = self.position();
+ let diagnostic_items = self.encode_diagnostic_items();
+ let diagnostic_item_bytes = self.position() - i;
+
+ // Encode the native libraries used
+ i = self.position();
+ let native_libraries = self.encode_native_libraries();
+ let native_lib_bytes = self.position() - i;
+
+ let foreign_modules = self.encode_foreign_modules();
+
+ // Encode source_map
+ i = self.position();
+ let source_map = self.encode_source_map();
+ let source_map_bytes = self.position() - i;
+
+ // Encode DefPathTable
+ i = self.position();
+ let def_path_table = self.encode_def_path_table();
+ let def_path_table_bytes = self.position() - i;
+
+ // Encode the def IDs of impls, for coherence checking.
+ i = self.position();
+ let impls = self.encode_impls();
+ let impl_bytes = self.position() - i;
+
+ // Encode exported symbols info.
+ i = self.position();
+ let exported_symbols = self.tcx.exported_symbols(LOCAL_CRATE);
+ let exported_symbols = self.encode_exported_symbols(&exported_symbols);
+ let exported_symbols_bytes = self.position() - i;
+
+ let tcx = self.tcx;
+
+ // Encode the items.
+ i = self.position();
+ self.encode_info_for_items();
+ let item_bytes = self.position() - i;
+
+ // Encode the allocation index
+ let interpret_alloc_index = {
+ let mut interpret_alloc_index = Vec::new();
+ let mut n = 0;
+ trace!("beginning to encode alloc ids");
+ loop {
+ let new_n = self.interpret_allocs_inverse.len();
+ // If no new allocation ids were discovered since the last iteration,
+ // we are done; otherwise, serialize the newly found ids as well.
+ if n == new_n {
+ break;
+ }
+ trace!("encoding {} further alloc ids", new_n - n);
+ for idx in n..new_n {
+ let id = self.interpret_allocs_inverse[idx];
+ let pos = self.position() as u32;
+ interpret_alloc_index.push(pos);
+ interpret::specialized_encode_alloc_id(
+ self,
+ tcx,
+ id,
+ ).unwrap();
+ }
+ n = new_n;
+ }
+ self.lazy(interpret_alloc_index)
+ };
+
+ // Encode the per-def tables.
+ i = self.position();
+ let per_def = LazyPerDefTables {
+ kind: self.per_def.kind.encode(&mut self.opaque),
+ visibility: self.per_def.visibility.encode(&mut self.opaque),
+ span: self.per_def.span.encode(&mut self.opaque),
+ attributes: self.per_def.attributes.encode(&mut self.opaque),
+ children: self.per_def.children.encode(&mut self.opaque),
+ stability: self.per_def.stability.encode(&mut self.opaque),
+ deprecation: self.per_def.deprecation.encode(&mut self.opaque),
+
+ ty: self.per_def.ty.encode(&mut self.opaque),
+ fn_sig: self.per_def.fn_sig.encode(&mut self.opaque),
+ impl_trait_ref: self.per_def.impl_trait_ref.encode(&mut self.opaque),
+ inherent_impls: self.per_def.inherent_impls.encode(&mut self.opaque),
+ variances: self.per_def.variances.encode(&mut self.opaque),
+ generics: self.per_def.generics.encode(&mut self.opaque),
+ explicit_predicates: self.per_def.explicit_predicates.encode(&mut self.opaque),
+ inferred_outlives: self.per_def.inferred_outlives.encode(&mut self.opaque),
+ super_predicates: self.per_def.super_predicates.encode(&mut self.opaque),
+
+ mir: self.per_def.mir.encode(&mut self.opaque),
+ promoted_mir: self.per_def.promoted_mir.encode(&mut self.opaque),
+ };
+ let per_def_bytes = self.position() - i;
+
+ // Encode the proc macro data
+ i = self.position();
+ let proc_macro_data = self.encode_proc_macros();
+ let proc_macro_data_bytes = self.position() - i;
+
+ let attrs = tcx.hir().krate_attrs();
+ let has_default_lib_allocator = attr::contains_name(&attrs, sym::default_lib_allocator);
+ let has_global_allocator = *tcx.sess.has_global_allocator.get();
+
+ let root = self.lazy(CrateRoot {
+ name: tcx.crate_name(LOCAL_CRATE),
+ extra_filename: tcx.sess.opts.cg.extra_filename.clone(),
+ triple: tcx.sess.opts.target_triple.clone(),
+ hash: tcx.crate_hash(LOCAL_CRATE),
+ disambiguator: tcx.sess.local_crate_disambiguator(),
+ panic_strategy: tcx.sess.panic_strategy(),
+ edition: tcx.sess.edition(),
+ has_global_allocator,
+ has_panic_handler: tcx.has_panic_handler(LOCAL_CRATE),
+ has_default_lib_allocator,
+ plugin_registrar_fn: tcx.plugin_registrar_fn(LOCAL_CRATE).map(|id| id.index),
+ proc_macro_decls_static: if is_proc_macro {
+ let id = tcx.proc_macro_decls_static(LOCAL_CRATE).unwrap();
+ Some(id.index)
+ } else {
+ None
+ },
+ proc_macro_data,
+ proc_macro_stability: if is_proc_macro {
+ tcx.lookup_stability(DefId::local(CRATE_DEF_INDEX)).cloned()
+ } else {
+ None
+ },
+ compiler_builtins: attr::contains_name(&attrs, sym::compiler_builtins),
+ needs_allocator: attr::contains_name(&attrs, sym::needs_allocator),
+ needs_panic_runtime: attr::contains_name(&attrs, sym::needs_panic_runtime),
+ no_builtins: attr::contains_name(&attrs, sym::no_builtins),
+ panic_runtime: attr::contains_name(&attrs, sym::panic_runtime),
+ profiler_runtime: attr::contains_name(&attrs, sym::profiler_runtime),
+ sanitizer_runtime: attr::contains_name(&attrs, sym::sanitizer_runtime),
+ symbol_mangling_version: tcx.sess.opts.debugging_opts.symbol_mangling_version,
+
+ crate_deps,
+ dylib_dependency_formats,
+ lib_features,
+ lang_items,
+ diagnostic_items,
+ lang_items_missing,
+ native_libraries,
+ foreign_modules,
+ source_map,
+ def_path_table,
+ impls,
+ exported_symbols,
+ interpret_alloc_index,
+ per_def,
+ });
+
+ let total_bytes = self.position();
+
+ if self.tcx.sess.meta_stats() {
+ let mut zero_bytes = 0;
+ for e in self.opaque.data.iter() {
+ if *e == 0 {
+ zero_bytes += 1;
+ }
+ }
+
+ println!("metadata stats:");
+ println!(" dep bytes: {}", dep_bytes);
+ println!(" lib feature bytes: {}", lib_feature_bytes);
+ println!(" lang item bytes: {}", lang_item_bytes);
+ println!(" diagnostic item bytes: {}", diagnostic_item_bytes);
+ println!(" native bytes: {}", native_lib_bytes);
+ println!(" source_map bytes: {}", source_map_bytes);
+ println!(" impl bytes: {}", impl_bytes);
+ println!(" exp. symbols bytes: {}", exported_symbols_bytes);
+ println!(" def-path table bytes: {}", def_path_table_bytes);
+ println!(" proc-macro-data bytes: {}", proc_macro_data_bytes);
+ println!(" item bytes: {}", item_bytes);
+ println!(" per-def table bytes: {}", per_def_bytes);
+ println!(" zero bytes: {}", zero_bytes);
+ println!(" total bytes: {}", total_bytes);
+ }
+
+ root
+ }
+}
+
+impl EncodeContext<'tcx> {
+ fn encode_variances_of(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_variances_of({:?})", def_id);
+ record!(self.per_def.variances[def_id] <- &self.tcx.variances_of(def_id)[..]);
+ }
+
+ fn encode_item_type(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_item_type({:?})", def_id);
+ record!(self.per_def.ty[def_id] <- self.tcx.type_of(def_id));
+ }
+
+ fn encode_enum_variant_info(
+ &mut self,
+ enum_did: DefId,
+ index: VariantIdx,
+ ) {
+ let tcx = self.tcx;
+ let def = tcx.adt_def(enum_did);
+ let variant = &def.variants[index];
+ let def_id = variant.def_id;
+ debug!("EncodeContext::encode_enum_variant_info({:?})", def_id);
+
+ let data = VariantData {
+ ctor_kind: variant.ctor_kind,
+ discr: variant.discr,
+ ctor: variant.ctor_def_id.map(|did| did.index),
+ };
+
+ let enum_id = tcx.hir().as_local_hir_id(enum_did).unwrap();
+ let enum_vis = &tcx.hir().expect_item(enum_id).vis;
+
+ record!(self.per_def.kind[def_id] <- EntryKind::Variant(self.lazy(data)));
+ record!(self.per_def.visibility[def_id] <-
+ ty::Visibility::from_hir(enum_vis, enum_id, self.tcx));
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ record!(self.per_def.attributes[def_id] <- &self.tcx.get_attrs(def_id)[..]);
+ record!(self.per_def.children[def_id] <- variant.fields.iter().map(|f| {
+ assert!(f.did.is_local());
+ f.did.index
+ }));
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ self.encode_item_type(def_id);
+ if variant.ctor_kind == CtorKind::Fn {
+ // FIXME(eddyb) encode signature only in `encode_enum_variant_ctor`.
+ if let Some(ctor_def_id) = variant.ctor_def_id {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(ctor_def_id));
+ }
+ // FIXME(eddyb) is this ever used?
+ self.encode_variances_of(def_id);
+ }
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+
+ fn encode_enum_variant_ctor(
+ &mut self,
+ enum_did: DefId,
+ index: VariantIdx,
+ ) {
+ let tcx = self.tcx;
+ let def = tcx.adt_def(enum_did);
+ let variant = &def.variants[index];
+ let def_id = variant.ctor_def_id.unwrap();
+ debug!("EncodeContext::encode_enum_variant_ctor({:?})", def_id);
+
+ // FIXME(eddyb) encode only the `CtorKind` for constructors.
+ let data = VariantData {
+ ctor_kind: variant.ctor_kind,
+ discr: variant.discr,
+ ctor: Some(def_id.index),
+ };
+
+ // Variant constructors have the same visibility as the parent enum, unless marked as
+ // non-exhaustive, in which case they are lowered to `pub(crate)`.
+ let enum_id = tcx.hir().as_local_hir_id(enum_did).unwrap();
+ let enum_vis = &tcx.hir().expect_item(enum_id).vis;
+ let mut ctor_vis = ty::Visibility::from_hir(enum_vis, enum_id, tcx);
+ if variant.is_field_list_non_exhaustive() && ctor_vis == ty::Visibility::Public {
+ ctor_vis = ty::Visibility::Restricted(DefId::local(CRATE_DEF_INDEX));
+ }
+
+ record!(self.per_def.kind[def_id] <- EntryKind::Variant(self.lazy(data)));
+ record!(self.per_def.visibility[def_id] <- ctor_vis);
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ self.encode_item_type(def_id);
+ if variant.ctor_kind == CtorKind::Fn {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
+ self.encode_variances_of(def_id);
+ }
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+
+ fn encode_info_for_mod(
+ &mut self,
+ id: hir::HirId,
+ md: &hir::Mod,
+ attrs: &[ast::Attribute],
+ vis: &hir::Visibility,
+ ) {
+ let tcx = self.tcx;
+ let def_id = tcx.hir().local_def_id(id);
+ debug!("EncodeContext::encode_info_for_mod({:?})", def_id);
+
+ let data = ModData {
+ reexports: match tcx.module_exports(def_id) {
+ Some(exports) => self.lazy(exports),
+ _ => Lazy::empty(),
+ },
+ };
+
+ record!(self.per_def.kind[def_id] <- EntryKind::Mod(self.lazy(data)));
+ record!(self.per_def.visibility[def_id] <- ty::Visibility::from_hir(vis, id, self.tcx));
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ record!(self.per_def.attributes[def_id] <- attrs);
+ record!(self.per_def.children[def_id] <- md.item_ids.iter().map(|item_id| {
+ tcx.hir().local_def_id(item_id.id).index
+ }));
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ }
+
+ fn encode_field(
+ &mut self,
+ adt_def_id: DefId,
+ variant_index: VariantIdx,
+ field_index: usize,
+ ) {
+ let tcx = self.tcx;
+ let variant = &tcx.adt_def(adt_def_id).variants[variant_index];
+ let field = &variant.fields[field_index];
+
+ let def_id = field.did;
+ debug!("EncodeContext::encode_field({:?})", def_id);
+
+ let variant_id = tcx.hir().as_local_hir_id(variant.def_id).unwrap();
+ let variant_data = tcx.hir().expect_variant_data(variant_id);
+
+ record!(self.per_def.kind[def_id] <- EntryKind::Field);
+ record!(self.per_def.visibility[def_id] <- field.vis);
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ record!(self.per_def.attributes[def_id] <- &variant_data.fields()[field_index].attrs);
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ self.encode_item_type(def_id);
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ }
+
+ fn encode_struct_ctor(&mut self, adt_def_id: DefId, def_id: DefId) {
+ debug!("EncodeContext::encode_struct_ctor({:?})", def_id);
+ let tcx = self.tcx;
+ let adt_def = tcx.adt_def(adt_def_id);
+ let variant = adt_def.non_enum_variant();
+
+ let data = VariantData {
+ ctor_kind: variant.ctor_kind,
+ discr: variant.discr,
+ ctor: Some(def_id.index),
+ };
+
+ let struct_id = tcx.hir().as_local_hir_id(adt_def_id).unwrap();
+ let struct_vis = &tcx.hir().expect_item(struct_id).vis;
+ let mut ctor_vis = ty::Visibility::from_hir(struct_vis, struct_id, tcx);
+ for field in &variant.fields {
+ if ctor_vis.is_at_least(field.vis, tcx) {
+ ctor_vis = field.vis;
+ }
+ }
+
+ // If the structure is marked as non_exhaustive then lower the constructor's
+ // visibility to within the crate.
+ if adt_def.non_enum_variant().is_field_list_non_exhaustive() &&
+ ctor_vis == ty::Visibility::Public
+ {
+ ctor_vis = ty::Visibility::Restricted(DefId::local(CRATE_DEF_INDEX));
+ }
+
+ record!(self.per_def.kind[def_id] <- EntryKind::Struct(self.lazy(data), adt_def.repr));
+ record!(self.per_def.visibility[def_id] <- ctor_vis);
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ self.encode_item_type(def_id);
+ if variant.ctor_kind == CtorKind::Fn {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
+ self.encode_variances_of(def_id);
+ }
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+
+ fn encode_generics(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_generics({:?})", def_id);
+ record!(self.per_def.generics[def_id] <- self.tcx.generics_of(def_id));
+ }
+
+ fn encode_explicit_predicates(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_explicit_predicates({:?})", def_id);
+ record!(self.per_def.explicit_predicates[def_id] <-
+ self.tcx.explicit_predicates_of(def_id));
+ }
+
+ fn encode_inferred_outlives(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_inferred_outlives({:?})", def_id);
+ let inferred_outlives = self.tcx.inferred_outlives_of(def_id);
+ if !inferred_outlives.is_empty() {
+ record!(self.per_def.inferred_outlives[def_id] <- inferred_outlives);
+ }
+ }
+
+ fn encode_super_predicates(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_super_predicates({:?})", def_id);
+ record!(self.per_def.super_predicates[def_id] <- self.tcx.super_predicates_of(def_id));
+ }
+
+ fn encode_info_for_trait_item(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_info_for_trait_item({:?})", def_id);
+ let tcx = self.tcx;
+
+ let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
+ let ast_item = tcx.hir().expect_trait_item(hir_id);
+ let trait_item = tcx.associated_item(def_id);
+
+ let container = match trait_item.defaultness {
+ hir::Defaultness::Default { has_value: true } =>
+ AssocContainer::TraitWithDefault,
+ hir::Defaultness::Default { has_value: false } =>
+ AssocContainer::TraitRequired,
+ hir::Defaultness::Final =>
+ span_bug!(ast_item.span, "traits cannot have final items"),
+ };
+
+ record!(self.per_def.kind[def_id] <- match trait_item.kind {
+ ty::AssocKind::Const => {
+ let rendered =
+ hir::print::to_string(self.tcx.hir(), |s| s.print_trait_item(ast_item));
+ let rendered_const = self.lazy(RenderedConst(rendered));
+
+ EntryKind::AssocConst(container, ConstQualif { mir: 0 }, rendered_const)
+ }
+ ty::AssocKind::Method => {
+ let fn_data = if let hir::TraitItemKind::Method(m_sig, m) = &ast_item.kind {
+ let param_names = match *m {
+ hir::TraitMethod::Required(ref names) => {
+ self.encode_fn_param_names(names)
+ }
+ hir::TraitMethod::Provided(body) => {
+ self.encode_fn_param_names_for_body(body)
+ }
+ };
+ FnData {
+ asyncness: m_sig.header.asyncness,
+ constness: hir::Constness::NotConst,
+ param_names,
+ }
+ } else {
+ bug!()
+ };
+ EntryKind::Method(self.lazy(MethodData {
+ fn_data,
+ container,
+ has_self: trait_item.method_has_self_argument,
+ }))
+ }
+ ty::AssocKind::Type => EntryKind::AssocType(container),
+ ty::AssocKind::OpaqueTy => span_bug!(ast_item.span, "opaque type in trait"),
+ });
+ record!(self.per_def.visibility[def_id] <- trait_item.vis);
+ record!(self.per_def.span[def_id] <- ast_item.span);
+ record!(self.per_def.attributes[def_id] <- &ast_item.attrs);
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ match trait_item.kind {
+ ty::AssocKind::Const |
+ ty::AssocKind::Method => {
+ self.encode_item_type(def_id);
+ }
+ ty::AssocKind::Type => {
+ if trait_item.defaultness.has_value() {
+ self.encode_item_type(def_id);
+ }
+ }
+ ty::AssocKind::OpaqueTy => unreachable!(),
+ }
+ if trait_item.kind == ty::AssocKind::Method {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
+ self.encode_variances_of(def_id);
+ }
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+
+ fn metadata_output_only(&self) -> bool {
+ // MIR optimisation can be skipped when we're just interested in the metadata.
+ !self.tcx.sess.opts.output_types.should_codegen()
+ }
+
+ fn encode_info_for_impl_item(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_info_for_impl_item({:?})", def_id);
+ let tcx = self.tcx;
+
+ let hir_id = self.tcx.hir().as_local_hir_id(def_id).unwrap();
+ let ast_item = self.tcx.hir().expect_impl_item(hir_id);
+ let impl_item = self.tcx.associated_item(def_id);
+
+ let container = match impl_item.defaultness {
+ hir::Defaultness::Default { has_value: true } => AssocContainer::ImplDefault,
+ hir::Defaultness::Final => AssocContainer::ImplFinal,
+ hir::Defaultness::Default { has_value: false } =>
+ span_bug!(ast_item.span, "impl items always have values (currently)"),
+ };
+
+ record!(self.per_def.kind[def_id] <- match impl_item.kind {
+ ty::AssocKind::Const => {
+ if let hir::ImplItemKind::Const(_, body_id) = ast_item.kind {
+ let mir = self.tcx.at(ast_item.span).mir_const_qualif(def_id).0;
+
+ EntryKind::AssocConst(container,
+ ConstQualif { mir },
+ self.encode_rendered_const_for_body(body_id))
+ } else {
+ bug!()
+ }
+ }
+ ty::AssocKind::Method => {
+ let fn_data = if let hir::ImplItemKind::Method(ref sig, body) = ast_item.kind {
+ FnData {
+ asyncness: sig.header.asyncness,
+ constness: sig.header.constness,
+ param_names: self.encode_fn_param_names_for_body(body),
+ }
+ } else {
+ bug!()
+ };
+ EntryKind::Method(self.lazy(MethodData {
+ fn_data,
+ container,
+ has_self: impl_item.method_has_self_argument,
+ }))
+ }
+ ty::AssocKind::OpaqueTy => EntryKind::AssocOpaqueTy(container),
+ ty::AssocKind::Type => EntryKind::AssocType(container)
+ });
+ record!(self.per_def.visibility[def_id] <- impl_item.vis);
+ record!(self.per_def.span[def_id] <- ast_item.span);
+ record!(self.per_def.attributes[def_id] <- &ast_item.attrs);
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ self.encode_item_type(def_id);
+ if impl_item.kind == ty::AssocKind::Method {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
+ self.encode_variances_of(def_id);
+ }
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ let mir = match ast_item.kind {
+ hir::ImplItemKind::Const(..) => true,
+ hir::ImplItemKind::Method(ref sig, _) => {
+ let generics = self.tcx.generics_of(def_id);
+ let needs_inline = (generics.requires_monomorphization(self.tcx) ||
+ tcx.codegen_fn_attrs(def_id).requests_inline()) &&
+ !self.metadata_output_only();
+ let is_const_fn = sig.header.constness == hir::Constness::Const;
+ let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir;
+ needs_inline || is_const_fn || always_encode_mir
+ },
+ hir::ImplItemKind::OpaqueTy(..) |
+ hir::ImplItemKind::TyAlias(..) => false,
+ };
+ if mir {
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+ }
+
+ fn encode_fn_param_names_for_body(&mut self, body_id: hir::BodyId)
+ -> Lazy<[ast::Name]> {
+ self.tcx.dep_graph.with_ignore(|| {
+ let body = self.tcx.hir().body(body_id);
+ self.lazy(body.params.iter().map(|arg| {
+ match arg.pat.kind {
+ PatKind::Binding(_, _, ident, _) => ident.name,
+ _ => kw::Invalid,
+ }
+ }))
+ })
+ }
+
+ fn encode_fn_param_names(&mut self, param_names: &[ast::Ident]) -> Lazy<[ast::Name]> {
+ self.lazy(param_names.iter().map(|ident| ident.name))
+ }
+
+ fn encode_optimized_mir(&mut self, def_id: DefId) {
+        debug!("EncodeContext::encode_optimized_mir({:?})", def_id);
+ if self.tcx.mir_keys(LOCAL_CRATE).contains(&def_id) {
+ record!(self.per_def.mir[def_id] <- self.tcx.optimized_mir(def_id));
+ }
+ }
+
+ fn encode_promoted_mir(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_promoted_mir({:?})", def_id);
+ if self.tcx.mir_keys(LOCAL_CRATE).contains(&def_id) {
+ record!(self.per_def.promoted_mir[def_id] <- self.tcx.promoted_mir(def_id));
+ }
+ }
+
+ // Encodes the inherent implementations of a structure, enumeration, or trait.
+ fn encode_inherent_implementations(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_inherent_implementations({:?})", def_id);
+ let implementations = self.tcx.inherent_impls(def_id);
+ if !implementations.is_empty() {
+ record!(self.per_def.inherent_impls[def_id] <- implementations.iter().map(|&def_id| {
+ assert!(def_id.is_local());
+ def_id.index
+ }));
+ }
+ }
+
+ fn encode_stability(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_stability({:?})", def_id);
+ if let Some(stab) = self.tcx.lookup_stability(def_id) {
+ record!(self.per_def.stability[def_id] <- stab)
+ }
+ }
+
+ fn encode_deprecation(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_deprecation({:?})", def_id);
+ if let Some(depr) = self.tcx.lookup_deprecation(def_id) {
+ record!(self.per_def.deprecation[def_id] <- depr);
+ }
+ }
+
+ fn encode_rendered_const_for_body(&mut self, body_id: hir::BodyId) -> Lazy<RenderedConst> {
+ let body = self.tcx.hir().body(body_id);
+ let rendered = hir::print::to_string(self.tcx.hir(), |s| s.print_expr(&body.value));
+ let rendered_const = &RenderedConst(rendered);
+ self.lazy(rendered_const)
+ }
+
+ fn encode_info_for_item(&mut self, def_id: DefId, item: &'tcx hir::Item) {
+ let tcx = self.tcx;
+
+ debug!("EncodeContext::encode_info_for_item({:?})", def_id);
+
+ record!(self.per_def.kind[def_id] <- match item.kind {
+ hir::ItemKind::Static(_, hir::MutMutable, _) => EntryKind::MutStatic,
+ hir::ItemKind::Static(_, hir::MutImmutable, _) => EntryKind::ImmStatic,
+ hir::ItemKind::Const(_, body_id) => {
+ let mir = self.tcx.at(item.span).mir_const_qualif(def_id).0;
+ EntryKind::Const(
+ ConstQualif { mir },
+ self.encode_rendered_const_for_body(body_id)
+ )
+ }
+ hir::ItemKind::Fn(_, header, .., body) => {
+ let data = FnData {
+ asyncness: header.asyncness,
+ constness: header.constness,
+ param_names: self.encode_fn_param_names_for_body(body),
+ };
+
+ EntryKind::Fn(self.lazy(data))
+ }
+ hir::ItemKind::Mod(ref m) => {
+ return self.encode_info_for_mod(item.hir_id, m, &item.attrs, &item.vis);
+ }
+ hir::ItemKind::ForeignMod(_) => EntryKind::ForeignMod,
+ hir::ItemKind::GlobalAsm(..) => EntryKind::GlobalAsm,
+ hir::ItemKind::TyAlias(..) => EntryKind::Type,
+ hir::ItemKind::OpaqueTy(..) => EntryKind::OpaqueTy,
+ hir::ItemKind::Enum(..) => EntryKind::Enum(self.tcx.adt_def(def_id).repr),
+ hir::ItemKind::Struct(ref struct_def, _) => {
+ let adt_def = self.tcx.adt_def(def_id);
+ let variant = adt_def.non_enum_variant();
+
+                // Encode def_ids for each field and method.
+                // For methods, write all the stuff `get_trait_method`
+                // needs to know.
+ let ctor = struct_def.ctor_hir_id().map(|ctor_hir_id| {
+ self.tcx.hir().local_def_id(ctor_hir_id).index
+ });
+
+ EntryKind::Struct(self.lazy(VariantData {
+ ctor_kind: variant.ctor_kind,
+ discr: variant.discr,
+ ctor,
+ }), adt_def.repr)
+ }
+ hir::ItemKind::Union(..) => {
+ let adt_def = self.tcx.adt_def(def_id);
+ let variant = adt_def.non_enum_variant();
+
+ EntryKind::Union(self.lazy(VariantData {
+ ctor_kind: variant.ctor_kind,
+ discr: variant.discr,
+ ctor: None,
+ }), adt_def.repr)
+ }
+ hir::ItemKind::Impl(_, _, defaultness, ..) => {
+ let trait_ref = self.tcx.impl_trait_ref(def_id);
+ let polarity = self.tcx.impl_polarity(def_id);
+ let parent = if let Some(trait_ref) = trait_ref {
+ let trait_def = self.tcx.trait_def(trait_ref.def_id);
+ trait_def.ancestors(self.tcx, def_id).nth(1).and_then(|node| {
+ match node {
+ specialization_graph::Node::Impl(parent) => Some(parent),
+ _ => None,
+ }
+ })
+ } else {
+ None
+ };
+
+            // If this is an impl of `CoerceUnsized`, create its
+            // "unsized info"; otherwise just store `None`.
+ let coerce_unsized_info =
+ trait_ref.and_then(|t| {
+ if Some(t.def_id) == self.tcx.lang_items().coerce_unsized_trait() {
+ Some(self.tcx.at(item.span).coerce_unsized_info(def_id))
+ } else {
+ None
+ }
+ });
+
+ let data = ImplData {
+ polarity,
+ defaultness,
+ parent_impl: parent,
+ coerce_unsized_info,
+ };
+
+ EntryKind::Impl(self.lazy(data))
+ }
+ hir::ItemKind::Trait(..) => {
+ let trait_def = self.tcx.trait_def(def_id);
+ let data = TraitData {
+ unsafety: trait_def.unsafety,
+ paren_sugar: trait_def.paren_sugar,
+ has_auto_impl: self.tcx.trait_is_auto(def_id),
+ is_marker: trait_def.is_marker,
+ };
+
+ EntryKind::Trait(self.lazy(data))
+ }
+ hir::ItemKind::TraitAlias(..) => EntryKind::TraitAlias,
+ hir::ItemKind::ExternCrate(_) |
+ hir::ItemKind::Use(..) => bug!("cannot encode info for item {:?}", item),
+ });
+ record!(self.per_def.visibility[def_id] <-
+ ty::Visibility::from_hir(&item.vis, item.hir_id, tcx));
+ record!(self.per_def.span[def_id] <- item.span);
+ record!(self.per_def.attributes[def_id] <- &item.attrs);
+ // FIXME(eddyb) there should be a nicer way to do this.
+ match item.kind {
+ hir::ItemKind::ForeignMod(ref fm) => record!(self.per_def.children[def_id] <-
+ fm.items
+ .iter()
+ .map(|foreign_item| tcx.hir().local_def_id(
+ foreign_item.hir_id).index)
+ ),
+ hir::ItemKind::Enum(..) => record!(self.per_def.children[def_id] <-
+ self.tcx.adt_def(def_id).variants.iter().map(|v| {
+ assert!(v.def_id.is_local());
+ v.def_id.index
+ })
+ ),
+ hir::ItemKind::Struct(..) |
+ hir::ItemKind::Union(..) => record!(self.per_def.children[def_id] <-
+ self.tcx.adt_def(def_id).non_enum_variant().fields.iter().map(|f| {
+ assert!(f.did.is_local());
+ f.did.index
+ })
+ ),
+ hir::ItemKind::Impl(..) |
+ hir::ItemKind::Trait(..) => {
+ let associated_item_def_ids = self.tcx.associated_item_def_ids(def_id);
+ record!(self.per_def.children[def_id] <-
+ associated_item_def_ids.iter().map(|&def_id| {
+ assert!(def_id.is_local());
+ def_id.index
+ })
+ );
+ }
+ _ => {}
+ }
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ match item.kind {
+ hir::ItemKind::Static(..) |
+ hir::ItemKind::Const(..) |
+ hir::ItemKind::Fn(..) |
+ hir::ItemKind::TyAlias(..) |
+ hir::ItemKind::OpaqueTy(..) |
+ hir::ItemKind::Enum(..) |
+ hir::ItemKind::Struct(..) |
+ hir::ItemKind::Union(..) |
+ hir::ItemKind::Impl(..) => self.encode_item_type(def_id),
+ _ => {}
+ }
+ if let hir::ItemKind::Fn(..) = item.kind {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
+ }
+ if let hir::ItemKind::Impl(..) = item.kind {
+ if let Some(trait_ref) = self.tcx.impl_trait_ref(def_id) {
+ record!(self.per_def.impl_trait_ref[def_id] <- trait_ref);
+ }
+ }
+ self.encode_inherent_implementations(def_id);
+ match item.kind {
+ hir::ItemKind::Enum(..) |
+ hir::ItemKind::Struct(..) |
+ hir::ItemKind::Union(..) |
+ hir::ItemKind::Fn(..) => self.encode_variances_of(def_id),
+ _ => {}
+ }
+ match item.kind {
+ hir::ItemKind::Static(..) |
+ hir::ItemKind::Const(..) |
+ hir::ItemKind::Fn(..) |
+ hir::ItemKind::TyAlias(..) |
+ hir::ItemKind::Enum(..) |
+ hir::ItemKind::Struct(..) |
+ hir::ItemKind::Union(..) |
+ hir::ItemKind::Impl(..) |
+ hir::ItemKind::OpaqueTy(..) |
+ hir::ItemKind::Trait(..) |
+ hir::ItemKind::TraitAlias(..) => {
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ }
+ _ => {}
+ }
+ match item.kind {
+ hir::ItemKind::Trait(..) |
+ hir::ItemKind::TraitAlias(..) => {
+ self.encode_super_predicates(def_id);
+ }
+ _ => {}
+ }
+
+ let mir = match item.kind {
+ hir::ItemKind::Static(..) | hir::ItemKind::Const(..) => true,
+ hir::ItemKind::Fn(_, header, ..) => {
+ let generics = tcx.generics_of(def_id);
+ let needs_inline =
+ (generics.requires_monomorphization(tcx) ||
+ tcx.codegen_fn_attrs(def_id).requests_inline()) &&
+ !self.metadata_output_only();
+ let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir;
+ needs_inline || header.constness == hir::Constness::Const || always_encode_mir
+ }
+ _ => false,
+ };
+ if mir {
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+ }
+
+    /// Serializes the text of exported macros.
+ fn encode_info_for_macro_def(&mut self, macro_def: &hir::MacroDef) {
+ use syntax::print::pprust;
+ let def_id = self.tcx.hir().local_def_id(macro_def.hir_id);
+ record!(self.per_def.kind[def_id] <- EntryKind::MacroDef(self.lazy(MacroDef {
+ body: pprust::tts_to_string(macro_def.body.clone()),
+ legacy: macro_def.legacy,
+ })));
+ record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
+ record!(self.per_def.span[def_id] <- macro_def.span);
+        record!(self.per_def.attributes[def_id] <- &macro_def.attrs);
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ }
+
+ fn encode_info_for_generic_param(
+ &mut self,
+ def_id: DefId,
+ kind: EntryKind<'tcx>,
+ encode_type: bool,
+ ) {
+ record!(self.per_def.kind[def_id] <- kind);
+ record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ if encode_type {
+ self.encode_item_type(def_id);
+ }
+ }
+
+ fn encode_info_for_closure(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_info_for_closure({:?})", def_id);
+
+ // NOTE(eddyb) `tcx.type_of(def_id)` isn't used because it's fully generic,
+        // including on the signature, which is inferred in `typeck_tables_of`.
+ let hir_id = self.tcx.hir().as_local_hir_id(def_id).unwrap();
+ let ty = self.tcx.typeck_tables_of(def_id).node_type(hir_id);
+
+ record!(self.per_def.kind[def_id] <- match ty.kind {
+ ty::Generator(def_id, ..) => {
+ let layout = self.tcx.generator_layout(def_id);
+ let data = GeneratorData {
+ layout: layout.clone(),
+ };
+ EntryKind::Generator(self.lazy(data))
+ }
+
+ ty::Closure(..) => EntryKind::Closure,
+
+ _ => bug!("closure that is neither generator nor closure"),
+ });
+ record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ record!(self.per_def.attributes[def_id] <- &self.tcx.get_attrs(def_id)[..]);
+ self.encode_item_type(def_id);
+ if let ty::Closure(def_id, substs) = ty.kind {
+ record!(self.per_def.fn_sig[def_id] <- substs.as_closure().sig(def_id, self.tcx));
+ }
+ self.encode_generics(def_id);
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+
+ fn encode_info_for_anon_const(&mut self, def_id: DefId) {
+ debug!("EncodeContext::encode_info_for_anon_const({:?})", def_id);
+ let id = self.tcx.hir().as_local_hir_id(def_id).unwrap();
+ let body_id = self.tcx.hir().body_owned_by(id);
+ let const_data = self.encode_rendered_const_for_body(body_id);
+ let mir = self.tcx.mir_const_qualif(def_id).0;
+
+ record!(self.per_def.kind[def_id] <- EntryKind::Const(ConstQualif { mir }, const_data));
+ record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
+ record!(self.per_def.span[def_id] <- self.tcx.def_span(def_id));
+ self.encode_item_type(def_id);
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ self.encode_optimized_mir(def_id);
+ self.encode_promoted_mir(def_id);
+ }
+
+ fn encode_native_libraries(&mut self) -> Lazy<[NativeLibrary]> {
+ let used_libraries = self.tcx.native_libraries(LOCAL_CRATE);
+ self.lazy(used_libraries.iter().cloned())
+ }
+
+ fn encode_foreign_modules(&mut self) -> Lazy<[ForeignModule]> {
+ let foreign_modules = self.tcx.foreign_modules(LOCAL_CRATE);
+ self.lazy(foreign_modules.iter().cloned())
+ }
+
+ fn encode_proc_macros(&mut self) -> Option<Lazy<[DefIndex]>> {
+ let is_proc_macro = self.tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
+ if is_proc_macro {
+ let tcx = self.tcx;
+ Some(self.lazy(tcx.hir().krate().items.values().filter_map(|item| {
+ if item.attrs.iter().any(|attr| is_proc_macro_attr(attr)) {
+ Some(item.hir_id.owner)
+ } else {
+ None
+ }
+ })))
+ } else {
+ None
+ }
+ }
+
+ fn encode_crate_deps(&mut self) -> Lazy<[CrateDep]> {
+ let crates = self.tcx.crates();
+
+ let mut deps = crates
+ .iter()
+ .map(|&cnum| {
+ let dep = CrateDep {
+ name: self.tcx.original_crate_name(cnum),
+ hash: self.tcx.crate_hash(cnum),
+ host_hash: self.tcx.crate_host_hash(cnum),
+ kind: self.tcx.dep_kind(cnum),
+ extra_filename: self.tcx.extra_filename(cnum),
+ };
+ (cnum, dep)
+ })
+ .collect::<Vec<_>>();
+
+ deps.sort_by_key(|&(cnum, _)| cnum);
+
+ {
+ // Sanity-check the crate numbers
+ let mut expected_cnum = 1;
+ for &(n, _) in &deps {
+ assert_eq!(n, CrateNum::new(expected_cnum));
+ expected_cnum += 1;
+ }
+ }
+
+ // We're just going to write a list of crate 'name-hash-version's, with
+ // the assumption that they are numbered 1 to n.
+ // FIXME (#2166): This is not nearly enough to support correct versioning
+ // but is enough to get transitive crate dependencies working.
+ self.lazy(deps.iter().map(|&(_, ref dep)| dep))
+ }
+
+ fn encode_lib_features(&mut self) -> Lazy<[(ast::Name, Option<ast::Name>)]> {
+ let tcx = self.tcx;
+ let lib_features = tcx.lib_features();
+ self.lazy(lib_features.to_vec())
+ }
+
+ fn encode_diagnostic_items(&mut self) -> Lazy<[(Symbol, DefIndex)]> {
+ let tcx = self.tcx;
+ let diagnostic_items = tcx.diagnostic_items(LOCAL_CRATE);
+ self.lazy(diagnostic_items.iter().map(|(&name, def_id)| (name, def_id.index)))
+ }
+
+ fn encode_lang_items(&mut self) -> Lazy<[(DefIndex, usize)]> {
+ let tcx = self.tcx;
+ let lang_items = tcx.lang_items();
+ let lang_items = lang_items.items().iter();
+ self.lazy(lang_items.enumerate().filter_map(|(i, &opt_def_id)| {
+ if let Some(def_id) = opt_def_id {
+ if def_id.is_local() {
+ return Some((def_id.index, i));
+ }
+ }
+ None
+ }))
+ }
+
+ fn encode_lang_items_missing(&mut self) -> Lazy<[lang_items::LangItem]> {
+ let tcx = self.tcx;
+ self.lazy(&tcx.lang_items().missing)
+ }
+
+ /// Encodes an index, mapping each trait to its (local) implementations.
+ fn encode_impls(&mut self) -> Lazy<[TraitImpls]> {
+ debug!("EncodeContext::encode_impls()");
+ let tcx = self.tcx;
+ let mut visitor = ImplVisitor {
+ tcx,
+ impls: FxHashMap::default(),
+ };
+ tcx.hir().krate().visit_all_item_likes(&mut visitor);
+
+ let mut all_impls: Vec<_> = visitor.impls.into_iter().collect();
+
+ // Bring everything into deterministic order for hashing
+ all_impls.sort_by_cached_key(|&(trait_def_id, _)| {
+ tcx.def_path_hash(trait_def_id)
+ });
+
+ let all_impls: Vec<_> = all_impls
+ .into_iter()
+ .map(|(trait_def_id, mut impls)| {
+ // Bring everything into deterministic order for hashing
+ impls.sort_by_cached_key(|&def_index| {
+ tcx.hir().definitions().def_path_hash(def_index)
+ });
+
+ TraitImpls {
+ trait_id: (trait_def_id.krate.as_u32(), trait_def_id.index),
+ impls: self.lazy(&impls),
+ }
+ })
+ .collect();
+
+ self.lazy(&all_impls)
+ }
+
+ // Encodes all symbols exported from this crate into the metadata.
+ //
+ // This pass is seeded off the reachability list calculated in the
+ // middle::reachable module but filters out items that either don't have a
+ // symbol associated with them (they weren't translated) or if they're an FFI
+ // definition (as that's not defined in this crate).
+ fn encode_exported_symbols(&mut self,
+ exported_symbols: &[(ExportedSymbol<'tcx>, SymbolExportLevel)])
+ -> Lazy<[(ExportedSymbol<'tcx>, SymbolExportLevel)]> {
+ // The metadata symbol name is special. It should not show up in
+ // downstream crates.
+ let metadata_symbol_name = SymbolName::new(&metadata_symbol_name(self.tcx));
+
+ self.lazy(exported_symbols
+ .iter()
+ .filter(|&&(ref exported_symbol, _)| {
+ match *exported_symbol {
+ ExportedSymbol::NoDefId(symbol_name) => {
+ symbol_name != metadata_symbol_name
+ },
+ _ => true,
+ }
+ })
+ .cloned())
+ }
+
+ fn encode_dylib_dependency_formats(&mut self) -> Lazy<[Option<LinkagePreference>]> {
+ let formats = self.tcx.dependency_formats(LOCAL_CRATE);
+ for (ty, arr) in formats.iter() {
+ if *ty != config::CrateType::Dylib {
+ continue;
+ }
+ return self.lazy(arr.iter().map(|slot| {
+ match *slot {
+ Linkage::NotLinked |
+ Linkage::IncludedFromDylib => None,
+
+ Linkage::Dynamic => Some(LinkagePreference::RequireDynamic),
+ Linkage::Static => Some(LinkagePreference::RequireStatic),
+ }
+ }));
+ }
+ Lazy::empty()
+ }
+
+ fn encode_info_for_foreign_item(
+ &mut self,
+ def_id: DefId,
+ nitem: &hir::ForeignItem,
+ ) {
+ let tcx = self.tcx;
+
+ debug!("EncodeContext::encode_info_for_foreign_item({:?})", def_id);
+
+ record!(self.per_def.kind[def_id] <- match nitem.kind {
+ hir::ForeignItemKind::Fn(_, ref names, _) => {
+ let data = FnData {
+ asyncness: hir::IsAsync::NotAsync,
+ constness: hir::Constness::NotConst,
+ param_names: self.encode_fn_param_names(names),
+ };
+ EntryKind::ForeignFn(self.lazy(data))
+ }
+ hir::ForeignItemKind::Static(_, hir::MutMutable) => EntryKind::ForeignMutStatic,
+ hir::ForeignItemKind::Static(_, hir::MutImmutable) => EntryKind::ForeignImmStatic,
+ hir::ForeignItemKind::Type => EntryKind::ForeignType,
+ });
+ record!(self.per_def.visibility[def_id] <-
+ ty::Visibility::from_hir(&nitem.vis, nitem.hir_id, self.tcx));
+ record!(self.per_def.span[def_id] <- nitem.span);
+ record!(self.per_def.attributes[def_id] <- &nitem.attrs);
+ self.encode_stability(def_id);
+ self.encode_deprecation(def_id);
+ self.encode_item_type(def_id);
+ if let hir::ForeignItemKind::Fn(..) = nitem.kind {
+ record!(self.per_def.fn_sig[def_id] <- tcx.fn_sig(def_id));
+ self.encode_variances_of(def_id);
+ }
+ self.encode_generics(def_id);
+ self.encode_explicit_predicates(def_id);
+ self.encode_inferred_outlives(def_id);
+ }
+}
+
+// FIXME(eddyb) make metadata encoding walk over all definitions, instead of HIR.
+impl Visitor<'tcx> for EncodeContext<'tcx> {
+ fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> {
+ NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ }
+ fn visit_expr(&mut self, ex: &'tcx hir::Expr) {
+ intravisit::walk_expr(self, ex);
+ self.encode_info_for_expr(ex);
+ }
+ fn visit_anon_const(&mut self, c: &'tcx AnonConst) {
+ intravisit::walk_anon_const(self, c);
+ let def_id = self.tcx.hir().local_def_id(c.hir_id);
+ self.encode_info_for_anon_const(def_id);
+ }
+ fn visit_item(&mut self, item: &'tcx hir::Item) {
+ intravisit::walk_item(self, item);
+ let def_id = self.tcx.hir().local_def_id(item.hir_id);
+ match item.kind {
+ hir::ItemKind::ExternCrate(_) |
+ hir::ItemKind::Use(..) => {} // ignore these
+ _ => self.encode_info_for_item(def_id, item),
+ }
+ self.encode_addl_info_for_item(item);
+ }
+ fn visit_foreign_item(&mut self, ni: &'tcx hir::ForeignItem) {
+ intravisit::walk_foreign_item(self, ni);
+ let def_id = self.tcx.hir().local_def_id(ni.hir_id);
+ self.encode_info_for_foreign_item(def_id, ni);
+ }
+ fn visit_generics(&mut self, generics: &'tcx hir::Generics) {
+ intravisit::walk_generics(self, generics);
+ self.encode_info_for_generics(generics);
+ }
+ fn visit_macro_def(&mut self, macro_def: &'tcx hir::MacroDef) {
+ self.encode_info_for_macro_def(macro_def);
+ }
+}
+
+impl EncodeContext<'tcx> {
+ fn encode_fields(&mut self, adt_def_id: DefId) {
+ let def = self.tcx.adt_def(adt_def_id);
+ for (variant_index, variant) in def.variants.iter_enumerated() {
+ for (field_index, _field) in variant.fields.iter().enumerate() {
+ // FIXME(eddyb) `adt_def_id` is leftover from incremental isolation,
+ // pass `def`, `variant` or `field` instead.
+ self.encode_field(adt_def_id, variant_index, field_index);
+ }
+ }
+ }
+
+ fn encode_info_for_generics(&mut self, generics: &hir::Generics) {
+ for param in &generics.params {
+ let def_id = self.tcx.hir().local_def_id(param.hir_id);
+ match param.kind {
+ GenericParamKind::Lifetime { .. } => continue,
+ GenericParamKind::Type { ref default, .. } => {
+ self.encode_info_for_generic_param(
+ def_id,
+ EntryKind::TypeParam,
+ default.is_some(),
+ );
+ }
+ GenericParamKind::Const { .. } => {
+ self.encode_info_for_generic_param(def_id, EntryKind::ConstParam, true);
+ }
+ }
+ }
+ }
+
+ fn encode_info_for_expr(&mut self, expr: &hir::Expr) {
+ match expr.kind {
+ hir::ExprKind::Closure(..) => {
+ let def_id = self.tcx.hir().local_def_id(expr.hir_id);
+ self.encode_info_for_closure(def_id);
+ }
+ _ => {}
+ }
+ }
+
+ /// In some cases, along with the item itself, we also
+ /// encode some sub-items. Usually we want some info from the item
+    /// so it's easier to do that here than to wait until we would encounter it
+    /// normally in the visitor walk.
+ fn encode_addl_info_for_item(&mut self, item: &hir::Item) {
+ let def_id = self.tcx.hir().local_def_id(item.hir_id);
+ match item.kind {
+ hir::ItemKind::Static(..) |
+ hir::ItemKind::Const(..) |
+ hir::ItemKind::Fn(..) |
+ hir::ItemKind::Mod(..) |
+ hir::ItemKind::ForeignMod(..) |
+ hir::ItemKind::GlobalAsm(..) |
+ hir::ItemKind::ExternCrate(..) |
+ hir::ItemKind::Use(..) |
+ hir::ItemKind::TyAlias(..) |
+ hir::ItemKind::OpaqueTy(..) |
+ hir::ItemKind::TraitAlias(..) => {
+ // no sub-item recording needed in these cases
+ }
+ hir::ItemKind::Enum(..) => {
+ self.encode_fields(def_id);
+
+ let def = self.tcx.adt_def(def_id);
+ for (i, variant) in def.variants.iter_enumerated() {
+ // FIXME(eddyb) `def_id` is leftover from incremental isolation,
+ // pass `def` or `variant` instead.
+ self.encode_enum_variant_info(def_id, i);
+
+ // FIXME(eddyb) `def_id` is leftover from incremental isolation,
+ // pass `def`, `variant` or `ctor_def_id` instead.
+ if let Some(_ctor_def_id) = variant.ctor_def_id {
+ self.encode_enum_variant_ctor(def_id, i);
+ }
+ }
+ }
+ hir::ItemKind::Struct(ref struct_def, _) => {
+ self.encode_fields(def_id);
+
+ // If the struct has a constructor, encode it.
+ if let Some(ctor_hir_id) = struct_def.ctor_hir_id() {
+ let ctor_def_id = self.tcx.hir().local_def_id(ctor_hir_id);
+ self.encode_struct_ctor(def_id, ctor_def_id);
+ }
+ }
+ hir::ItemKind::Union(..) => {
+ self.encode_fields(def_id);
+ }
+ hir::ItemKind::Impl(..) => {
+ for &trait_item_def_id in self.tcx.associated_item_def_ids(def_id).iter() {
+ self.encode_info_for_impl_item(trait_item_def_id);
+ }
+ }
+ hir::ItemKind::Trait(..) => {
+ for &item_def_id in self.tcx.associated_item_def_ids(def_id).iter() {
+ self.encode_info_for_trait_item(item_def_id);
+ }
+ }
+ }
+ }
+}
+
+struct ImplVisitor<'tcx> {
+ tcx: TyCtxt<'tcx>,
+ impls: FxHashMap<DefId, Vec<DefIndex>>,
+}
+
+impl<'tcx, 'v> ItemLikeVisitor<'v> for ImplVisitor<'tcx> {
+ fn visit_item(&mut self, item: &hir::Item) {
+ if let hir::ItemKind::Impl(..) = item.kind {
+ let impl_id = self.tcx.hir().local_def_id(item.hir_id);
+ if let Some(trait_ref) = self.tcx.impl_trait_ref(impl_id) {
+ self.impls
+ .entry(trait_ref.def_id)
+ .or_default()
+ .push(impl_id.index);
+ }
+ }
+ }
+
+ fn visit_trait_item(&mut self, _trait_item: &'v hir::TraitItem) {}
+
+ fn visit_impl_item(&mut self, _impl_item: &'v hir::ImplItem) {
+ // handled in `visit_item` above
+ }
+}
+
+// NOTE(eddyb) The following comment was preserved for posterity, even
+// though it's no longer relevant as EBML (which uses nested & tagged
+// "documents") was replaced with a scheme that can't go out of bounds.
+//
+// And here we run into yet another obscure archive bug: in which metadata
+// loaded from archives may have trailing garbage bytes. Awhile back one of
+// our tests was failing sporadically on the macOS 64-bit builders (both nopt
+// and opt) by having ebml generate an out-of-bounds panic when looking at
+// metadata.
+//
+// Upon investigation it turned out that the metadata file inside of an rlib
+// (and ar archive) was being corrupted. Some compilations would generate a
+// metadata file which would end in a few extra bytes, while other
+// compilations would not have these extra bytes appended to the end. These
+// extra bytes were interpreted by ebml as an extra tag, so they ended up
+// being interpreted, causing the out-of-bounds panic.
+//
+// The root cause of why these extra bytes were appearing was never
+// discovered, and in the meantime the solution we're employing is to insert
+// the length of the metadata at the start of the metadata. Later on this
+// will allow us to slice the metadata to the precise length that we just
+// generated regardless of trailing bytes that end up in it.
+
+pub(super) fn encode_metadata(tcx: TyCtxt<'_>) -> EncodedMetadata {
+ let mut encoder = opaque::Encoder::new(vec![]);
+ encoder.emit_raw_bytes(METADATA_HEADER);
+
+ // Will be filled with the root position after encoding everything.
+ encoder.emit_raw_bytes(&[0, 0, 0, 0]);
+
+ // Since encoding metadata is not in a query, and nothing is cached,
+ // there's no need to do dep-graph tracking for any of it.
+ let (root, mut result) = tcx.dep_graph.with_ignore(move || {
+ let mut ecx = EncodeContext {
+ opaque: encoder,
+ tcx,
+ per_def: Default::default(),
+ lazy_state: LazyState::NoNode,
+ type_shorthands: Default::default(),
+ predicate_shorthands: Default::default(),
+ source_file_cache: tcx.sess.source_map().files()[0].clone(),
+ interpret_allocs: Default::default(),
+ interpret_allocs_inverse: Default::default(),
+ };
+
+ // Encode the rustc version string in a predictable location.
+ rustc_version().encode(&mut ecx).unwrap();
+
+ // Encode all the entries and extra information in the crate,
+ // culminating in the `CrateRoot` which points to all of it.
+ let root = ecx.encode_crate_root();
+ (root, ecx.opaque.into_inner())
+ });
+
+ // Encode the root position.
+ let header = METADATA_HEADER.len();
+ let pos = root.position.get();
+ result[header + 0] = (pos >> 24) as u8;
+ result[header + 1] = (pos >> 16) as u8;
+ result[header + 2] = (pos >> 8) as u8;
+ result[header + 3] = (pos >> 0) as u8;
+
+ EncodedMetadata { raw_data: result }
+}
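The byte-twiddling at the end of `encode_metadata` can be hard to read in isolation. As a standalone sketch (not rustc code; `HEADER_LEN` and the helper names are invented for illustration), the root position is a big-endian `u32` patched into the four bytes immediately after the header, and the decoder reassembles it with the mirror-image shifts:

```rust
// Hypothetical helpers illustrating the big-endian root-position patching
// done at the end of `encode_metadata`; rustc inlines this logic directly.
const HEADER_LEN: usize = 12; // METADATA_HEADER.len()

fn write_root_pos(blob: &mut [u8], pos: usize) {
    // Same shifts as in `encode_metadata`: most significant byte first.
    blob[HEADER_LEN] = (pos >> 24) as u8;
    blob[HEADER_LEN + 1] = (pos >> 16) as u8;
    blob[HEADER_LEN + 2] = (pos >> 8) as u8;
    blob[HEADER_LEN + 3] = pos as u8;
}

fn read_root_pos(blob: &[u8]) -> usize {
    // Decoding side: reassemble the big-endian u32.
    ((blob[HEADER_LEN] as usize) << 24)
        | ((blob[HEADER_LEN + 1] as usize) << 16)
        | ((blob[HEADER_LEN + 2] as usize) << 8)
        | (blob[HEADER_LEN + 3] as usize)
}

fn main() {
    let mut blob = vec![0u8; HEADER_LEN + 4];
    write_root_pos(&mut blob, 0x0102_0304);
    // The position round-trips through the 4 header-adjacent bytes.
    assert_eq!(read_root_pos(&blob), 0x0102_0304);
}
```

Writing the length-like position up front is what lets the loader slice the metadata precisely, regardless of trailing garbage bytes in the archive (see the preserved comment above).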
--- /dev/null
+use decoder::Metadata;
+use table::PerDefTable;
+
+use rustc::hir;
+use rustc::hir::def::{self, CtorKind};
+use rustc::hir::def_id::{DefIndex, DefId};
+use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel};
+use rustc::middle::cstore::{DepKind, LinkagePreference, NativeLibrary, ForeignModule};
+use rustc::middle::lang_items;
+use rustc::mir;
+use rustc::session::CrateDisambiguator;
+use rustc::session::config::SymbolManglingVersion;
+use rustc::ty::{self, Ty, ReprOptions};
+use rustc_target::spec::{PanicStrategy, TargetTriple};
+use rustc_index::vec::IndexVec;
+use rustc_data_structures::svh::Svh;
+use rustc_serialize::Encodable;
+use syntax::{ast, attr};
+use syntax::edition::Edition;
+use syntax::symbol::Symbol;
+use syntax_pos::{self, Span};
+
+use std::marker::PhantomData;
+use std::num::NonZeroUsize;
+
+pub use decoder::{provide, provide_extern};
+
+mod decoder;
+mod encoder;
+mod table;
+
+crate fn rustc_version() -> String {
+ format!("rustc {}",
+ option_env!("CFG_VERSION").unwrap_or("unknown version"))
+}
+
+/// Metadata encoding version.
+/// N.B., increment this if you change the format of metadata such that
+/// the rustc version can't be found to compare with `rustc_version()`.
+const METADATA_VERSION: u8 = 4;
+
+/// Metadata header which includes `METADATA_VERSION`.
+/// To get older versions of rustc to ignore this metadata,
+/// there are 4 zero bytes at the start, which are treated
+/// as a length of 0 by old compilers.
+///
+/// This header is followed by the position of the `CrateRoot`,
+/// which is encoded as a 32-bit big-endian unsigned integer,
+/// and further followed by the rustc version string.
+crate const METADATA_HEADER: &[u8; 12] =
+ &[0, 0, 0, 0, b'r', b'u', b's', b't', 0, 0, 0, METADATA_VERSION];
+
+/// Additional metadata for a `Lazy<T>` where `T` may not be `Sized`,
+/// e.g. for `Lazy<[T]>`, this is the length (count of `T` values).
+crate trait LazyMeta {
+ type Meta: Copy + 'static;
+
+ /// Returns the minimum encoded size.
+ // FIXME(eddyb) Give better estimates for certain types.
+ fn min_size(meta: Self::Meta) -> usize;
+}
+
+impl<T: Encodable> LazyMeta for T {
+ type Meta = ();
+
+ fn min_size(_: ()) -> usize {
+ assert_ne!(std::mem::size_of::<T>(), 0);
+ 1
+ }
+}
+
+impl<T: Encodable> LazyMeta for [T] {
+ type Meta = usize;
+
+ fn min_size(len: usize) -> usize {
+ len * T::min_size(())
+ }
+}
+
+/// A value of type T referred to by its absolute position
+/// in the metadata, and which can be decoded lazily.
+///
+/// Metadata is effectively a tree, encoded in post-order,
+/// and with the root's position written next to the header.
+/// That means every single `Lazy` points to some previous
+/// location in the metadata and is part of a larger node.
+///
+/// The first `Lazy` in a node is encoded as the backwards
+/// distance from the position where the containing node
+/// starts and where the `Lazy` points to, while the rest
+/// use the forward distance from the previous `Lazy`.
+/// Distances start at 1, as 0-byte nodes are invalid.
+/// Also invalid are nodes being referred to in a different
+/// order than they were encoded in.
+///
+/// # Sequences (`Lazy<[T]>`)
+///
+/// Unlike `Lazy<Vec<T>>`, the length is encoded next to the
+/// position, not at the position, which means that the length
+/// doesn't need to be known before encoding all the elements.
+///
+/// If the length is 0, no position is encoded, but otherwise,
+/// the encoding is that of `Lazy`, with the distinction that
+/// the minimal distance is the length of the sequence, i.e.
+/// it's assumed there's no 0-byte element in the sequence.
+#[must_use]
+// FIXME(#59875) the `Meta` parameter only exists to dodge
+// invariance wrt `T` (coming from the `meta: T::Meta` field).
+crate struct Lazy<T, Meta = <T as LazyMeta>::Meta>
+ where T: ?Sized + LazyMeta<Meta = Meta>,
+ Meta: 'static + Copy,
+{
+ position: NonZeroUsize,
+ meta: Meta,
+ _marker: PhantomData<T>,
+}
+
+impl<T: ?Sized + LazyMeta> Lazy<T> {
+ fn from_position_and_meta(position: NonZeroUsize, meta: T::Meta) -> Lazy<T> {
+ Lazy {
+ position,
+ meta,
+ _marker: PhantomData,
+ }
+ }
+}
+
+impl<T: Encodable> Lazy<T> {
+ fn from_position(position: NonZeroUsize) -> Lazy<T> {
+ Lazy::from_position_and_meta(position, ())
+ }
+}
+
+impl<T: Encodable> Lazy<[T]> {
+ fn empty() -> Lazy<[T]> {
+ Lazy::from_position_and_meta(NonZeroUsize::new(1).unwrap(), 0)
+ }
+}
+
+impl<T: ?Sized + LazyMeta> Copy for Lazy<T> {}
+impl<T: ?Sized + LazyMeta> Clone for Lazy<T> {
+ fn clone(&self) -> Self {
+ *self
+ }
+}
+
+impl<T: ?Sized + LazyMeta> rustc_serialize::UseSpecializedEncodable for Lazy<T> {}
+impl<T: ?Sized + LazyMeta> rustc_serialize::UseSpecializedDecodable for Lazy<T> {}
+
+/// Encoding / decoding state for `Lazy`.
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+enum LazyState {
+ /// Outside of a metadata node.
+ NoNode,
+
+ /// Inside a metadata node, and before any `Lazy`.
+ /// The position is that of the node itself.
+ NodeStart(NonZeroUsize),
+
+ /// Inside a metadata node, with a previous `Lazy`.
+ /// The position is a conservative estimate of where that
+ /// previous `Lazy` would end (see their comments).
+ Previous(NonZeroUsize),
+}
+
+// FIXME(#59875) `Lazy!(T)` replaces `Lazy<T>`, passing the `Meta` parameter
+// manually, instead of relying on the default, to get the correct variance.
+// Only needed when `T` itself contains a parameter (e.g. `'tcx`).
+macro_rules! Lazy {
+ (Table<$T:ty>) => {Lazy<Table<$T>, usize>};
+ (PerDefTable<$T:ty>) => {Lazy<PerDefTable<$T>, usize>};
+ ([$T:ty]) => {Lazy<[$T], usize>};
+ ($T:ty) => {Lazy<$T, ()>};
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+crate struct CrateRoot<'tcx> {
+ pub name: Symbol,
+ pub triple: TargetTriple,
+ extra_filename: String,
+ pub hash: Svh,
+ pub disambiguator: CrateDisambiguator,
+ pub panic_strategy: PanicStrategy,
+ edition: Edition,
+ pub has_global_allocator: bool,
+ has_panic_handler: bool,
+ pub has_default_lib_allocator: bool,
+ plugin_registrar_fn: Option<DefIndex>,
+ proc_macro_decls_static: Option<DefIndex>,
+ proc_macro_stability: Option<attr::Stability>,
+
+ pub crate_deps: Lazy<[CrateDep]>,
+ dylib_dependency_formats: Lazy<[Option<LinkagePreference>]>,
+ lib_features: Lazy<[(Symbol, Option<Symbol>)]>,
+ lang_items: Lazy<[(DefIndex, usize)]>,
+ lang_items_missing: Lazy<[lang_items::LangItem]>,
+ diagnostic_items: Lazy<[(Symbol, DefIndex)]>,
+ native_libraries: Lazy<[NativeLibrary]>,
+ foreign_modules: Lazy<[ForeignModule]>,
+ source_map: Lazy<[syntax_pos::SourceFile]>,
+ pub def_path_table: Lazy<hir::map::definitions::DefPathTable>,
+ pub impls: Lazy<[TraitImpls]>,
+ exported_symbols: Lazy!([(ExportedSymbol<'tcx>, SymbolExportLevel)]),
+ pub interpret_alloc_index: Lazy<[u32]>,
+
+ per_def: LazyPerDefTables<'tcx>,
+
+    /// The `DefIndex`es of any proc macros declared by
+    /// this crate.
+ pub proc_macro_data: Option<Lazy<[DefIndex]>>,
+
+ compiler_builtins: bool,
+ pub needs_allocator: bool,
+ pub needs_panic_runtime: bool,
+ no_builtins: bool,
+ pub panic_runtime: bool,
+ pub profiler_runtime: bool,
+ pub sanitizer_runtime: bool,
+ symbol_mangling_version: SymbolManglingVersion,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+crate struct CrateDep {
+ pub name: ast::Name,
+ pub hash: Svh,
+ pub host_hash: Option<Svh>,
+ pub kind: DepKind,
+ pub extra_filename: String,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+crate struct TraitImpls {
+ pub trait_id: (u32, DefIndex),
+ pub impls: Lazy<[DefIndex]>,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+crate struct LazyPerDefTables<'tcx> {
+ kind: Lazy!(PerDefTable<Lazy!(EntryKind<'tcx>)>),
+ visibility: Lazy!(PerDefTable<Lazy<ty::Visibility>>),
+ span: Lazy!(PerDefTable<Lazy<Span>>),
+ attributes: Lazy!(PerDefTable<Lazy<[ast::Attribute]>>),
+ children: Lazy!(PerDefTable<Lazy<[DefIndex]>>),
+ stability: Lazy!(PerDefTable<Lazy<attr::Stability>>),
+ deprecation: Lazy!(PerDefTable<Lazy<attr::Deprecation>>),
+ ty: Lazy!(PerDefTable<Lazy!(Ty<'tcx>)>),
+ fn_sig: Lazy!(PerDefTable<Lazy!(ty::PolyFnSig<'tcx>)>),
+ impl_trait_ref: Lazy!(PerDefTable<Lazy!(ty::TraitRef<'tcx>)>),
+ inherent_impls: Lazy!(PerDefTable<Lazy<[DefIndex]>>),
+ variances: Lazy!(PerDefTable<Lazy<[ty::Variance]>>),
+ generics: Lazy!(PerDefTable<Lazy<ty::Generics>>),
+ explicit_predicates: Lazy!(PerDefTable<Lazy!(ty::GenericPredicates<'tcx>)>),
+ // FIXME(eddyb) this would ideally be `Lazy<[...]>` but `ty::Predicate`
+ // doesn't handle shorthands in its own (de)serialization impls,
+ // as it's an `enum` for which we want to derive (de)serialization,
+ // so the `ty::codec` APIs handle the whole `&'tcx [...]` at once.
+ // Also, as an optimization, a missing entry indicates an empty `&[]`.
+ inferred_outlives: Lazy!(PerDefTable<Lazy!(&'tcx [(ty::Predicate<'tcx>, Span)])>),
+ super_predicates: Lazy!(PerDefTable<Lazy!(ty::GenericPredicates<'tcx>)>),
+ mir: Lazy!(PerDefTable<Lazy!(mir::Body<'tcx>)>),
+ promoted_mir: Lazy!(PerDefTable<Lazy!(IndexVec<mir::Promoted, mir::Body<'tcx>>)>),
+}
+
+#[derive(Copy, Clone, RustcEncodable, RustcDecodable)]
+enum EntryKind<'tcx> {
+ Const(ConstQualif, Lazy<RenderedConst>),
+ ImmStatic,
+ MutStatic,
+ ForeignImmStatic,
+ ForeignMutStatic,
+ ForeignMod,
+ ForeignType,
+ GlobalAsm,
+ Type,
+ TypeParam,
+ ConstParam,
+ OpaqueTy,
+ Enum(ReprOptions),
+ Field,
+ Variant(Lazy<VariantData>),
+ Struct(Lazy<VariantData>, ReprOptions),
+ Union(Lazy<VariantData>, ReprOptions),
+ Fn(Lazy<FnData>),
+ ForeignFn(Lazy<FnData>),
+ Mod(Lazy<ModData>),
+ MacroDef(Lazy<MacroDef>),
+ Closure,
+ Generator(Lazy!(GeneratorData<'tcx>)),
+ Trait(Lazy<TraitData>),
+ Impl(Lazy<ImplData>),
+ Method(Lazy<MethodData>),
+ AssocType(AssocContainer),
+ AssocOpaqueTy(AssocContainer),
+ AssocConst(AssocContainer, ConstQualif, Lazy<RenderedConst>),
+ TraitAlias,
+}
+
+/// Additional data for `EntryKind::Const` and `EntryKind::AssocConst`.
+#[derive(Clone, Copy, RustcEncodable, RustcDecodable)]
+struct ConstQualif {
+ mir: u8,
+}
+
+/// Contains a constant which has been rendered to a String.
+/// Used by rustdoc.
+#[derive(RustcEncodable, RustcDecodable)]
+struct RenderedConst(String);
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct ModData {
+ reexports: Lazy<[def::Export<hir::HirId>]>,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct MacroDef {
+ body: String,
+ legacy: bool,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct FnData {
+ asyncness: hir::IsAsync,
+ constness: hir::Constness,
+ param_names: Lazy<[ast::Name]>,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct VariantData {
+ ctor_kind: CtorKind,
+ discr: ty::VariantDiscr,
+    /// If this is a unit or tuple variant/struct, then this is the index of the ctor id.
+ ctor: Option<DefIndex>,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct TraitData {
+ unsafety: hir::Unsafety,
+ paren_sugar: bool,
+ has_auto_impl: bool,
+ is_marker: bool,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct ImplData {
+ polarity: ty::ImplPolarity,
+ defaultness: hir::Defaultness,
+ parent_impl: Option<DefId>,
+
+ /// This is `Some` only for impls of `CoerceUnsized`.
+ // FIXME(eddyb) perhaps compute this on the fly if cheap enough?
+ coerce_unsized_info: Option<ty::adjustment::CoerceUnsizedInfo>,
+}
+
+
+/// Describes whether the container of an associated item
+/// is a trait or an impl and whether, in a trait, it has
+/// a default, or, in an impl, whether it's marked "default".
+#[derive(Copy, Clone, RustcEncodable, RustcDecodable)]
+enum AssocContainer {
+ TraitRequired,
+ TraitWithDefault,
+ ImplDefault,
+ ImplFinal,
+}
+
+impl AssocContainer {
+ fn with_def_id(&self, def_id: DefId) -> ty::AssocItemContainer {
+ match *self {
+ AssocContainer::TraitRequired |
+ AssocContainer::TraitWithDefault => ty::TraitContainer(def_id),
+
+ AssocContainer::ImplDefault |
+ AssocContainer::ImplFinal => ty::ImplContainer(def_id),
+ }
+ }
+
+ fn defaultness(&self) -> hir::Defaultness {
+ match *self {
+ AssocContainer::TraitRequired => hir::Defaultness::Default {
+ has_value: false,
+ },
+
+ AssocContainer::TraitWithDefault |
+ AssocContainer::ImplDefault => hir::Defaultness::Default {
+ has_value: true,
+ },
+
+ AssocContainer::ImplFinal => hir::Defaultness::Final,
+ }
+ }
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct MethodData {
+ fn_data: FnData,
+ container: AssocContainer,
+ has_self: bool,
+}
+
+#[derive(RustcEncodable, RustcDecodable)]
+struct GeneratorData<'tcx> {
+ layout: mir::GeneratorLayout<'tcx>,
+}
+
+// Tags used for encoding Spans:
+const TAG_VALID_SPAN: u8 = 0;
+const TAG_INVALID_SPAN: u8 = 1;
--- /dev/null
+use crate::rmeta::*;
+
+use rustc::hir::def_id::{DefId, DefIndex};
+use rustc_serialize::{Encodable, opaque::Encoder};
+use std::convert::TryInto;
+use std::marker::PhantomData;
+use std::num::NonZeroUsize;
+use log::debug;
+
+/// Helper trait for encoding to, and decoding from, a fixed number of bytes.
+/// Used mainly for `Lazy` positions and lengths.
+/// Unchecked invariant: `Self::default()` should encode as `[0; BYTE_LEN]`,
+/// but this has no impact on safety.
+pub(super) trait FixedSizeEncoding: Default {
+ const BYTE_LEN: usize;
+
+ // FIXME(eddyb) convert to and from `[u8; Self::BYTE_LEN]` instead,
+ // once that starts being allowed by the compiler (i.e. lazy normalization).
+ fn from_bytes(b: &[u8]) -> Self;
+ fn write_to_bytes(self, b: &mut [u8]);
+
+ // FIXME(eddyb) make these generic functions, or at least defaults here.
+ // (same problem as above, needs `[u8; Self::BYTE_LEN]`)
+ // For now, a macro (`fixed_size_encoding_byte_len_and_defaults`) is used.
+
+ /// Read a `Self` value (encoded as `Self::BYTE_LEN` bytes),
+ /// from `&b[i * Self::BYTE_LEN..]`, returning `None` if `i`
+ /// is not in bounds, or `Some(Self::from_bytes(...))` otherwise.
+ fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self>;
+ /// Write a `Self` value (encoded as `Self::BYTE_LEN` bytes),
+ /// at `&mut b[i * Self::BYTE_LEN..]`, using `Self::write_to_bytes`.
+ fn write_to_bytes_at(self, b: &mut [u8], i: usize);
+}
+
+// HACK(eddyb) this shouldn't be needed (see comments on the methods above).
+macro_rules! fixed_size_encoding_byte_len_and_defaults {
+ ($byte_len:expr) => {
+ const BYTE_LEN: usize = $byte_len;
+ fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self> {
+ const BYTE_LEN: usize = $byte_len;
+ // HACK(eddyb) ideally this would be done with fully safe code,
+ // but slicing `[u8]` with `i * N..` is optimized worse, due to the
+ // possibility of `i * N` overflowing, than indexing `[[u8; N]]`.
+ let b = unsafe {
+ std::slice::from_raw_parts(
+ b.as_ptr() as *const [u8; BYTE_LEN],
+ b.len() / BYTE_LEN,
+ )
+ };
+ b.get(i).map(|b| FixedSizeEncoding::from_bytes(b))
+ }
+ fn write_to_bytes_at(self, b: &mut [u8], i: usize) {
+ const BYTE_LEN: usize = $byte_len;
+ // HACK(eddyb) ideally this would be done with fully safe code,
+            // see similar comment in `maybe_read_from_bytes_at` for why it can't yet.
+ let b = unsafe {
+ std::slice::from_raw_parts_mut(
+ b.as_mut_ptr() as *mut [u8; BYTE_LEN],
+ b.len() / BYTE_LEN,
+ )
+ };
+ self.write_to_bytes(&mut b[i]);
+ }
+ }
+}
+
+impl FixedSizeEncoding for u32 {
+ fixed_size_encoding_byte_len_and_defaults!(4);
+
+ fn from_bytes(b: &[u8]) -> Self {
+ let mut bytes = [0; Self::BYTE_LEN];
+ bytes.copy_from_slice(&b[..Self::BYTE_LEN]);
+ Self::from_le_bytes(bytes)
+ }
+
+ fn write_to_bytes(self, b: &mut [u8]) {
+ b[..Self::BYTE_LEN].copy_from_slice(&self.to_le_bytes());
+ }
+}
+
+// NOTE(eddyb) there could be an impl for `usize`, which would enable a more
+// generic `Lazy<T>` impl, but in the general case we might not need / want to
+// fit every `usize` in `u32`.
+impl<T: Encodable> FixedSizeEncoding for Option<Lazy<T>> {
+ fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN);
+
+ fn from_bytes(b: &[u8]) -> Self {
+ Some(Lazy::from_position(NonZeroUsize::new(u32::from_bytes(b) as usize)?))
+ }
+
+ fn write_to_bytes(self, b: &mut [u8]) {
+ let position = self.map_or(0, |lazy| lazy.position.get());
+ let position: u32 = position.try_into().unwrap();
+
+ position.write_to_bytes(b)
+ }
+}
+
+impl<T: Encodable> FixedSizeEncoding for Option<Lazy<[T]>> {
+ fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN * 2);
+
+ fn from_bytes(b: &[u8]) -> Self {
+ Some(Lazy::from_position_and_meta(
+ <Option<Lazy<T>>>::from_bytes(b)?.position,
+ u32::from_bytes(&b[u32::BYTE_LEN..]) as usize,
+ ))
+ }
+
+ fn write_to_bytes(self, b: &mut [u8]) {
+ self.map(|lazy| Lazy::<T>::from_position(lazy.position))
+ .write_to_bytes(b);
+
+ let len = self.map_or(0, |lazy| lazy.meta);
+ let len: u32 = len.try_into().unwrap();
+
+ len.write_to_bytes(&mut b[u32::BYTE_LEN..]);
+ }
+}
+
+/// Random-access table (i.e. offering constant-time `get`/`set`), similar to
+/// `Vec<Option<T>>`, but without requiring encoding or decoding all the values
+/// eagerly and in-order.
+/// A total of `(max_idx + 1) * <Option<T> as FixedSizeEncoding>::BYTE_LEN` bytes
+/// are used for a table, where `max_idx` is the largest index passed to `set`.
+// FIXME(eddyb) replace `Vec` with `[_]` here, such that `Box<Table<T>>` would be used
+// when building it, and `Lazy<Table<T>>` or `&Table<T>` when reading it.
+// (not sure if that is possible given that the `Vec` is being resized now)
+pub(super) struct Table<T> where Option<T>: FixedSizeEncoding {
+ // FIXME(eddyb) store `[u8; <Option<T>>::BYTE_LEN]` instead of `u8` in `Vec`,
+ // once that starts being allowed by the compiler (i.e. lazy normalization).
+ bytes: Vec<u8>,
+ _marker: PhantomData<T>,
+}
+
+impl<T> Default for Table<T> where Option<T>: FixedSizeEncoding {
+ fn default() -> Self {
+ Table {
+ bytes: vec![],
+ _marker: PhantomData,
+ }
+ }
+}
+
+impl<T> Table<T> where Option<T>: FixedSizeEncoding {
+ fn set(&mut self, i: usize, value: T) {
+ // FIXME(eddyb) investigate more compact encodings for sparse tables.
+ // On the PR @michaelwoerister mentioned:
+ // > Space requirements could perhaps be optimized by using the HAMT `popcnt`
+ // > trick (i.e. divide things into buckets of 32 or 64 items and then
+ // > store bit-masks of which item in each bucket is actually serialized).
+ let needed = (i + 1) * <Option<T>>::BYTE_LEN;
+ if self.bytes.len() < needed {
+ self.bytes.resize(needed, 0);
+ }
+
+ Some(value).write_to_bytes_at(&mut self.bytes, i);
+ }
+
+ fn encode(&self, buf: &mut Encoder) -> Lazy<Self> {
+ let pos = buf.position();
+ buf.emit_raw_bytes(&self.bytes);
+ Lazy::from_position_and_meta(
+ NonZeroUsize::new(pos as usize).unwrap(),
+ self.bytes.len(),
+ )
+ }
+}
+
+impl<T> LazyMeta for Table<T> where Option<T>: FixedSizeEncoding {
+ type Meta = usize;
+
+ fn min_size(len: usize) -> usize {
+ len
+ }
+}
+
+impl<T> Lazy<Table<T>> where Option<T>: FixedSizeEncoding {
+ /// Given the metadata, extract out the value at a particular index (if any).
+ #[inline(never)]
+ fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(
+ &self,
+ metadata: M,
+ i: usize,
+ ) -> Option<T> {
+        debug!("Table::get: index={:?} len={:?}", i, self.meta);
+
+ let start = self.position.get();
+ let bytes = &metadata.raw_bytes()[start..start + self.meta];
+ <Option<T>>::maybe_read_from_bytes_at(bytes, i)?
+ }
+}
+
+/// Like a `Table` but using `DefIndex` instead of `usize` as keys.
+// FIXME(eddyb) replace by making `Table` behave like `IndexVec`,
+// and by using `newtype_index!` to define `DefIndex`.
+pub(super) struct PerDefTable<T>(Table<T>) where Option<T>: FixedSizeEncoding;
+
+impl<T> Default for PerDefTable<T> where Option<T>: FixedSizeEncoding {
+ fn default() -> Self {
+ PerDefTable(Table::default())
+ }
+}
+
+impl<T> PerDefTable<T> where Option<T>: FixedSizeEncoding {
+ pub(super) fn set(&mut self, def_id: DefId, value: T) {
+ assert!(def_id.is_local());
+ self.0.set(def_id.index.index(), value);
+ }
+
+ pub(super) fn encode(&self, buf: &mut Encoder) -> Lazy<Self> {
+ let lazy = self.0.encode(buf);
+ Lazy::from_position_and_meta(lazy.position, lazy.meta)
+ }
+}
+
+impl<T> LazyMeta for PerDefTable<T> where Option<T>: FixedSizeEncoding {
+ type Meta = <Table<T> as LazyMeta>::Meta;
+
+ fn min_size(meta: Self::Meta) -> usize {
+ Table::<T>::min_size(meta)
+ }
+}
+
+impl<T> Lazy<PerDefTable<T>> where Option<T>: FixedSizeEncoding {
+ fn as_table(&self) -> Lazy<Table<T>> {
+ Lazy::from_position_and_meta(self.position, self.meta)
+ }
+
+ /// Given the metadata, extract out the value at a particular DefIndex (if any).
+ #[inline(never)]
+ pub(super) fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(
+ &self,
+ metadata: M,
+ def_index: DefIndex,
+ ) -> Option<T> {
+ self.as_table().get(metadata, def_index.index())
+ }
+}
+++ /dev/null
-use crate::table::PerDefTable;
-
-use rustc::hir;
-use rustc::hir::def::{self, CtorKind};
-use rustc::hir::def_id::{DefIndex, DefId};
-use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel};
-use rustc::middle::cstore::{DepKind, LinkagePreference, NativeLibrary, ForeignModule};
-use rustc::middle::lang_items;
-use rustc::mir;
-use rustc::session::CrateDisambiguator;
-use rustc::session::config::SymbolManglingVersion;
-use rustc::ty::{self, Ty, ReprOptions};
-use rustc_target::spec::{PanicStrategy, TargetTriple};
-use rustc_index::vec::IndexVec;
-use rustc_data_structures::svh::Svh;
-
-use rustc_serialize::Encodable;
-use syntax::{ast, attr};
-use syntax::edition::Edition;
-use syntax::symbol::Symbol;
-use syntax_pos::{self, Span};
-
-use std::marker::PhantomData;
-use std::num::NonZeroUsize;
-
-crate fn rustc_version() -> String {
- format!("rustc {}",
- option_env!("CFG_VERSION").unwrap_or("unknown version"))
-}
-
-/// Metadata encoding version.
-/// N.B., increment this if you change the format of metadata such that
-/// the rustc version can't be found to compare with `rustc_version()`.
-const METADATA_VERSION: u8 = 4;
-
-/// Metadata header which includes `METADATA_VERSION`.
-/// To get older versions of rustc to ignore this metadata,
-/// there are 4 zero bytes at the start, which are treated
-/// as a length of 0 by old compilers.
-///
-/// This header is followed by the position of the `CrateRoot`,
-/// which is encoded as a 32-bit big-endian unsigned integer,
-/// and further followed by the rustc version string.
-crate const METADATA_HEADER: &[u8; 12] =
- &[0, 0, 0, 0, b'r', b'u', b's', b't', 0, 0, 0, METADATA_VERSION];
-
-/// Additional metadata for a `Lazy<T>` where `T` may not be `Sized`,
-/// e.g. for `Lazy<[T]>`, this is the length (count of `T` values).
-crate trait LazyMeta {
- type Meta: Copy + 'static;
-
- /// Returns the minimum encoded size.
- // FIXME(eddyb) Give better estimates for certain types.
- fn min_size(meta: Self::Meta) -> usize;
-}
-
-impl<T: Encodable> LazyMeta for T {
- type Meta = ();
-
- fn min_size(_: ()) -> usize {
- assert_ne!(std::mem::size_of::<T>(), 0);
- 1
- }
-}
-
-impl<T: Encodable> LazyMeta for [T] {
- type Meta = usize;
-
- fn min_size(len: usize) -> usize {
- len * T::min_size(())
- }
-}
-
-/// A value of type T referred to by its absolute position
-/// in the metadata, and which can be decoded lazily.
-///
-/// Metadata is effective a tree, encoded in post-order,
-/// and with the root's position written next to the header.
-/// That means every single `Lazy` points to some previous
-/// location in the metadata and is part of a larger node.
-///
-/// The first `Lazy` in a node is encoded as the backwards
-/// distance from the position where the containing node
-/// starts and where the `Lazy` points to, while the rest
-/// use the forward distance from the previous `Lazy`.
-/// Distances start at 1, as 0-byte nodes are invalid.
-/// Also invalid are nodes being referred in a different
-/// order than they were encoded in.
-///
-/// # Sequences (`Lazy<[T]>`)
-///
-/// Unlike `Lazy<Vec<T>>`, the length is encoded next to the
-/// position, not at the position, which means that the length
-/// doesn't need to be known before encoding all the elements.
-///
-/// If the length is 0, no position is encoded, but otherwise,
-/// the encoding is that of `Lazy`, with the distinction that
-/// the minimal distance the length of the sequence, i.e.
-/// it's assumed there's no 0-byte element in the sequence.
-#[must_use]
-// FIXME(#59875) the `Meta` parameter only exists to dodge
-// invariance wrt `T` (coming from the `meta: T::Meta` field).
-crate struct Lazy<T, Meta = <T as LazyMeta>::Meta>
- where T: ?Sized + LazyMeta<Meta = Meta>,
- Meta: 'static + Copy,
-{
- pub position: NonZeroUsize,
- pub meta: Meta,
- _marker: PhantomData<T>,
-}
-
-impl<T: ?Sized + LazyMeta> Lazy<T> {
- crate fn from_position_and_meta(position: NonZeroUsize, meta: T::Meta) -> Lazy<T> {
- Lazy {
- position,
- meta,
- _marker: PhantomData,
- }
- }
-}
-
-impl<T: Encodable> Lazy<T> {
- crate fn from_position(position: NonZeroUsize) -> Lazy<T> {
- Lazy::from_position_and_meta(position, ())
- }
-}
-
-impl<T: Encodable> Lazy<[T]> {
- crate fn empty() -> Lazy<[T]> {
- Lazy::from_position_and_meta(NonZeroUsize::new(1).unwrap(), 0)
- }
-}
-
-impl<T: ?Sized + LazyMeta> Copy for Lazy<T> {}
-impl<T: ?Sized + LazyMeta> Clone for Lazy<T> {
- fn clone(&self) -> Self {
- *self
- }
-}
-
-impl<T: ?Sized + LazyMeta> rustc_serialize::UseSpecializedEncodable for Lazy<T> {}
-impl<T: ?Sized + LazyMeta> rustc_serialize::UseSpecializedDecodable for Lazy<T> {}
-
-/// Encoding / decoding state for `Lazy`.
-#[derive(Copy, Clone, PartialEq, Eq, Debug)]
-crate enum LazyState {
- /// Outside of a metadata node.
- NoNode,
-
- /// Inside a metadata node, and before any `Lazy`.
- /// The position is that of the node itself.
- NodeStart(NonZeroUsize),
-
- /// Inside a metadata node, with a previous `Lazy`.
- /// The position is a conservative estimate of where that
- /// previous `Lazy` would end (see their comments).
- Previous(NonZeroUsize),
-}
-
-// FIXME(#59875) `Lazy!(T)` replaces `Lazy<T>`, passing the `Meta` parameter
-// manually, instead of relying on the default, to get the correct variance.
-// Only needed when `T` itself contains a parameter (e.g. `'tcx`).
-macro_rules! Lazy {
- (Table<$T:ty>) => {Lazy<Table<$T>, usize>};
- (PerDefTable<$T:ty>) => {Lazy<PerDefTable<$T>, usize>};
- ([$T:ty]) => {Lazy<[$T], usize>};
- ($T:ty) => {Lazy<$T, ()>};
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct CrateRoot<'tcx> {
- pub name: Symbol,
- pub triple: TargetTriple,
- pub extra_filename: String,
- pub hash: Svh,
- pub disambiguator: CrateDisambiguator,
- pub panic_strategy: PanicStrategy,
- pub edition: Edition,
- pub has_global_allocator: bool,
- pub has_panic_handler: bool,
- pub has_default_lib_allocator: bool,
- pub plugin_registrar_fn: Option<DefIndex>,
- pub proc_macro_decls_static: Option<DefIndex>,
- pub proc_macro_stability: Option<attr::Stability>,
-
- pub crate_deps: Lazy<[CrateDep]>,
- pub dylib_dependency_formats: Lazy<[Option<LinkagePreference>]>,
- pub lib_features: Lazy<[(Symbol, Option<Symbol>)]>,
- pub lang_items: Lazy<[(DefIndex, usize)]>,
- pub lang_items_missing: Lazy<[lang_items::LangItem]>,
- pub diagnostic_items: Lazy<[(Symbol, DefIndex)]>,
- pub native_libraries: Lazy<[NativeLibrary]>,
- pub foreign_modules: Lazy<[ForeignModule]>,
- pub source_map: Lazy<[syntax_pos::SourceFile]>,
- pub def_path_table: Lazy<hir::map::definitions::DefPathTable>,
- pub impls: Lazy<[TraitImpls]>,
- pub exported_symbols: Lazy!([(ExportedSymbol<'tcx>, SymbolExportLevel)]),
- pub interpret_alloc_index: Lazy<[u32]>,
-
- pub per_def: LazyPerDefTables<'tcx>,
-
- /// The DefIndex's of any proc macros delcared by
- /// this crate
- pub proc_macro_data: Option<Lazy<[DefIndex]>>,
-
- pub compiler_builtins: bool,
- pub needs_allocator: bool,
- pub needs_panic_runtime: bool,
- pub no_builtins: bool,
- pub panic_runtime: bool,
- pub profiler_runtime: bool,
- pub sanitizer_runtime: bool,
- pub symbol_mangling_version: SymbolManglingVersion,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct CrateDep {
- pub name: ast::Name,
- pub hash: Svh,
- pub host_hash: Option<Svh>,
- pub kind: DepKind,
- pub extra_filename: String,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct TraitImpls {
- pub trait_id: (u32, DefIndex),
- pub impls: Lazy<[DefIndex]>,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct LazyPerDefTables<'tcx> {
- pub kind: Lazy!(PerDefTable<Lazy!(EntryKind<'tcx>)>),
- pub visibility: Lazy!(PerDefTable<Lazy<ty::Visibility>>),
- pub span: Lazy!(PerDefTable<Lazy<Span>>),
- pub attributes: Lazy!(PerDefTable<Lazy<[ast::Attribute]>>),
- pub children: Lazy!(PerDefTable<Lazy<[DefIndex]>>),
- pub stability: Lazy!(PerDefTable<Lazy<attr::Stability>>),
- pub deprecation: Lazy!(PerDefTable<Lazy<attr::Deprecation>>),
-
- pub ty: Lazy!(PerDefTable<Lazy!(Ty<'tcx>)>),
- pub fn_sig: Lazy!(PerDefTable<Lazy!(ty::PolyFnSig<'tcx>)>),
- pub impl_trait_ref: Lazy!(PerDefTable<Lazy!(ty::TraitRef<'tcx>)>),
- pub inherent_impls: Lazy!(PerDefTable<Lazy<[DefIndex]>>),
- pub variances: Lazy!(PerDefTable<Lazy<[ty::Variance]>>),
- pub generics: Lazy!(PerDefTable<Lazy<ty::Generics>>),
- pub predicates: Lazy!(PerDefTable<Lazy!(ty::GenericPredicates<'tcx>)>),
- pub predicates_defined_on: Lazy!(PerDefTable<Lazy!(ty::GenericPredicates<'tcx>)>),
- pub super_predicates: Lazy!(PerDefTable<Lazy!(ty::GenericPredicates<'tcx>)>),
-
- pub mir: Lazy!(PerDefTable<Lazy!(mir::Body<'tcx>)>),
- pub promoted_mir: Lazy!(PerDefTable<Lazy!(IndexVec<mir::Promoted, mir::Body<'tcx>>)>),
-}
-
-#[derive(Copy, Clone, RustcEncodable, RustcDecodable)]
-crate enum EntryKind<'tcx> {
- Const(ConstQualif, Lazy<RenderedConst>),
- ImmStatic,
- MutStatic,
- ForeignImmStatic,
- ForeignMutStatic,
- ForeignMod,
- ForeignType,
- GlobalAsm,
- Type,
- TypeParam,
- ConstParam,
- OpaqueTy,
- Enum(ReprOptions),
- Field,
- Variant(Lazy<VariantData>),
- Struct(Lazy<VariantData>, ReprOptions),
- Union(Lazy<VariantData>, ReprOptions),
- Fn(Lazy<FnData>),
- ForeignFn(Lazy<FnData>),
- Mod(Lazy<ModData>),
- MacroDef(Lazy<MacroDef>),
- Closure,
- Generator(Lazy!(GeneratorData<'tcx>)),
- Trait(Lazy<TraitData>),
- Impl(Lazy<ImplData>),
- Method(Lazy<MethodData>),
- AssocType(AssocContainer),
- AssocOpaqueTy(AssocContainer),
- AssocConst(AssocContainer, ConstQualif, Lazy<RenderedConst>),
- TraitAlias,
-}
-
-/// Additional data for EntryKind::Const and EntryKind::AssocConst
-#[derive(Clone, Copy, RustcEncodable, RustcDecodable)]
-crate struct ConstQualif {
- pub mir: u8,
-}
-
-/// Contains a constant which has been rendered to a String.
-/// Used by rustdoc.
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct RenderedConst(pub String);
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct ModData {
- pub reexports: Lazy<[def::Export<hir::HirId>]>,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct MacroDef {
- pub body: String,
- pub legacy: bool,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct FnData {
- pub asyncness: hir::IsAsync,
- pub constness: hir::Constness,
- pub param_names: Lazy<[ast::Name]>,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct VariantData {
- pub ctor_kind: CtorKind,
- pub discr: ty::VariantDiscr,
- /// If this is unit or tuple-variant/struct, then this is the index of the ctor id.
- pub ctor: Option<DefIndex>,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct TraitData {
- pub unsafety: hir::Unsafety,
- pub paren_sugar: bool,
- pub has_auto_impl: bool,
- pub is_marker: bool,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct ImplData {
- pub polarity: ty::ImplPolarity,
- pub defaultness: hir::Defaultness,
- pub parent_impl: Option<DefId>,
-
- /// This is `Some` only for impls of `CoerceUnsized`.
- // FIXME(eddyb) perhaps compute this on the fly if cheap enough?
- pub coerce_unsized_info: Option<ty::adjustment::CoerceUnsizedInfo>,
-}
-
-
-/// Describes whether the container of an associated item
-/// is a trait or an impl and whether, in a trait, it has
-/// a default, or an in impl, whether it's marked "default".
-#[derive(Copy, Clone, RustcEncodable, RustcDecodable)]
-crate enum AssocContainer {
- TraitRequired,
- TraitWithDefault,
- ImplDefault,
- ImplFinal,
-}
-
-impl AssocContainer {
- crate fn with_def_id(&self, def_id: DefId) -> ty::AssocItemContainer {
- match *self {
- AssocContainer::TraitRequired |
- AssocContainer::TraitWithDefault => ty::TraitContainer(def_id),
-
- AssocContainer::ImplDefault |
- AssocContainer::ImplFinal => ty::ImplContainer(def_id),
- }
- }
-
- crate fn defaultness(&self) -> hir::Defaultness {
- match *self {
- AssocContainer::TraitRequired => hir::Defaultness::Default {
- has_value: false,
- },
-
- AssocContainer::TraitWithDefault |
- AssocContainer::ImplDefault => hir::Defaultness::Default {
- has_value: true,
- },
-
- AssocContainer::ImplFinal => hir::Defaultness::Final,
- }
- }
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct MethodData {
- pub fn_data: FnData,
- pub container: AssocContainer,
- pub has_self: bool,
-}
-
-#[derive(RustcEncodable, RustcDecodable)]
-crate struct GeneratorData<'tcx> {
- pub layout: mir::GeneratorLayout<'tcx>,
-}
-
-// Tags used for encoding Spans:
-crate const TAG_VALID_SPAN: u8 = 0;
-crate const TAG_INVALID_SPAN: u8 = 1;
+++ /dev/null
-use crate::decoder::Metadata;
-use crate::schema::*;
-
-use rustc::hir::def_id::{DefId, DefIndex};
-use rustc_serialize::{Encodable, opaque::Encoder};
-use std::convert::TryInto;
-use std::marker::PhantomData;
-use std::num::NonZeroUsize;
-use log::debug;
-
-/// Helper trait, for encoding to, and decoding from, a fixed number of bytes.
-/// Used mainly for Lazy positions and lengths.
-/// Unchecked invariant: `Self::default()` should encode as `[0; BYTE_LEN]`,
-/// but this has no impact on safety.
-crate trait FixedSizeEncoding: Default {
- const BYTE_LEN: usize;
-
- // FIXME(eddyb) convert to and from `[u8; Self::BYTE_LEN]` instead,
- // once that starts being allowed by the compiler (i.e. lazy normalization).
- fn from_bytes(b: &[u8]) -> Self;
- fn write_to_bytes(self, b: &mut [u8]);
-
- // FIXME(eddyb) make these generic functions, or at least defaults here.
- // (same problem as above, needs `[u8; Self::BYTE_LEN]`)
- // For now, a macro (`fixed_size_encoding_byte_len_and_defaults`) is used.
-
- /// Read a `Self` value (encoded as `Self::BYTE_LEN` bytes),
- /// from `&b[i * Self::BYTE_LEN..]`, returning `None` if `i`
- /// is not in bounds, or `Some(Self::from_bytes(...))` otherwise.
- fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self>;
- /// Write a `Self` value (encoded as `Self::BYTE_LEN` bytes),
- /// at `&mut b[i * Self::BYTE_LEN..]`, using `Self::write_to_bytes`.
- fn write_to_bytes_at(self, b: &mut [u8], i: usize);
-}
-
-// HACK(eddyb) this shouldn't be needed (see comments on the methods above).
-macro_rules! fixed_size_encoding_byte_len_and_defaults {
- ($byte_len:expr) => {
- const BYTE_LEN: usize = $byte_len;
- fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self> {
- const BYTE_LEN: usize = $byte_len;
- // HACK(eddyb) ideally this would be done with fully safe code,
- // but slicing `[u8]` with `i * N..` is optimized worse, due to the
- // possibility of `i * N` overflowing, than indexing `[[u8; N]]`.
- let b = unsafe {
- std::slice::from_raw_parts(
- b.as_ptr() as *const [u8; BYTE_LEN],
- b.len() / BYTE_LEN,
- )
- };
- b.get(i).map(|b| FixedSizeEncoding::from_bytes(b))
- }
- fn write_to_bytes_at(self, b: &mut [u8], i: usize) {
- const BYTE_LEN: usize = $byte_len;
- // HACK(eddyb) ideally this would be done with fully safe code,
- // see similar comment in `read_from_bytes_at` for why it can't yet.
- let b = unsafe {
- std::slice::from_raw_parts_mut(
- b.as_mut_ptr() as *mut [u8; BYTE_LEN],
- b.len() / BYTE_LEN,
- )
- };
- self.write_to_bytes(&mut b[i]);
- }
- }
-}
-
-impl FixedSizeEncoding for u32 {
- fixed_size_encoding_byte_len_and_defaults!(4);
-
- fn from_bytes(b: &[u8]) -> Self {
- let mut bytes = [0; Self::BYTE_LEN];
- bytes.copy_from_slice(&b[..Self::BYTE_LEN]);
- Self::from_le_bytes(bytes)
- }
-
- fn write_to_bytes(self, b: &mut [u8]) {
- b[..Self::BYTE_LEN].copy_from_slice(&self.to_le_bytes());
- }
-}
-
-// NOTE(eddyb) there could be an impl for `usize`, which would enable a more
-// generic `Lazy<T>` impl, but in the general case we might not need / want to
-// fit every `usize` in `u32`.
-impl<T: Encodable> FixedSizeEncoding for Option<Lazy<T>> {
- fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN);
-
- fn from_bytes(b: &[u8]) -> Self {
- Some(Lazy::from_position(NonZeroUsize::new(u32::from_bytes(b) as usize)?))
- }
-
- fn write_to_bytes(self, b: &mut [u8]) {
- let position = self.map_or(0, |lazy| lazy.position.get());
- let position: u32 = position.try_into().unwrap();
-
- position.write_to_bytes(b)
- }
-}
-
-impl<T: Encodable> FixedSizeEncoding for Option<Lazy<[T]>> {
- fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN * 2);
-
- fn from_bytes(b: &[u8]) -> Self {
- Some(Lazy::from_position_and_meta(
- <Option<Lazy<T>>>::from_bytes(b)?.position,
- u32::from_bytes(&b[u32::BYTE_LEN..]) as usize,
- ))
- }
-
- fn write_to_bytes(self, b: &mut [u8]) {
- self.map(|lazy| Lazy::<T>::from_position(lazy.position))
- .write_to_bytes(b);
-
- let len = self.map_or(0, |lazy| lazy.meta);
- let len: u32 = len.try_into().unwrap();
-
- len.write_to_bytes(&mut b[u32::BYTE_LEN..]);
- }
-}
-
-/// Random-access table (i.e. offering constant-time `get`/`set`), similar to
-/// `Vec<Option<T>>`, but without requiring encoding or decoding all the values
-/// eagerly and in-order.
-/// A total of `(max_idx + 1) * <Option<T> as FixedSizeEncoding>::BYTE_LEN` bytes
-/// are used for a table, where `max_idx` is the largest index passed to `set`.
-// FIXME(eddyb) replace `Vec` with `[_]` here, such that `Box<Table<T>>` would be used
-// when building it, and `Lazy<Table<T>>` or `&Table<T>` when reading it.
-// (not sure if that is possible given that the `Vec` is being resized now)
-crate struct Table<T> where Option<T>: FixedSizeEncoding {
- // FIXME(eddyb) store `[u8; <Option<T>>::BYTE_LEN]` instead of `u8` in `Vec`,
- // once that starts being allowed by the compiler (i.e. lazy normalization).
- bytes: Vec<u8>,
- _marker: PhantomData<T>,
-}
-
-impl<T> Default for Table<T> where Option<T>: FixedSizeEncoding {
- fn default() -> Self {
- Table {
- bytes: vec![],
- _marker: PhantomData,
- }
- }
-}
-
-impl<T> Table<T> where Option<T>: FixedSizeEncoding {
- crate fn set(&mut self, i: usize, value: T) {
- // FIXME(eddyb) investigate more compact encodings for sparse tables.
- // On the PR @michaelwoerister mentioned:
- // > Space requirements could perhaps be optimized by using the HAMT `popcnt`
- // > trick (i.e. divide things into buckets of 32 or 64 items and then
- // > store bit-masks of which item in each bucket is actually serialized).
- let needed = (i + 1) * <Option<T>>::BYTE_LEN;
- if self.bytes.len() < needed {
- self.bytes.resize(needed, 0);
- }
-
- Some(value).write_to_bytes_at(&mut self.bytes, i);
- }
-
- crate fn encode(&self, buf: &mut Encoder) -> Lazy<Self> {
- let pos = buf.position();
- buf.emit_raw_bytes(&self.bytes);
- Lazy::from_position_and_meta(
- NonZeroUsize::new(pos as usize).unwrap(),
- self.bytes.len(),
- )
- }
-}
-
-impl<T> LazyMeta for Table<T> where Option<T>: FixedSizeEncoding {
- type Meta = usize;
-
- fn min_size(len: usize) -> usize {
- len
- }
-}
-
-impl<T> Lazy<Table<T>> where Option<T>: FixedSizeEncoding {
- /// Given the metadata, extract out the value at a particular index (if any).
- #[inline(never)]
- crate fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(
- &self,
- metadata: M,
- i: usize,
- ) -> Option<T> {
- debug!("Table::lookup: index={:?} len={:?}", i, self.meta);
-
- let start = self.position.get();
- let bytes = &metadata.raw_bytes()[start..start + self.meta];
- <Option<T>>::maybe_read_from_bytes_at(bytes, i)?
- }
-}
-
-/// Like a `Table` but using `DefIndex` instead of `usize` as keys.
-// FIXME(eddyb) replace by making `Table` behave like `IndexVec`,
-// and by using `newtype_index!` to define `DefIndex`.
-crate struct PerDefTable<T>(Table<T>) where Option<T>: FixedSizeEncoding;
-
-impl<T> Default for PerDefTable<T> where Option<T>: FixedSizeEncoding {
- fn default() -> Self {
- PerDefTable(Table::default())
- }
-}
-
-impl<T> PerDefTable<T> where Option<T>: FixedSizeEncoding {
- crate fn set(&mut self, def_id: DefId, value: T) {
- assert!(def_id.is_local());
- self.0.set(def_id.index.index(), value);
- }
-
- crate fn encode(&self, buf: &mut Encoder) -> Lazy<Self> {
- let lazy = self.0.encode(buf);
- Lazy::from_position_and_meta(lazy.position, lazy.meta)
- }
-}
-
-impl<T> LazyMeta for PerDefTable<T> where Option<T>: FixedSizeEncoding {
- type Meta = <Table<T> as LazyMeta>::Meta;
-
- fn min_size(meta: Self::Meta) -> usize {
- Table::<T>::min_size(meta)
- }
-}
-
-impl<T> Lazy<PerDefTable<T>> where Option<T>: FixedSizeEncoding {
- fn as_table(&self) -> Lazy<Table<T>> {
- Lazy::from_position_and_meta(self.position, self.meta)
- }
-
- /// Given the metadata, extract out the value at a particular DefIndex (if any).
- #[inline(never)]
- crate fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(
- &self,
- metadata: M,
- def_index: DefIndex,
- ) -> Option<T> {
- self.as_table().get(metadata, def_index.index())
- }
-}
trunc(n as u128)?
},
LitKind::Int(n, _) => trunc(n)?,
- LitKind::Float(n, fty) => {
- parse_float(n, fty, neg).map_err(|_| LitToConstError::UnparseableFloat)?
- }
- LitKind::FloatUnsuffixed(n) => {
+ LitKind::Float(n, _) => {
let fty = match ty.kind {
ty::Float(fty) => fty,
_ => bug!()
match self.ty.kind {
ty::Adt(def, _) if def.is_box() => write!(f, "box ")?,
ty::Ref(_, _, mutbl) => {
- write!(f, "&")?;
- if mutbl == hir::MutMutable {
- write!(f, "mut ")?;
- }
+ write!(f, "&{}", mutbl.prefix_str())?;
}
_ => bug!("{} is a bad Deref pattern type", self.ty)
}
match (&src_pointee_ty.kind, &dest_pointee_ty.kind) {
(&ty::Array(_, length), &ty::Slice(_)) => {
- let ptr = self.read_immediate(src)?.to_scalar_ptr()?;
+ let ptr = self.read_immediate(src)?.to_scalar()?;
// u64 cast is from usize to u64, which is always good
let val = Immediate::new_slice(
ptr,
(_, &ty::Dynamic(ref data, _)) => {
// Initial cast from sized to dyn trait
let vtable = self.get_vtable(src_pointee_ty, data.principal())?;
- let ptr = self.read_immediate(src)?.to_scalar_ptr()?;
+ let ptr = self.read_immediate(src)?.to_scalar()?;
let val = Immediate::new_dyn_trait(ptr, vtable);
self.write_immediate(val, dest)
}
let ty = mplace.layout.ty;
if let ty::Ref(_, referenced_ty, mutability) = ty.kind {
let value = self.ecx.read_immediate(mplace.into())?;
+ let mplace = self.ecx.ref_to_mplace(value)?;
// Handle trait object vtables
- if let Ok(meta) = value.to_meta() {
- if let ty::Dynamic(..) =
- self.ecx.tcx.struct_tail_erasing_lifetimes(
- referenced_ty, self.ecx.param_env).kind
- {
- if let Ok(vtable) = meta.unwrap().to_ptr() {
- // explitly choose `Immutable` here, since vtables are immutable, even
- // if the reference of the fat pointer is mutable
- self.intern_shallow(vtable.alloc_id, Mutability::Immutable, None)?;
- }
+ if let ty::Dynamic(..) =
+ self.ecx.tcx.struct_tail_erasing_lifetimes(
+ referenced_ty, self.ecx.param_env).kind
+ {
+ if let Ok(vtable) = mplace.meta.unwrap().to_ptr() {
+                // explicitly choose `Immutable` here, since vtables are immutable, even
+ // if the reference of the fat pointer is mutable
+ self.intern_shallow(vtable.alloc_id, Mutability::Immutable, None)?;
}
}
- let mplace = self.ecx.ref_to_mplace(value)?;
// Check if we have encountered this pointer+layout combination before.
// Only recurse for allocation-backed pointers.
if let Scalar::Ptr(ptr) = mplace.ptr {
ty::Array(_, n)
if n.eval_usize(self.ecx.tcx.tcx, self.ecx.param_env) == 0 => {}
ty::Slice(_)
- if value.to_meta().unwrap().unwrap().to_usize(self.ecx)? == 0 => {}
+ if mplace.meta.unwrap().to_usize(self.ecx)? == 0 => {}
_ => bug!("const qualif failed to prevent mutable references"),
}
},
Immediate::ScalarPair(a, b) => Ok((a.not_undef()?, b.not_undef()?))
}
}
-
- /// Converts the immediate into a pointer (or a pointer-sized integer).
- /// Throws away the second half of a ScalarPair!
- #[inline]
- pub fn to_scalar_ptr(self) -> InterpResult<'tcx, Scalar<Tag>> {
- match self {
- Immediate::Scalar(ptr) |
- Immediate::ScalarPair(ptr, _) => ptr.not_undef(),
- }
- }
-
- /// Converts the value into its metadata.
- /// Throws away the first half of a ScalarPair!
- #[inline]
- pub fn to_meta(self) -> InterpResult<'tcx, Option<Scalar<Tag>>> {
- Ok(match self {
- Immediate::Scalar(_) => None,
- Immediate::ScalarPair(_, meta) => Some(meta.not_undef()?),
- })
- }
}
// ScalarPair needs a type to interpret, so we often have an immediate and a type together
&self,
val: ImmTy<'tcx, M::PointerTag>,
) -> InterpResult<'tcx, MPlaceTy<'tcx, M::PointerTag>> {
- let pointee_type = val.layout.ty.builtin_deref(true).unwrap().ty;
+ let pointee_type = val.layout.ty.builtin_deref(true)
+ .expect("`ref_to_mplace` called on non-ptr type")
+ .ty;
let layout = self.layout_of(pointee_type)?;
+ let (ptr, meta) = match *val {
+ Immediate::Scalar(ptr) => (ptr.not_undef()?, None),
+ Immediate::ScalarPair(ptr, meta) => (ptr.not_undef()?, Some(meta.not_undef()?)),
+ };
let mplace = MemPlace {
- ptr: val.to_scalar_ptr()?,
+ ptr,
// We could use the run-time alignment here. For now, we do not, because
// the point of tracking the alignment here is to make sure that the *static*
// alignment information emitted with the loads is correct. The run-time
// alignment can only be more restrictive.
align: layout.align.abi,
- meta: val.to_meta()?,
+ meta,
};
Ok(MPlaceTy { mplace, layout })
}
}
}
ty::RawPtr(..) => {
- // Check pointer part.
- if self.ref_tracking_for_consts.is_some() {
- // Integers/floats in CTFE: For consistency with integers, we do not
- // accept undef.
- let _ptr = try_validation!(value.to_scalar_ptr(),
- "undefined address in raw pointer", self.path);
- } else {
- // Remain consistent with `usize`: Accept anything.
- }
-
- // Check metadata.
- let meta = try_validation!(value.to_meta(),
- "uninitialized data in wide pointer metadata", self.path);
- let layout = self.ecx.layout_of(value.layout.ty.builtin_deref(true).unwrap().ty)?;
- if layout.is_unsized() {
- self.check_wide_ptr_meta(meta, layout)?;
+ // We are conservative with undef for integers, but try to
+ // actually enforce our current rules for raw pointers.
+ let place = try_validation!(self.ecx.ref_to_mplace(value),
+ "undefined pointer", self.path);
+ if place.layout.is_unsized() {
+ self.check_wide_ptr_meta(place.meta, place.layout)?;
}
}
_ if ty.is_box() || ty.is_region_ptr() => {
// Handle wide pointers.
// Check metadata early, for better diagnostics
- let ptr = try_validation!(value.to_scalar_ptr(),
- "undefined address in pointer", self.path);
- let meta = try_validation!(value.to_meta(),
- "uninitialized data in wide pointer metadata", self.path);
- let layout = self.ecx.layout_of(value.layout.ty.builtin_deref(true).unwrap().ty)?;
- if layout.is_unsized() {
- self.check_wide_ptr_meta(meta, layout)?;
+ let place = try_validation!(self.ecx.ref_to_mplace(value),
+ "undefined pointer", self.path);
+ if place.layout.is_unsized() {
+ self.check_wide_ptr_meta(place.meta, place.layout)?;
}
// Make sure this is dereferencable and all.
- let (size, align) = self.ecx.size_and_align_of(meta, layout)?
+ let (size, align) = self.ecx.size_and_align_of(place.meta, place.layout)?
// for the purpose of validity, consider foreign types to have
// alignment and size determined by the layout (size will be 0,
// alignment should take attributes into account).
- .unwrap_or_else(|| (layout.size, layout.align.abi));
+ .unwrap_or_else(|| (place.layout.size, place.layout.align.abi));
let ptr: Option<_> = match
self.ecx.memory.check_ptr_access_align(
- ptr,
+ place.ptr,
size,
Some(align),
CheckInAllocMsg::InboundsTest,
Err(err) => {
info!(
"{:?} did not pass access check for size {:?}, align {:?}",
- ptr, size, align
+ place.ptr, size, align
);
match err.kind {
err_unsup!(InvalidNullPointerUsage) =>
};
// Recursive checking
if let Some(ref mut ref_tracking) = self.ref_tracking_for_consts {
- let place = self.ecx.ref_to_mplace(value)?;
if let Some(ptr) = ptr { // not a ZST
// Skip validation entirely for some external statics
let alloc_kind = self.ecx.tcx.alloc_map.lock().get(ptr.alloc_id);
// reject it. However, that's good: We don't inherently want
// to reject those pointers, we just do not have the machinery to
// talk about parts of a pointer.
- // We also accept undef, for consistency with the type-based checks.
+ // We also accept undef, for consistency with the slow path.
match self.ecx.memory.get(ptr.alloc_id)?.check_bytes(
self.ecx,
ptr,
&self.unchecked_promotion_candidates,
)
};
+
debug!("qualify_const: promotion_candidates={:?}", promotion_candidates);
for candidate in promotion_candidates {
match candidate {
- Candidate::Repeat(Location { block: bb, statement_index: stmt_idx }) => {
- if let StatementKind::Assign(box(_, Rvalue::Repeat(
- Operand::Move(place),
- _
- ))) = &self.body[bb].statements[stmt_idx].kind {
- if let Some(index) = place.as_local() {
- promoted_temps.insert(index);
- }
- }
- }
Candidate::Ref(Location { block: bb, statement_index: stmt_idx }) => {
- if let StatementKind::Assign(
- box(
- _,
- Rvalue::Ref(_, _, place)
- )
- ) = &self.body[bb].statements[stmt_idx].kind {
- if let Some(index) = place.as_local() {
- promoted_temps.insert(index);
+ if let StatementKind::Assign(box( _, Rvalue::Ref(_, _, place)))
+ = &self.body[bb].statements[stmt_idx].kind
+ {
+ if let PlaceBase::Local(local) = place.base {
+ promoted_temps.insert(local);
}
}
}
- Candidate::Argument { .. } => {}
+
+ // Only rvalue-static promotion requires extending the lifetime of the promoted
+ // local.
+ Candidate::Argument { .. } | Candidate::Repeat(_) => {}
}
}
let arr = [sym::allow, sym::cfg, sym::cfg_attr, sym::deny, sym::forbid, sym::warn];
!arr.contains(&attr.name_or_empty()) && is_builtin_attr(attr)
})
- .for_each(|attr| if attr.is_sugared_doc {
+ .for_each(|attr| if attr.is_doc_comment() {
let mut err = self.err_handler().struct_span_err(
attr.span,
"documentation comments cannot be applied to function parameters"
use syntax::ast::{self, Block, ForeignItem, ForeignItemKind, Item, ItemKind, NodeId};
use syntax::ast::{MetaItemKind, StmtKind, TraitItem, TraitItemKind};
use syntax::feature_gate::is_builtin_attr;
-use syntax::parse::token::{self, Token};
+use syntax::token::{self, Token};
use syntax::print::pprust;
use syntax::{span_err, struct_span_err};
use syntax::source_map::{respan, Spanned};
}
fn visit_attribute(&mut self, attr: &'b ast::Attribute) {
- if !attr.is_sugared_doc && is_builtin_attr(attr) {
- self.r.builtin_attrs.push((attr.path.segments[0].ident, self.parent_scope));
+ if !attr.is_doc_comment() && is_builtin_attr(attr) {
+ self.r.builtin_attrs.push(
+ (attr.get_normal_item().path.segments[0].ident, self.parent_scope)
+ );
}
visit::walk_attribute(self, attr);
}
let (path, kind, derives, after_derive) = match invoc.kind {
InvocationKind::Attr { ref attr, ref derives, after_derive, .. } =>
- (&attr.path, MacroKind::Attr, self.arenas.alloc_ast_paths(derives), after_derive),
+ (&attr.get_normal_item().path,
+ MacroKind::Attr,
+ self.arenas.alloc_ast_paths(derives),
+ after_derive),
InvocationKind::Bang { ref mac, .. } =>
(&mac.path, MacroKind::Bang, &[][..], false),
InvocationKind::Derive { ref path, .. } =>
rustc = { path = "../librustc" }
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_codegen_utils = { path = "../librustc_codegen_utils" }
-rustc_target = { path = "../librustc_target" }
serde_json = "1"
syntax = { path = "../libsyntax" }
syntax_pos = { path = "../libsyntax_pos" }
use std::env;
use syntax::ast::{self, Attribute, NodeId, PatKind};
-use syntax::parse::token;
+use syntax::token;
use syntax::visit::{self, Visitor};
use syntax::print::pprust::{
bounds_to_string,
use syntax::ast::{self, Attribute, DUMMY_NODE_ID, NodeId, PatKind};
use syntax::source_map::Spanned;
-use syntax::parse::lexer::comments::strip_doc_comment_decoration;
+use syntax::util::comments::strip_doc_comment_decoration;
use syntax::print::pprust;
use syntax::visit::{self, Visitor};
use syntax::print::pprust::{param_to_string, ty_to_string};
for attr in attrs {
if attr.check_name(sym::doc) {
if let Some(val) = attr.value_str() {
- if attr.is_sugared_doc {
+ if attr.is_doc_comment() {
result.push_str(&strip_doc_comment_decoration(&val.as_str()));
} else {
result.push_str(&val.as_str());
fn lower_attributes(attrs: Vec<Attribute>, scx: &SaveContext<'_, '_>) -> Vec<rls_data::Attribute> {
attrs.into_iter()
// Only retain real attributes. Doc comments are lowered separately.
- .filter(|attr| attr.path != sym::doc)
+ .filter(|attr| !attr.has_name(sym::doc))
.map(|mut attr| {
// Remove the surrounding '#[..]' or '#![..]' of the pretty printed
// attribute. First normalize all inner attribute (#![..]) to outer
use rustc::hir::def::{Res, DefKind};
use syntax::ast::{self, NodeId};
use syntax::print::pprust;
-
+use syntax_pos::sym;
pub fn item_signature(item: &ast::Item, scx: &SaveContext<'_, '_>) -> Option<Signature> {
if !scx.config.signatures {
}
}
+fn push_abi(text: &mut String, abi: ast::Abi) {
+ if abi.symbol != sym::Rust {
+ text.push_str(&format!("extern \"{}\" ", abi.symbol));
+ }
+}
+
impl Sig for ast::Ty {
fn make(&self, offset: usize, _parent_id: Option<NodeId>, scx: &SaveContext<'_, '_>) -> Result {
let id = Some(self.id);
if f.unsafety == ast::Unsafety::Unsafe {
text.push_str("unsafe ");
}
- if f.abi != rustc_target::spec::abi::Abi::Rust {
- text.push_str("extern");
- text.push_str(&f.abi.to_string());
- text.push(' ');
- }
+ push_abi(&mut text, f.abi);
text.push_str("fn(");
let mut defs = vec![];
if header.unsafety == ast::Unsafety::Unsafe {
text.push_str("unsafe ");
}
- if header.abi != rustc_target::spec::abi::Abi::Rust {
- text.push_str("extern");
- text.push_str(&header.abi.to_string());
- text.push(' ');
- }
+ push_abi(&mut text, header.abi);
text.push_str("fn ");
let mut sig = name_and_generics(text, offset, generics, self.id, self.ident, scx)?;
if m.header.unsafety == ast::Unsafety::Unsafe {
text.push_str("unsafe ");
}
- if m.header.abi != rustc_target::spec::abi::Abi::Rust {
- text.push_str("extern");
- text.push_str(&m.header.abi.to_string());
- text.push(' ');
- }
+ push_abi(&mut text, m.header.abi);
text.push_str("fn ");
let mut sig = name_and_generics(text, 0, generics, id, ident, scx)?;
use crate::generated_code;
use syntax::parse::lexer::{self, StringReader};
-use syntax::parse::token::{self, TokenKind};
+use syntax::token::{self, TokenKind};
use syntax_pos::*;
#[derive(Clone)]
{
match ret.layout.field(cx, i).abi {
abi::Abi::Scalar(ref scalar) => match scalar.value {
- abi::Float(abi::FloatTy::F32) => Some(Reg::f32()),
- abi::Float(abi::FloatTy::F64) => Some(Reg::f64()),
+ abi::F32 => Some(Reg::f32()),
+ abi::F64 => Some(Reg::f64()),
_ => None
},
_ => None
// We only care about aligned doubles
if let abi::Abi::Scalar(ref scalar) = field.abi {
- if let abi::Float(abi::FloatTy::F64) = scalar.value {
+ if let abi::F64 = scalar.value {
if offset.is_aligned(dl.f64_align.abi) {
// Insert enough integers to cover [last_offset, offset)
assert!(last_offset.is_aligned(dl.f64_align.abi));
let kind = match scalar.value {
abi::Int(..) |
abi::Pointer => RegKind::Integer,
- abi::Float(_) => RegKind::Float,
+ abi::F32 | abi::F64 => RegKind::Float,
};
HomogeneousAggregate::Homogeneous(Reg {
kind,
match scalar.value {
abi::Int(..) |
abi::Pointer => Class::Int,
- abi::Float(_) => Class::Sse
+ abi::F32 | abi::F64 => Class::Sse
}
}
use crate::spec::Target;
-use std::fmt;
use std::ops::{Add, Deref, Sub, Mul, AddAssign, Range, RangeInclusive};
use rustc_index::vec::{Idx, IndexVec};
-use syntax_pos::symbol::{sym, Symbol};
use syntax_pos::Span;
pub mod call;
}
}
-
-#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Copy,
- PartialOrd, Ord)]
-pub enum FloatTy {
- F32,
- F64,
-}
-
-impl fmt::Debug for FloatTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Display::fmt(self, f)
- }
-}
-
-impl fmt::Display for FloatTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "{}", self.ty_to_string())
- }
-}
-
-impl FloatTy {
- pub fn ty_to_string(self) -> &'static str {
- match self {
- FloatTy::F32 => "f32",
- FloatTy::F64 => "f64",
- }
- }
-
- pub fn to_symbol(self) -> Symbol {
- match self {
- FloatTy::F32 => sym::f32,
- FloatTy::F64 => sym::f64,
- }
- }
-
- pub fn bit_width(self) -> usize {
- match self {
- FloatTy::F32 => 32,
- FloatTy::F64 => 64,
- }
- }
-}
-
/// Fundamental unit of memory access and layout.
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub enum Primitive {
/// a negative integer passed by zero-extension will appear positive in
/// the callee, and most operations on it will produce the wrong values.
Int(Integer, bool),
- Float(FloatTy),
+ F32,
+ F64,
Pointer
}
match self {
Int(i, _) => i.size(),
- Float(FloatTy::F32) => Size::from_bits(32),
- Float(FloatTy::F64) => Size::from_bits(64),
+ F32 => Size::from_bits(32),
+ F64 => Size::from_bits(64),
Pointer => dl.pointer_size
}
}
match self {
Int(i, _) => i.align(dl),
- Float(FloatTy::F32) => dl.f32_align,
- Float(FloatTy::F64) => dl.f64_align,
+ F32 => dl.f32_align,
+ F64 => dl.f64_align,
Pointer => dl.pointer_align
}
}
pub fn is_float(self) -> bool {
match self {
- Float(_) => true,
+ F32 | F64 => true,
_ => false
}
}
tstr);
match self.expr_ty.kind {
ty::Ref(_, _, mt) => {
- let mtstr = match mt {
- hir::MutMutable => "mut ",
- hir::MutImmutable => "",
- };
+ let mtstr = mt.prefix_str();
if self.cast_ty.is_trait() {
match fcx.tcx.sess.source_map().span_to_snippet(self.cast_span) {
Ok(s) => {
cause.span,
target_id,
);
- let val = match ty.kind {
- ty::Bool => "true",
- ty::Char => "'a'",
- ty::Int(_) | ty::Uint(_) => "42",
- ty::Float(_) => "3.14159",
- ty::Error | ty::Never => return,
- _ => "value",
- };
- let msg = "give it a value of the expected type";
- let label = destination.label
- .map(|l| format!(" {}", l.ident))
- .unwrap_or_else(String::new);
- let sugg = format!("break{} {}", label, val);
- err.span_suggestion(expr.span, msg, sugg, Applicability::HasPlaceholders);
+ if let Some(val) = ty_kind_suggestion(ty) {
+ let label = destination.label
+ .map(|l| format!(" {}", l.ident))
+ .unwrap_or_else(String::new);
+ err.span_suggestion(
+ expr.span,
+ "give it a value of the expected type",
+ format!("break{} {}", label, val),
+ Applicability::HasPlaceholders,
+ );
+ }
}, false);
}
} else {
self.tcx.mk_unit()
}
}
+
+pub(super) fn ty_kind_suggestion(ty: Ty<'_>) -> Option<&'static str> {
+ Some(match ty.kind {
+ ty::Bool => "true",
+ ty::Char => "'a'",
+ ty::Int(_) | ty::Uint(_) => "42",
+ ty::Float(_) => "3.14159",
+ ty::Error | ty::Never => return None,
+ _ => "value",
+ })
+}
pub fn intrinsic_operation_unsafety(intrinsic: &str) -> hir::Unsafety {
match intrinsic {
"size_of" | "min_align_of" | "needs_drop" | "caller_location" |
+ "size_of_val" | "min_align_of_val" |
"add_with_overflow" | "sub_with_overflow" | "mul_with_overflow" |
"wrapping_add" | "wrapping_sub" | "wrapping_mul" |
"saturating_add" | "saturating_sub" |
"rotate_left" | "rotate_right" |
"ctpop" | "ctlz" | "cttz" | "bswap" | "bitreverse" |
+ "discriminant_value" | "type_id" | "likely" | "unlikely" |
"minnumf32" | "minnumf64" | "maxnumf32" | "maxnumf64" | "type_name"
=> hir::Unsafety::Normal,
_ => hir::Unsafety::Unsafe,
use syntax::attr;
use syntax::feature_gate::{GateIssue, emit_feature_err};
use syntax::source_map::{DUMMY_SP, original_sp};
-use syntax::symbol::{kw, sym};
+use syntax::symbol::{kw, sym, Ident};
use syntax::util::parser::ExprPrecedence;
use std::cell::{Cell, RefCell, Ref, RefMut};
fn check_impl_items_against_trait<'tcx>(
tcx: TyCtxt<'tcx>,
- impl_span: Span,
+ full_impl_span: Span,
impl_id: DefId,
impl_trait_ref: ty::TraitRef<'tcx>,
impl_item_refs: &[hir::ImplItemRef],
) {
- let impl_span = tcx.sess.source_map().def_span(impl_span);
+ let impl_span = tcx.sess.source_map().def_span(full_impl_span);
// If the trait reference itself is erroneous (so the compilation is going
// to fail), skip checking the items here -- the `impl_item` table in `tcx`
}
if !missing_items.is_empty() {
- let mut err = struct_span_err!(tcx.sess, impl_span, E0046,
- "not all trait items implemented, missing: `{}`",
- missing_items.iter()
- .map(|trait_item| trait_item.ident.to_string())
- .collect::<Vec<_>>().join("`, `"));
- err.span_label(impl_span, format!("missing `{}` in implementation",
- missing_items.iter()
- .map(|trait_item| trait_item.ident.to_string())
- .collect::<Vec<_>>().join("`, `")));
- for trait_item in missing_items {
- if let Some(span) = tcx.hir().span_if_local(trait_item.def_id) {
- err.span_label(span, format!("`{}` from trait", trait_item.ident));
- } else {
- err.note_trait_signature(trait_item.ident.to_string(),
- trait_item.signature(tcx));
- }
- }
- err.emit();
+ missing_items_err(tcx, impl_span, &missing_items, full_impl_span);
}
if !invalidated_items.is_empty() {
let invalidator = overridden_associated_type.unwrap();
- span_err!(tcx.sess, invalidator.span, E0399,
- "the following trait items need to be reimplemented \
- as `{}` was overridden: `{}`",
- invalidator.ident,
- invalidated_items.iter()
- .map(|name| name.to_string())
- .collect::<Vec<_>>().join("`, `"))
+ span_err!(
+ tcx.sess,
+ invalidator.span,
+ E0399,
+ "the following trait items need to be reimplemented as `{}` was overridden: `{}`",
+ invalidator.ident,
+ invalidated_items.iter()
+ .map(|name| name.to_string())
+ .collect::<Vec<_>>().join("`, `")
+ )
+ }
+}
+
+fn missing_items_err(
+ tcx: TyCtxt<'_>,
+ impl_span: Span,
+ missing_items: &[ty::AssocItem],
+ full_impl_span: Span,
+) {
+ let missing_items_msg = missing_items.iter()
+ .map(|trait_item| trait_item.ident.to_string())
+ .collect::<Vec<_>>().join("`, `");
+
+ let mut err = struct_span_err!(
+ tcx.sess,
+ impl_span,
+ E0046,
+ "not all trait items implemented, missing: `{}`",
+ missing_items_msg
+ );
+ err.span_label(impl_span, format!("missing `{}` in implementation", missing_items_msg));
+
+ // `Span` before impl block closing brace.
+ let hi = full_impl_span.hi() - BytePos(1);
+ // Point at the place right before the closing brace of the relevant `impl` to suggest
+ // adding the associated item at the end of its body.
+ let sugg_sp = full_impl_span.with_lo(hi).with_hi(hi);
+ // Obtain the level of indentation ending in `sugg_sp`.
+ let indentation = tcx.sess.source_map().span_to_margin(sugg_sp).unwrap_or(0);
+ // Make the whitespace that will make the suggestion have the right indentation.
+ let padding: String = (0..indentation).map(|_| " ").collect();
+
+ for trait_item in missing_items {
+ let snippet = suggestion_signature(&trait_item, tcx);
+ let code = format!("{}{}\n{}", padding, snippet, padding);
+ let msg = format!("implement the missing item: `{}`", snippet);
+ let appl = Applicability::HasPlaceholders;
+ if let Some(span) = tcx.hir().span_if_local(trait_item.def_id) {
+ err.span_label(span, format!("`{}` from trait", trait_item.ident));
+ err.tool_only_span_suggestion(sugg_sp, &msg, code, appl);
+ } else {
+ err.span_suggestion_hidden(sugg_sp, &msg, code, appl);
+ }
+ }
+ err.emit();
+}
+
+/// Returns placeholder code for the given function.
+fn fn_sig_suggestion(sig: &ty::FnSig<'_>, ident: Ident) -> String {
+ let args = sig.inputs()
+ .iter()
+ .map(|ty| Some(match ty.kind {
+ ty::Param(param) if param.name == kw::SelfUpper => "self".to_string(),
+ ty::Ref(reg, ref_ty, mutability) => {
+ let reg = match &format!("{}", reg)[..] {
+ "'_" | "" => String::new(),
+ reg => format!("{} ", reg),
+ };
+ match ref_ty.kind {
+ ty::Param(param) if param.name == kw::SelfUpper => {
+ format!("&{}{}self", reg, mutability.prefix_str())
+ }
+ _ => format!("_: {:?}", ty),
+ }
+ }
+ _ => format!("_: {:?}", ty),
+ }))
+ .chain(std::iter::once(if sig.c_variadic {
+ Some("...".to_string())
+ } else {
+ None
+ }))
+ .filter_map(|arg| arg)
+ .collect::<Vec<String>>()
+ .join(", ");
+ let output = sig.output();
+ let output = if !output.is_unit() {
+ format!(" -> {:?}", output)
+ } else {
+ String::new()
+ };
+
+ let unsafety = sig.unsafety.prefix_str();
+ // FIXME: this is not entirely correct, as the lifetimes from borrowed params will
+    // not be present in the `fn` definition, nor will we account for renamed
+ // lifetimes between the `impl` and the `trait`, but this should be good enough to
+ // fill in a significant portion of the missing code, and other subsequent
+ // suggestions can help the user fix the code.
+ format!("{}fn {}({}){} {{ unimplemented!() }}", unsafety, ident, args, output)
+}
+
+/// Returns placeholder code for the given associated item.
+/// Similar to `ty::AssocItem::suggestion`, but appropriate for use as the code snippet of a
+/// structured suggestion.
+fn suggestion_signature(assoc: &ty::AssocItem, tcx: TyCtxt<'_>) -> String {
+ match assoc.kind {
+ ty::AssocKind::Method => {
+ // We skip the binder here because the binder would deanonymize all
+ // late-bound regions, and we don't want method signatures to show up
+ // `as for<'r> fn(&'r MyType)`. Pretty-printing handles late-bound
+ // regions just fine, showing `fn(&MyType)`.
+ fn_sig_suggestion(tcx.fn_sig(assoc.def_id).skip_binder(), assoc.ident)
+ }
+ ty::AssocKind::Type => format!("type {} = Type;", assoc.ident),
+ // FIXME(type_alias_impl_trait): we should print bounds here too.
+ ty::AssocKind::OpaqueTy => format!("type {} = Type;", assoc.ident),
+ ty::AssocKind::Const => {
+ let ty = tcx.type_of(assoc.def_id);
+ let val = expr::ty_kind_suggestion(ty).unwrap_or("value");
+ format!("const {}: {:?} = {};", assoc.ident, ty, val)
+ }
}
}
});
opt_ty.unwrap_or_else(|| self.next_int_var())
}
- ast::LitKind::Float(_, t) => tcx.mk_mach_float(t),
- ast::LitKind::FloatUnsuffixed(_) => {
+ ast::LitKind::Float(_, ast::LitFloatType::Suffixed(t)) => tcx.mk_mach_float(t),
+ ast::LitKind::Float(_, ast::LitFloatType::Unsuffixed) => {
let opt_ty = expected.to_option(self).and_then(|ty| {
match ty.kind {
ty::Float(_) => Some(ty),
);
}
-fn type_of(tcx: TyCtxt<'_>, def_id: DefId) -> Ty<'_> {
- checked_type_of(tcx, def_id, true).unwrap()
-}
-
fn infer_placeholder_type(
tcx: TyCtxt<'_>,
def_id: DefId,
ty
}
-/// Same as [`type_of`] but returns [`Option`] instead of failing.
-///
-/// If you want to fail anyway, you can set the `fail` parameter to true, but in this case,
-/// you'd better just call [`type_of`] directly.
-pub fn checked_type_of(tcx: TyCtxt<'_>, def_id: DefId, fail: bool) -> Option<Ty<'_>> {
+fn type_of(tcx: TyCtxt<'_>, def_id: DefId) -> Ty<'_> {
use rustc::hir::*;
- let hir_id = match tcx.hir().as_local_hir_id(def_id) {
- Some(hir_id) => hir_id,
- None => {
- if !fail {
- return None;
- }
- bug!("invalid node");
- }
- };
+ let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
let icx = ItemCtxt::new(tcx, def_id);
- Some(match tcx.hir().get(hir_id) {
+ match tcx.hir().get(hir_id) {
Node::TraitItem(item) => match item.kind {
TraitItemKind::Method(..) => {
let substs = InternalSubsts::identity_for_item(tcx, def_id);
},
TraitItemKind::Type(_, Some(ref ty)) => icx.to_ty(ty),
TraitItemKind::Type(_, None) => {
- if !fail {
- return None;
- }
span_bug!(item.span, "associated type missing default");
}
},
| ItemKind::GlobalAsm(..)
| ItemKind::ExternCrate(..)
| ItemKind::Use(..) => {
- if !fail {
- return None;
- }
span_bug!(
item.span,
"compute_type_of_item: unexpected item type: {:?}",
..
}) => {
if gen.is_some() {
- return Some(tcx.typeck_tables_of(def_id).node_type(hir_id));
+ return tcx.typeck_tables_of(def_id).node_type(hir_id);
}
let substs = InternalSubsts::identity_for_item(tcx, def_id);
.map(|(index, _)| index)
.next()
})
- .or_else(|| {
- if !fail {
- None
- } else {
- bug!("no arg matching AnonConst in path")
- }
- })?;
+ .unwrap_or_else(|| {
+ bug!("no arg matching AnonConst in path");
+ });
// We've encountered an `AnonConst` in some path, so we need to
// figure out which generic parameter it corresponds to and return
tcx.generics_of(tcx.parent(def_id).unwrap())
}
Res::Def(_, def_id) => tcx.generics_of(def_id),
- Res::Err => return Some(tcx.types.err),
- _ if !fail => return None,
+ Res::Err => return tcx.types.err,
res => {
tcx.sess.delay_span_bug(
DUMMY_SP,
res,
),
);
- return Some(tcx.types.err);
+ return tcx.types.err;
}
};
// probably from an extra arg where one is not needed.
.unwrap_or(tcx.types.err)
} else {
- if !fail {
- return None;
- }
tcx.sess.delay_span_bug(
DUMMY_SP,
&format!(
parent_node,
),
);
- return Some(tcx.types.err);
+ return tcx.types.err;
}
}
x => {
- if !fail {
- return None;
- }
tcx.sess.delay_span_bug(
DUMMY_SP,
&format!(
}
ty
}
- x => {
- if !fail {
- return None;
- }
- bug!("unexpected non-type Node::GenericParam: {:?}", x)
- },
+ x => bug!("unexpected non-type Node::GenericParam: {:?}", x),
},
x => {
- if !fail {
- return None;
- }
bug!("unexpected sort of node in type_of_def_id(): {:?}", x);
}
- })
+ }
}
fn find_opaque_ty_constraints(tcx: TyCtxt<'_>, def_id: DefId) -> Ty<'_> {
}
}
- let hir_id = match tcx.hir().as_local_hir_id(def_id) {
- Some(hir_id) => hir_id,
- None => return tcx.predicates_of(def_id),
- };
+ let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
let node = tcx.hir().get(hir_id);
let mut is_trait = None;
}
codegen_fn_attrs.inline = attrs.iter().fold(InlineAttr::None, |ia, attr| {
- if attr.path != sym::inline {
+ if !attr.has_name(sym::inline) {
return ia;
}
match attr.meta().map(|i| i.kind) {
});
codegen_fn_attrs.optimize = attrs.iter().fold(OptimizeAttr::None, |ia, attr| {
- if attr.path != sym::optimize {
+ if !attr.has_name(sym::optimize) {
return ia;
}
let err = |sp, s| span_err!(tcx.sess.diagnostic(), sp, E0722, "{}", s);
use std::iter;
use astconv::{AstConv, Bounds};
-pub use collect::checked_type_of;
-
pub struct TypeAndSubsts<'tcx> {
substs: SubstsRef<'tcx>,
ty: Ty<'tcx>,
use rustc::ty::fold::TypeFolder;
use rustc::ty::layout::VariantIdx;
use rustc::util::nodemap::{FxHashMap, FxHashSet};
-use syntax::ast::{self, Attribute, AttrStyle, AttrItem, Ident};
+use syntax::ast::{self, Attribute, AttrStyle, AttrKind, Ident};
use syntax::attr;
-use syntax::parse::lexer::comments;
+use syntax::util::comments;
use syntax::source_map::DUMMY_SP;
use syntax_pos::symbol::{Symbol, kw, sym};
use syntax_pos::hygiene::MacroKind;
let mut cfg = Cfg::True;
let mut doc_line = 0;
- /// Converts `attr` to a normal `#[doc="foo"]` comment, if it is a
- /// comment like `///` or `/** */`. (Returns `attr` unchanged for
- /// non-sugared doc attributes.)
- pub fn with_desugared_doc<T>(attr: &Attribute, f: impl FnOnce(&Attribute) -> T) -> T {
- if attr.is_sugared_doc {
- let comment = attr.value_str().unwrap();
- let meta = attr::mk_name_value_item_str(
- Ident::with_dummy_span(sym::doc),
- Symbol::intern(&comments::strip_doc_comment_decoration(&comment.as_str())),
- DUMMY_SP,
- );
- f(&Attribute {
- item: AttrItem { path: meta.path, tokens: meta.kind.tokens(meta.span) },
- id: attr.id,
- style: attr.style,
- is_sugared_doc: true,
- span: attr.span,
- })
- } else {
- f(attr)
+ /// If `attr` is a doc comment, strips the leading and (if present)
+    /// trailing comment symbols, e.g. `///`, `/**`, and `*/`. Otherwise,
+ /// returns `attr` unchanged.
+ pub fn with_doc_comment_markers_stripped<T>(
+ attr: &Attribute,
+ f: impl FnOnce(&Attribute) -> T
+ ) -> T {
+ match attr.kind {
+ AttrKind::Normal(_) => {
+ f(attr)
+ }
+ AttrKind::DocComment(comment) => {
+ let comment =
+ Symbol::intern(&comments::strip_doc_comment_decoration(&comment.as_str()));
+ f(&Attribute {
+ kind: AttrKind::DocComment(comment),
+ id: attr.id,
+ style: attr.style,
+ span: attr.span,
+ })
+ }
}
}
let other_attrs = attrs.iter().filter_map(|attr| {
- with_desugared_doc(attr, |attr| {
+ with_doc_comment_markers_stripped(attr, |attr| {
if attr.check_name(sym::doc) {
if let Some(mi) = attr.meta() {
if let Some(value) = mi.value_str() {
let line = doc_line;
doc_line += value.lines().count();
- if attr.is_sugared_doc {
+ if attr.is_doc_comment() {
doc_strings.push(DocFragment::SugaredDoc(line, attr.span, value));
} else {
doc_strings.push(DocFragment::RawDoc(line, attr.span, value));
}
}
- pub fn get_type(&self, cx: &DocContext<'_>) -> Option<Type> {
- match *self {
- GenericParamDefKind::Type { did, .. } => {
- rustc_typeck::checked_type_of(cx.tcx, did, false).map(|t| t.clean(cx))
- }
- GenericParamDefKind::Const { ref ty, .. } => Some(ty.clone()),
+ // FIXME(eddyb) this either returns the default of a type parameter, or the
+ // type of a `const` parameter. It seems that the intention is to *visit*
+ // any embedded types, but `get_type` seems to be the wrong name for that.
+ pub fn get_type(&self) -> Option<Type> {
+ match self {
+ GenericParamDefKind::Type { default, .. } => default.clone(),
+ GenericParamDefKind::Const { ty, .. } => Some(ty.clone()),
GenericParamDefKind::Lifetime => None,
}
}
self.kind.is_type()
}
- pub fn get_type(&self, cx: &DocContext<'_>) -> Option<Type> {
- self.kind.get_type(cx)
+ pub fn get_type(&self) -> Option<Type> {
+ self.kind.get_type()
}
pub fn get_bounds(&self) -> Option<&[GenericBound]> {
if !x.is_type() {
continue
}
- if let Some(ty) = x.get_type(cx) {
+ if let Some(ty) = x.get_type() {
let adds = get_real_types(generics, &ty, cx, recurse + 1);
if !adds.is_empty() {
res.extend(adds);
let mut parts = arg.splitn(2, '=');
let name = parts.next().ok_or("--extern value must not be empty".to_string())?;
let location = parts.next().map(|s| s.to_string());
- if location.is_none() && !nightly_options::is_unstable_enabled(matches) {
- return Err("the `-Z unstable-options` flag must also be passed to \
- enable `--extern crate_name` without `=path`".to_string());
- }
let name = name.to_string();
// For Rustdoc purposes, we can treat all externs as public
externs.entry(name)
use syntax::source_map::SourceMap;
use syntax::parse::lexer;
-use syntax::parse::token::{self, Token};
+use syntax::token::{self, Token};
use syntax::sess::ParseSess;
use syntax::symbol::{kw, sym};
use syntax_pos::{Span, FileName};
}
}
+#[derive(Debug)]
+enum Setting {
+ Section {
+ description: &'static str,
+ sub_settings: Vec<Setting>,
+ },
+ Entry {
+ js_data_name: &'static str,
+ description: &'static str,
+ default_value: bool,
+ }
+}
+
+impl Setting {
+ fn display(&self) -> String {
+ match *self {
+ Setting::Section { ref description, ref sub_settings } => {
+ format!(
+ "<div class='setting-line'>\
+ <div class='title'>{}</div>\
+                     <div class='sub-settings'>{}</div>\
+ </div>",
+ description,
+ sub_settings.iter().map(|s| s.display()).collect::<String>()
+ )
+ }
+ Setting::Entry { ref js_data_name, ref description, ref default_value } => {
+ format!(
+ "<div class='setting-line'>\
+ <label class='toggle'>\
+ <input type='checkbox' id='{}' {}>\
+ <span class='slider'></span>\
+ </label>\
+ <div>{}</div>\
+ </div>",
+ js_data_name,
+ if *default_value { " checked" } else { "" },
+ description,
+ )
+ }
+ }
+ }
+}
+
+impl From<(&'static str, &'static str, bool)> for Setting {
+ fn from(values: (&'static str, &'static str, bool)) -> Setting {
+ Setting::Entry {
+ js_data_name: values.0,
+ description: values.1,
+ default_value: values.2,
+ }
+ }
+}
+
+impl<T: Into<Setting>> From<(&'static str, Vec<T>)> for Setting {
+ fn from(values: (&'static str, Vec<T>)) -> Setting {
+ Setting::Section {
+ description: values.0,
+ sub_settings: values.1.into_iter().map(|v| v.into()).collect::<Vec<_>>(),
+ }
+ }
+}
+
fn settings(root_path: &str, suffix: &str) -> String {
// (id, explanation, default value)
- let settings = [
- ("item-declarations", "Auto-hide item declarations.", true),
- ("item-attributes", "Auto-hide item attributes.", true),
- ("trait-implementations", "Auto-hide trait implementations documentation",
- true),
- ("method-docs", "Auto-hide item methods' documentation", false),
+ let settings: &[Setting] = &[
+ ("Auto-hide item declarations", vec![
+ ("auto-hide-struct", "Auto-hide structs declaration", true),
+ ("auto-hide-enum", "Auto-hide enums declaration", false),
+ ("auto-hide-union", "Auto-hide unions declaration", true),
+ ("auto-hide-trait", "Auto-hide traits declaration", true),
+ ("auto-hide-macro", "Auto-hide macros declaration", false),
+ ]).into(),
+ ("auto-hide-attributes", "Auto-hide item attributes.", true).into(),
+ ("auto-hide-method-docs", "Auto-hide item methods' documentation", false).into(),
+ ("auto-hide-trait-implementations", "Auto-hide trait implementations documentation",
+ true).into(),
("go-to-only-result", "Directly go to item in search if there is only one result",
- false),
- ("line-numbers", "Show line numbers on code examples", false),
- ("disable-shortcuts", "Disable keyboard shortcuts", false),
+ false).into(),
+ ("line-numbers", "Show line numbers on code examples", false).into(),
+ ("disable-shortcuts", "Disable keyboard shortcuts", false).into(),
];
format!(
"<h1 class='fqn'>\
</h1>\
<div class='settings'>{}</div>\
<script src='{}settings{}.js'></script>",
- settings.iter()
- .map(|(id, text, enabled)| {
- format!("<div class='setting-line'>\
- <label class='toggle'>\
- <input type='checkbox' id='{}' {}>\
- <span class='slider'></span>\
- </label>\
- <div>{}</div>\
- </div>", id, if *enabled { " checked" } else { "" }, text)
- })
- .collect::<String>(),
+ settings.iter().map(|s| s.display()).collect::<String>(),
root_path,
suffix)
}
// Local js definitions:
/* global addClass, getCurrentValue, hasClass */
-/* global isHidden, onEach, removeClass, updateLocalStorage */
+/* global onEach, removeClass, updateLocalStorage */
if (!String.prototype.startsWith) {
String.prototype.startsWith = function(searchString, position) {
return window.history && typeof window.history.pushState === "function";
}
+ function isHidden(elem) {
+ return elem.offsetHeight === 0;
+ }
+
var main = document.getElementById("main");
+ var savedHash = "";
- function onHashChange(ev) {
- // If we're in mobile mode, we should hide the sidebar in any case.
- hideSidebar();
- var match = window.location.hash.match(/^#?(\d+)(?:-(\d+))?$/);
- if (match) {
- return highlightSourceLines(match, ev);
- }
+ function handleHashes(ev) {
var search = getSearchElement();
if (ev !== null && search && !hasClass(search, "hidden") && ev.newURL) {
+ // This block occurs when clicking on an element in the navbar while
+ // in a search.
addClass(search, "hidden");
removeClass(main, "hidden");
var hash = ev.newURL.slice(ev.newURL.indexOf("#") + 1);
elem.scrollIntoView();
}
}
+ // This part is used in case an element is not visible.
+ if (savedHash !== window.location.hash) {
+ savedHash = window.location.hash;
+ if (savedHash.length === 0) {
+ return;
+ }
+ var elem = document.getElementById(savedHash.slice(1)); // we remove the '#'
+ if (!elem || !isHidden(elem)) {
+ return;
+ }
+ var parent = elem.parentNode;
+ if (parent && hasClass(parent, "impl-items")) {
+ // In case this is a trait implementation item, we first need to toggle
+ // the "Show hidden undocumented items".
+ onEachLazy(parent.getElementsByClassName("collapsed"), function(e) {
+ if (e.parentNode === parent) {
+ // Only click on the toggle we're looking for.
+ e.click();
+ return true;
+ }
+ });
+ if (isHidden(elem)) {
+ // The whole parent is collapsed. We need to click on its toggle as well!
+ if (hasClass(parent.lastElementChild, "collapse-toggle")) {
+ parent.lastElementChild.click();
+ }
+ }
+ }
+ }
}
function highlightSourceLines(match, ev) {
}
}
+ function onHashChange(ev) {
+ // If we're in mobile mode, we should hide the sidebar in any case.
+ hideSidebar();
+ var match = window.location.hash.match(/^#?(\d+)(?:-(\d+))?$/);
+ if (match) {
+ return highlightSourceLines(match, ev);
+ }
+ handleHashes(ev);
+ }
+
function expandSection(id) {
var elem = document.getElementById(id);
if (elem && isHidden(elem)) {
}
}
- highlightSourceLines();
- window.onhashchange = onHashChange;
-
// Gets the human-readable string for the virtual-key code of the
// given KeyboardEvent, ev.
//
function autoCollapse(pageId, collapse) {
if (collapse) {
toggleAllDocs(pageId, true);
- } else if (getCurrentValue("rustdoc-trait-implementations") !== "false") {
+ } else if (getCurrentValue("rustdoc-auto-hide-trait-implementations") !== "false") {
var impl_list = document.getElementById("implementations-list");
if (impl_list !== null) {
}
var toggle = createSimpleToggle(false);
- var hideMethodDocs = getCurrentValue("rustdoc-method-docs") === "true";
+ var hideMethodDocs = getCurrentValue("rustdoc-auto-hide-method-docs") === "true";
var pageId = getPageId();
var func = function(e) {
return wrapper;
}
- var showItemDeclarations = getCurrentValue("rustdoc-item-declarations") === "false";
+ var currentType = document.getElementsByClassName("type-decl")[0];
+ var className = null;
+ if (currentType) {
+ currentType = currentType.getElementsByClassName("rust")[0];
+ if (currentType) {
+ currentType.classList.forEach(function(item) {
+ if (item !== "main") {
+ className = item;
+ return true;
+ }
+ });
+ }
+ }
+ var showItemDeclarations = getCurrentValue("rustdoc-auto-hide-" + className);
+ if (showItemDeclarations === null) {
+ if (className === "enum" || className === "macro") {
+ showItemDeclarations = "false";
+ } else if (className === "struct" || className === "union" || className === "trait") {
+ showItemDeclarations = "true";
+ } else {
+ // In case we found an unknown type, we just use the "parent" value.
+ showItemDeclarations = getCurrentValue("rustdoc-auto-hide-declarations");
+ }
+ }
+ showItemDeclarations = showItemDeclarations === "false";
function buildToggleWrapper(e) {
if (hasClass(e, "autohide")) {
var wrap = e.previousElementSibling;
// To avoid checking on "rustdoc-item-attributes" value on every loop...
var itemAttributesFunc = function() {};
- if (getCurrentValue("rustdoc-item-attributes") !== "false") {
+ if (getCurrentValue("rustdoc-auto-hide-attributes") !== "false") {
itemAttributesFunc = function(x) {
collapseDocs(x.previousSibling.childNodes[0], "toggle");
};
insertAfter(popup, getSearchElement());
}
+ onHashChange();
+ window.onhashchange = onHashChange;
+
buildHelperPopup();
}());
.setting-line {
padding: 5px;
+ position: relative;
}
.setting-line > div {
padding-top: 2px;
}
+.setting-line > .title {
+ font-size: 19px;
+ width: 100%;
+ max-width: none;
+ border-bottom: 1px solid;
+}
+
.toggle {
position: relative;
display: inline-block;
-ms-transform: translateX(19px);
transform: translateX(19px);
}
+
+.setting-line > .sub-settings {
+ padding-left: 42px;
+ width: 100%;
+ display: block;
+}
elem.classList.remove(className);
}
-function isHidden(elem) {
- return elem.offsetParent === null;
-}
-
function onEach(arr, func, reversed) {
if (arr && arr.length > 0 && func) {
var length = arr.length;
div.files > .selected {
background-color: #333;
}
+.setting-line > .title {
+ border-bottom-color: #ddd;
+}
div.files > .selected {
background-color: #fff;
}
+.setting-line > .title {
+ border-bottom-color: #D5D5D5;
+}
}),
stable("cfg", |o| o.optmulti("", "cfg", "pass a --cfg to rustc", "")),
stable("extern", |o| {
- o.optmulti("", "extern", "pass an --extern to rustc", "NAME=PATH")
+ o.optmulti("", "extern", "pass an --extern to rustc", "NAME[=PATH]")
}),
unstable("extern-html-root-url", |o| {
o.optmulti("", "extern-html-root-url",
use errors::Applicability;
use syntax::parse::lexer::{StringReader as Lexer};
-use syntax::parse::token;
+use syntax::token;
use syntax::sess::ParseSess;
use syntax::source_map::FilePathMapping;
use syntax_pos::{InnerSpan, FileName};
/// # Examples
///
/// ```
- /// let five = 5.0_f64;
+ /// let twenty_five = 25.0_f64;
///
- /// // log5(5) - 1 == 0
- /// let abs_difference = (five.log(5.0) - 1.0).abs();
+ /// // log5(25) - 2 == 0
+ /// let abs_difference = (twenty_five.log(5.0) - 2.0).abs();
///
/// assert!(abs_difference < 1e-10);
/// ```
/// # Examples
///
/// ```
- /// let two = 2.0_f64;
+ /// let four = 4.0_f64;
///
- /// // log2(2) - 1 == 0
- /// let abs_difference = (two.log2() - 1.0).abs();
+ /// // log2(4) - 2 == 0
+ /// let abs_difference = (four.log2() - 2.0).abs();
///
/// assert!(abs_difference < 1e-10);
/// ```
/// # Examples
///
/// ```
- /// let ten = 10.0_f64;
+ /// let hundred = 100.0_f64;
///
- /// // log10(10) - 1 == 0
- /// let abs_difference = (ten.log10() - 1.0).abs();
+ /// // log10(100) - 2 == 0
+ /// let abs_difference = (hundred.log10() - 2.0).abs();
///
/// assert!(abs_difference < 1e-10);
/// ```
///
/// When the `BufReader<R>` is dropped, the contents of its buffer will be
/// discarded. Creating multiple instances of a `BufReader<R>` on the same
-/// stream can cause data loss.
+/// stream can cause data loss. Reading from the underlying reader after
+/// unwrapping the `BufReader<R>` with `BufReader::into_inner` can also cause
+/// data loss.
///
/// [`Read`]: ../../std/io/trait.Read.html
/// [`TcpStream::read`]: ../../std/net/struct.TcpStream.html#method.read
/// Unwraps this `BufReader<R>`, returning the underlying reader.
///
- /// Note that any leftover data in the internal buffer is lost.
+ /// Note that any leftover data in the internal buffer is lost. Therefore,
+ /// a subsequent read from the underlying reader may lead to data loss.
///
/// # Examples
///
#![feature(never_type)]
#![feature(nll)]
#![cfg_attr(bootstrap, feature(non_exhaustive))]
-#![feature(on_unimplemented)]
+#![cfg_attr(bootstrap, feature(on_unimplemented))]
#![feature(optin_builtin_traits)]
#![feature(panic_info_message)]
#![feature(panic_internals)]
// FIXME(#10380) these tests should not all be ignored on android.
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn smoke() {
let p = if cfg!(target_os = "windows") {
Command::new("cmd").args(&["/C", "exit 0"]).spawn()
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn exit_reported_right() {
let p = if cfg!(target_os = "windows") {
Command::new("cmd").args(&["/C", "exit 1"]).spawn()
#[test]
#[cfg(unix)]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn signal_reported_right() {
use crate::os::unix::process::ExitStatusExt;
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn stdout_works() {
if cfg!(target_os = "windows") {
let mut cmd = Command::new("cmd");
}
#[test]
- #[cfg_attr(any(windows, target_os = "android"), ignore)]
+ #[cfg_attr(any(windows, target_os = "android", target_os = "vxworks"), ignore)]
fn set_current_dir_works() {
let mut cmd = Command::new("/bin/sh");
cmd.arg("-c").arg("pwd")
}
#[test]
- #[cfg_attr(any(windows, target_os = "android"), ignore)]
+ #[cfg_attr(any(windows, target_os = "android", target_os = "vxworks"), ignore)]
fn stdin_works() {
let mut p = Command::new("/bin/sh")
.arg("-c").arg("read line; echo $line")
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn test_process_status() {
let mut status = if cfg!(target_os = "windows") {
Command::new("cmd").args(&["/C", "exit 1"]).status().unwrap()
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn test_process_output_output() {
let Output {status, stdout, stderr}
= if cfg!(target_os = "windows") {
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn test_process_output_error() {
let Output {status, stdout, stderr}
= if cfg!(target_os = "windows") {
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn test_finish_once() {
let mut prog = if cfg!(target_os = "windows") {
Command::new("cmd").args(&["/C", "exit 1"]).spawn().unwrap()
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn test_finish_twice() {
let mut prog = if cfg!(target_os = "windows") {
Command::new("cmd").args(&["/C", "exit 1"]).spawn().unwrap()
}
#[test]
- #[cfg_attr(target_os = "android", ignore)]
+ #[cfg_attr(any(target_os = "vxworks", target_os = "android"), ignore)]
fn test_wait_with_output_once() {
let prog = if cfg!(target_os = "windows") {
Command::new("cmd").args(&["/C", "echo hello"]).stdout(Stdio::piped()).spawn().unwrap()
}
#[test]
+ #[cfg_attr(target_os = "vxworks", ignore)]
fn test_override_env() {
use crate::env;
}
#[test]
+ #[cfg_attr(target_os = "vxworks", ignore)]
fn test_add_to_env() {
let result = env_cmd().env("RUN_TEST_NEW_ENV", "123").output().unwrap();
let output = String::from_utf8_lossy(&result.stdout).to_string();
}
#[test]
+ #[cfg_attr(target_os = "vxworks", ignore)]
fn test_capture_env_at_spawn() {
use crate::env;
// Regression tests for #30862.
#[test]
+ #[cfg_attr(target_os = "vxworks", ignore)]
fn test_interior_nul_in_env_key_is_error() {
match env_cmd().env("has-some-\0\0s-inside", "value").spawn() {
Err(e) => assert_eq!(e.kind(), ErrorKind::InvalidInput),
}
#[test]
+ #[cfg_attr(target_os = "vxworks", ignore)]
fn test_interior_nul_in_env_value_is_error() {
match env_cmd().env("key", "has-some-\0\0s-inside").spawn() {
Err(e) => assert_eq!(e.kind(), ErrorKind::InvalidInput),
// (main function exists). If this is a library, the crate author should be
// able to specify this
#[cfg(not(test))]
+#[allow(improper_ctypes)]
#[no_mangle]
extern "C" fn entry(p1: u64, p2: u64, p3: u64, secondary: bool, p4: u64, p5: u64) -> (u64, u64) {
// FIXME: how to support TLS in library mode?
#[cfg(not(test))]
#[no_mangle]
+#[allow(improper_ctypes)]
pub unsafe extern "C" fn __rust_rwlock_rdlock(p: *mut RWLock) -> i32 {
if p.is_null() {
return EINVAL;
}
#[cfg(not(test))]
+#[allow(improper_ctypes)]
#[no_mangle]
pub unsafe extern "C" fn __rust_rwlock_wrlock(p: *mut RWLock) -> i32 {
if p.is_null() {
return 0;
}
#[cfg(not(test))]
+#[allow(improper_ctypes)]
#[no_mangle]
pub unsafe extern "C" fn __rust_rwlock_unlock(p: *mut RWLock) -> i32 {
if p.is_null() {
#[allow_internal_unstable(thread_local_internals, cfg_target_thread_local, thread_local)]
#[allow_internal_unsafe]
macro_rules! __thread_local_inner {
- (@key $(#[$attr:meta])* $vis:vis $name:ident, $t:ty, $init:expr) => {
+ (@key $t:ty, $init:expr) => {
{
#[inline]
fn __init() -> $t { $init }
};
($(#[$attr:meta])* $vis:vis $name:ident, $t:ty, $init:expr) => {
$(#[$attr])* $vis const $name: $crate::thread::LocalKey<$t> =
- $crate::__thread_local_inner!(@key $(#[$attr])* $vis $name, $t, $init);
+ $crate::__thread_local_inner!(@key $t, $init);
}
}
///
/// Indicates the manner in which a thread exited.
///
+/// The value contained in the `Result::Err` variant
+/// is the value the thread panicked with;
+/// that is, the argument the `panic!` macro was called with.
+/// Unlike with normal errors, this value doesn't implement
+/// the [`Error`](crate::error::Error) trait.
+///
+/// Thus, a sensible way to handle a thread panic is to either:
+/// 1. `unwrap` the `Result<T>`, propagating the panic, or
+/// 2. in case the thread is intended to be a subsystem boundary
+/// that is supposed to isolate system-level failures,
+/// match on the `Err` variant and handle the panic in an appropriate way.
+///
/// A thread that completes without panicking is considered to exit successfully.
///
/// # Examples
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_index = { path = "../librustc_index" }
rustc_lexer = { path = "../librustc_lexer" }
-rustc_target = { path = "../librustc_target" }
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
pub use UnsafeSource::*;
pub use crate::util::parser::ExprPrecedence;
-pub use rustc_target::abi::FloatTy;
pub use syntax_pos::symbol::{Ident, Symbol as Name};
-use crate::parse::token::{self, DelimToken};
use crate::ptr::P;
use crate::source_map::{dummy_spanned, respan, Spanned};
+use crate::token::{self, DelimToken};
use crate::tokenstream::TokenStream;
use syntax_pos::symbol::{kw, sym, Symbol};
use rustc_data_structures::thin_vec::ThinVec;
use rustc_index::vec::Idx;
use rustc_serialize::{self, Decoder, Encoder};
-use rustc_target::spec::abi::Abi;
#[cfg(target_arch = "x86_64")]
use rustc_data_structures::static_assert_size;
// Clippy uses Hash and PartialEq
/// Type of the integer literal based on provided suffix.
-#[derive(Clone, RustcEncodable, RustcDecodable, Debug, Copy, Hash, PartialEq)]
+#[derive(Clone, Copy, RustcEncodable, RustcDecodable, Debug, Hash, PartialEq)]
pub enum LitIntType {
/// e.g. `42_i32`.
Signed(IntTy),
Unsuffixed,
}
+/// Type of the float literal based on provided suffix.
+#[derive(Clone, Copy, RustcEncodable, RustcDecodable, Debug, Hash, PartialEq)]
+pub enum LitFloatType {
+ /// A float literal with a suffix (`1f32` or `1E10f32`).
+ Suffixed(FloatTy),
+ /// A float literal without a suffix (`1.0` or `1.0E10`).
+ Unsuffixed,
+}
+
/// Literal kind.
///
/// E.g., `"foo"`, `42`, `12.34`, or `bool`.
/// An integer literal (`1`).
Int(u128, LitIntType),
/// A float literal (`1f64` or `1E10f64`).
- Float(Symbol, FloatTy),
- /// A float literal without a suffix (`1.0 or 1.0E10`).
- FloatUnsuffixed(Symbol),
+ Float(Symbol, LitFloatType),
/// A boolean literal.
Bool(bool),
/// Placeholder for a literal that wasn't well-formed in some way.
/// Returns `true` if this is a numeric literal.
pub fn is_numeric(&self) -> bool {
match *self {
- LitKind::Int(..) | LitKind::Float(..) | LitKind::FloatUnsuffixed(..) => true,
+ LitKind::Int(..) | LitKind::Float(..) => true,
_ => false,
}
}
// suffixed variants
LitKind::Int(_, LitIntType::Signed(..))
| LitKind::Int(_, LitIntType::Unsigned(..))
- | LitKind::Float(..) => true,
+ | LitKind::Float(_, LitFloatType::Suffixed(..)) => true,
// unsuffixed variants
LitKind::Str(..)
| LitKind::ByteStr(..)
| LitKind::Byte(..)
| LitKind::Char(..)
| LitKind::Int(_, LitIntType::Unsuffixed)
- | LitKind::FloatUnsuffixed(..)
+ | LitKind::Float(_, LitFloatType::Unsuffixed)
| LitKind::Bool(..)
| LitKind::Err(..) => false,
}
Macro(Mac),
}
-#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, RustcEncodable, RustcDecodable, Copy)]
+#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, RustcEncodable, RustcDecodable, Debug)]
+pub enum FloatTy {
+ F32,
+ F64,
+}
+
+impl FloatTy {
+ pub fn name_str(self) -> &'static str {
+ match self {
+ FloatTy::F32 => "f32",
+ FloatTy::F64 => "f64",
+ }
+ }
+
+ pub fn name(self) -> Symbol {
+ match self {
+ FloatTy::F32 => sym::f32,
+ FloatTy::F64 => sym::f64,
+ }
+ }
+
+ pub fn bit_width(self) -> usize {
+ match self {
+ FloatTy::F32 => 32,
+ FloatTy::F64 => 64,
+ }
+ }
+}
+
+#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, RustcEncodable, RustcDecodable, Debug)]
pub enum IntTy {
Isize,
I8,
I128,
}
-impl fmt::Debug for IntTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Display::fmt(self, f)
- }
-}
-
-impl fmt::Display for IntTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "{}", self.ty_to_string())
- }
-}
-
impl IntTy {
- pub fn ty_to_string(&self) -> &'static str {
+ pub fn name_str(&self) -> &'static str {
match *self {
IntTy::Isize => "isize",
IntTy::I8 => "i8",
}
}
- pub fn to_symbol(&self) -> Symbol {
+ pub fn name(&self) -> Symbol {
match *self {
IntTy::Isize => sym::isize,
IntTy::I8 => sym::i8,
// Cast to a `u128` so we can correctly print `INT128_MIN`. All integral types
// are parsed as `u128`, so we wouldn't want to print an extra negative
// sign.
- format!("{}{}", val as u128, self.ty_to_string())
+ format!("{}{}", val as u128, self.name_str())
}
pub fn bit_width(&self) -> Option<usize> {
}
}
-#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, RustcEncodable, RustcDecodable, Copy)]
+#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, RustcEncodable, RustcDecodable, Copy, Debug)]
pub enum UintTy {
Usize,
U8,
}
impl UintTy {
- pub fn ty_to_string(&self) -> &'static str {
+ pub fn name_str(&self) -> &'static str {
match *self {
UintTy::Usize => "usize",
UintTy::U8 => "u8",
}
}
- pub fn to_symbol(&self) -> Symbol {
+ pub fn name(&self) -> Symbol {
match *self {
UintTy::Usize => sym::usize,
UintTy::U8 => sym::u8,
}
pub fn val_to_string(&self, val: u128) -> String {
- format!("{}{}", val, self.ty_to_string())
+ format!("{}{}", val, self.name_str())
}
pub fn bit_width(&self) -> Option<usize> {
}
}
-impl fmt::Debug for UintTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- fmt::Display::fmt(self, f)
- }
-}
-
-impl fmt::Display for UintTy {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "{}", self.ty_to_string())
- }
-}
-
/// A constraint on an associated type (e.g., `A = Bar` in `Foo<A = Bar>` or
/// `A: TraitA + TraitB` in `Foo<A: TraitA + TraitB>`).
#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
}
/// Metadata associated with an item.
-/// Doc-comments are promoted to attributes that have `is_sugared_doc = true`.
#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
pub struct Attribute {
- pub item: AttrItem,
+ pub kind: AttrKind,
pub id: AttrId,
/// Denotes if the attribute decorates the following construct (outer)
/// or the construct this attribute is contained within (inner).
pub style: AttrStyle,
- pub is_sugared_doc: bool,
pub span: Span,
}
-// Compatibility impl to avoid churn, consider removing.
-impl std::ops::Deref for Attribute {
- type Target = AttrItem;
- fn deref(&self) -> &Self::Target { &self.item }
+#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
+pub enum AttrKind {
+ /// A normal attribute.
+ Normal(AttrItem),
+
+ /// A doc comment (e.g. `/// ...`, `//! ...`, `/** ... */`, `/*! ... */`).
+ /// Doc attributes (e.g. `#[doc="..."]`) are represented with the `Normal`
+ /// variant (which is much less compact and thus more expensive).
+ ///
+ /// Note: `self.has_name(sym::doc)` and `self.check_name(sym::doc)` succeed
+ /// for this variant, but this may change in the future.
+ DocComment(Symbol),
}
/// `TraitRef`s appear in impls.
}
}
+/// A reference to an ABI.
+///
+/// In the AST, our notion of an ABI is still syntactic, unlike in `rustc_target::spec::abi::Abi`.
+#[derive(Clone, Copy, RustcEncodable, RustcDecodable, Debug, PartialEq)]
+pub struct Abi {
+ pub symbol: Symbol,
+ pub span: Span,
+}
+
+impl Abi {
+ pub fn new(symbol: Symbol, span: Span) -> Self {
+ Self { symbol, span }
+ }
+}
+
+impl Default for Abi {
+ fn default() -> Self {
+ Self::new(sym::Rust, DUMMY_SP)
+ }
+}
+
/// A function header.
///
/// All the information between the visibility and the name of the function is
unsafety: Unsafety::Normal,
asyncness: dummy_spanned(IsAsync::NotAsync),
constness: dummy_spanned(Constness::NotConst),
- abi: Abi::Rust,
+ abi: Abi::default(),
}
}
}
sym::stable,
sym::rustc_promotable,
sym::rustc_allow_const_fn_ptr,
- ].iter().any(|&s| attr.path == s) {
+ ].iter().any(|&s| attr.has_name(s)) {
continue // not a stability level
}
let meta = attr.meta();
- if attr.path == sym::rustc_promotable {
+ if attr.has_name(sym::rustc_promotable) {
promotable = true;
}
- if attr.path == sym::rustc_allow_const_fn_ptr {
+ if attr.has_name(sym::rustc_allow_const_fn_ptr) {
allow_const_fn_ptr = true;
}
// attributes with data
let mut acc = Vec::new();
let diagnostic = &sess.span_diagnostic;
- if attr.path == sym::repr {
+ if attr.has_name(sym::repr) {
if let Some(items) = attr.meta_item_list() {
mark_used(attr);
for item in items {
pub use crate::ast::Attribute;
use crate::ast;
-use crate::ast::{AttrItem, AttrId, AttrStyle, Name, Ident, Path, PathSegment};
+use crate::ast::{AttrItem, AttrId, AttrKind, AttrStyle, Name, Ident, Path, PathSegment};
use crate::ast::{MetaItem, MetaItemKind, NestedMetaItem};
use crate::ast::{Lit, LitKind, Expr, Item, Local, Stmt, StmtKind, GenericParam};
use crate::mut_visit::visit_clobber;
use crate::source_map::{BytePos, Spanned};
-use crate::parse::lexer::comments::doc_comment_style;
use crate::parse;
-use crate::parse::PResult;
-use crate::parse::token::{self, Token};
+use crate::token::{self, Token};
use crate::ptr::P;
use crate::sess::ParseSess;
use crate::symbol::{sym, Symbol};
use crate::tokenstream::{DelimSpan, TokenStream, TokenTree, TreeAndJoint};
use crate::GLOBALS;
+use errors::PResult;
+
use log::debug;
use syntax_pos::Span;
}
impl Attribute {
+ pub fn has_name(&self, name: Symbol) -> bool {
+ match self.kind {
+ AttrKind::Normal(ref item) => item.path == name,
+ AttrKind::DocComment(_) => name == sym::doc,
+ }
+ }
+
/// Returns `true` if the attribute's path matches the argument. If it matches, then the
/// attribute is marked as used.
- ///
- /// To check the attribute name without marking it used, use the `path` field directly.
pub fn check_name(&self, name: Symbol) -> bool {
- let matches = self.path == name;
+ let matches = self.has_name(name);
if matches {
mark_used(self);
}
/// For a single-segment attribute, returns its name; otherwise, returns `None`.
pub fn ident(&self) -> Option<Ident> {
- if self.path.segments.len() == 1 {
- Some(self.path.segments[0].ident)
- } else {
- None
+ match self.kind {
+ AttrKind::Normal(ref item) => {
+ if item.path.segments.len() == 1 {
+ Some(item.path.segments[0].ident)
+ } else {
+ None
+ }
+ }
+ AttrKind::DocComment(_) => Some(Ident::new(sym::doc, self.span)),
}
}
pub fn name_or_empty(&self) -> Symbol {
}
pub fn value_str(&self) -> Option<Symbol> {
- self.meta().and_then(|meta| meta.value_str())
+ match self.kind {
+ AttrKind::Normal(ref item) => {
+ item.meta(self.span).and_then(|meta| meta.value_str())
+ }
+ AttrKind::DocComment(comment) => Some(comment),
+ }
}
pub fn meta_item_list(&self) -> Option<Vec<NestedMetaItem>> {
- match self.meta() {
- Some(MetaItem { kind: MetaItemKind::List(list), .. }) => Some(list),
- _ => None
+ match self.kind {
+ AttrKind::Normal(ref item) => {
+ match item.meta(self.span) {
+ Some(MetaItem { kind: MetaItemKind::List(list), .. }) => Some(list),
+ _ => None
+ }
+ }
+ AttrKind::DocComment(_) => None,
}
}
pub fn is_word(&self) -> bool {
- self.tokens.is_empty()
+ if let AttrKind::Normal(item) = &self.kind {
+ item.tokens.is_empty()
+ } else {
+ false
+ }
}
pub fn is_meta_item_list(&self) -> bool {
}
impl Attribute {
+ pub fn is_doc_comment(&self) -> bool {
+ match self.kind {
+ AttrKind::Normal(_) => false,
+ AttrKind::DocComment(_) => true,
+ }
+ }
+
+ pub fn get_normal_item(&self) -> &AttrItem {
+ match self.kind {
+ AttrKind::Normal(ref item) => item,
+ AttrKind::DocComment(_) => panic!("unexpected sugared doc"),
+ }
+ }
+
+ pub fn unwrap_normal_item(self) -> AttrItem {
+ match self.kind {
+ AttrKind::Normal(item) => item,
+ AttrKind::DocComment(_) => panic!("unexpected sugared doc"),
+ }
+ }
+
/// Extracts the MetaItem from inside this Attribute.
pub fn meta(&self) -> Option<MetaItem> {
- self.item.meta(self.span)
+ match self.kind {
+ AttrKind::Normal(ref item) => item.meta(self.span),
+ AttrKind::DocComment(comment) =>
+ Some(mk_name_value_item_str(Ident::new(sym::doc, self.span), comment, self.span)),
+ }
}
pub fn parse_meta<'a>(&self, sess: &'a ParseSess) -> PResult<'a, MetaItem> {
- Ok(MetaItem {
- path: self.path.clone(),
- kind: parse::parse_in_attr(sess, self, |p| p.parse_meta_item_kind())?,
- span: self.span,
- })
+ match self.kind {
+ AttrKind::Normal(ref item) => {
+ Ok(MetaItem {
+ path: item.path.clone(),
+ kind: parse::parse_in_attr(sess, self, |parser| parser.parse_meta_item_kind())?,
+ span: self.span,
+ })
+ }
+ AttrKind::DocComment(comment) => {
+ Ok(mk_name_value_item_str(Ident::new(sym::doc, self.span), comment, self.span))
+ }
+ }
}
}
pub fn mk_attr(style: AttrStyle, path: Path, tokens: TokenStream, span: Span) -> Attribute {
Attribute {
- item: AttrItem { path, tokens },
+ kind: AttrKind::Normal(AttrItem { path, tokens }),
id: mk_attr_id(),
style,
- is_sugared_doc: false,
span,
}
}
mk_attr(AttrStyle::Outer, item.path, item.kind.tokens(item.span), item.span)
}
-pub fn mk_sugared_doc_attr(text: Symbol, span: Span) -> Attribute {
- let style = doc_comment_style(&text.as_str());
- let lit_kind = LitKind::Str(text, ast::StrStyle::Cooked);
- let lit = Lit::from_lit_kind(lit_kind, span);
+pub fn mk_doc_comment(style: AttrStyle, comment: Symbol, span: Span) -> Attribute {
Attribute {
- item: AttrItem {
- path: Path::from_ident(Ident::with_dummy_span(sym::doc).with_span_pos(span)),
- tokens: MetaItemKind::NameValue(lit).tokens(span),
- },
+ kind: AttrKind::DocComment(comment),
id: mk_attr_id(),
style,
- is_sugared_doc: true,
span,
}
}
/// is in the original source file. Gives a compiler error if the syntax of
/// the attribute is incorrect.
fn process_cfg_attr(&mut self, attr: ast::Attribute) -> Vec<ast::Attribute> {
- if attr.path != sym::cfg_attr {
+ if !attr.has_name(sym::cfg_attr) {
return vec![attr];
}
- if attr.tokens.is_empty() {
+ if attr.get_normal_item().tokens.is_empty() {
self.sess.span_diagnostic
.struct_span_err(
attr.span,
// `#[cfg_attr(false, cfg_attr(true, some_attr))]`.
expanded_attrs.into_iter()
.flat_map(|(item, span)| self.process_cfg_attr(ast::Attribute {
- item,
+ kind: ast::AttrKind::Normal(item),
id: attr::mk_attr_id(),
style: attr.style,
- is_sugared_doc: false,
span,
}))
.collect()
GateIssue::Language,
EXPLAIN_STMT_ATTR_SYNTAX);
- if attr.is_sugared_doc {
+ if attr.is_doc_comment() {
err.help("`///` is for documentation comments. For a plain comment, use `//`.");
}
E0630,
E0693, // incorrect `repr(align)` attribute format
// E0694, // an unknown tool name found in scoped attributes
- E0703, // invalid ABI
E0717, // rustc_promotable without stability attribute
}
/// Allows using `rustc_*` attributes (RFC 572).
(active, rustc_attrs, "1.0.0", Some(29642), None),
- /// Allows using `#[on_unimplemented(..)]` on traits.
- (active, on_unimplemented, "1.0.0", Some(29628), None),
-
/// Allows using the `box $expr` syntax.
(active, box_syntax, "1.0.0", Some(49733), None),
}
const IMPL_DETAIL: &str = "internal implementation detail";
-const INTERAL_UNSTABLE: &str = "this is an internal attribute that will never be stable";
+const INTERNAL_UNSTABLE: &str = "this is an internal attribute that will never be stable";
pub type BuiltinAttribute = (Symbol, AttributeType, AttributeTemplate, AttributeGate);
linkage, Whitelisted, template!(NameValueStr: "external|internal|..."),
"the `linkage` attribute is experimental and not portable across platforms",
),
- rustc_attr!(rustc_std_internal_symbol, Whitelisted, template!(Word), INTERAL_UNSTABLE),
+ rustc_attr!(rustc_std_internal_symbol, Whitelisted, template!(Word), INTERNAL_UNSTABLE),
// ==========================================================================
// Internal attributes, Macro related:
// ==========================================================================
rustc_attr!(rustc_builtin_macro, Whitelisted, template!(Word), IMPL_DETAIL),
- rustc_attr!(rustc_proc_macro_decls, Normal, template!(Word), INTERAL_UNSTABLE),
+ rustc_attr!(rustc_proc_macro_decls, Normal, template!(Word), INTERNAL_UNSTABLE),
rustc_attr!(
rustc_macro_transparency, Whitelisted,
template!(NameValueStr: "transparent|semitransparent|opaque"),
// Internal attributes, Diagnostics related:
// ==========================================================================
- gated!(
+ rustc_attr!(
rustc_on_unimplemented, Whitelisted,
template!(
List: r#"/*opt*/ message = "...", /*opt*/ label = "...", /*opt*/ note = "...""#,
NameValueStr: "message"
),
- on_unimplemented,
- experimental!(rustc_on_unimplemented),
+ INTERNAL_UNSTABLE
),
// Whitelists "identity-like" conversion methods to suggest on type mismatch.
- rustc_attr!(rustc_conversion_suggestion, Whitelisted, template!(Word), INTERAL_UNSTABLE),
+ rustc_attr!(rustc_conversion_suggestion, Whitelisted, template!(Word), INTERNAL_UNSTABLE),
// ==========================================================================
// Internal attributes, Const related:
rustc_attr!(rustc_promotable, Whitelisted, template!(Word), IMPL_DETAIL),
rustc_attr!(rustc_allow_const_fn_ptr, Whitelisted, template!(Word), IMPL_DETAIL),
- rustc_attr!(rustc_args_required_const, Whitelisted, template!(List: "N"), INTERAL_UNSTABLE),
+ rustc_attr!(rustc_args_required_const, Whitelisted, template!(List: "N"), INTERNAL_UNSTABLE),
// ==========================================================================
// Internal attributes, Layout related:
use crate::source_map::Spanned;
use crate::edition::{ALL_EDITIONS, Edition};
use crate::visit::{self, FnKind, Visitor};
-use crate::parse::token;
+use crate::token;
use crate::sess::ParseSess;
use crate::symbol::{Symbol, sym};
use crate::tokenstream::TokenTree;
use errors::{Applicability, DiagnosticBuilder, Handler};
use rustc_data_structures::fx::FxHashMap;
-use rustc_target::spec::abi::Abi;
use syntax_pos::{Span, DUMMY_SP, MultiSpan};
use log::debug;
}
impl<'a> PostExpansionVisitor<'a> {
- fn check_abi(&self, abi: Abi, span: Span) {
- match abi {
- Abi::RustIntrinsic => {
+ fn check_abi(&self, abi: ast::Abi) {
+ let ast::Abi { symbol, span } = abi;
+
+ match &*symbol.as_str() {
+ // Stable
+ "Rust" |
+ "C" |
+ "cdecl" |
+ "stdcall" |
+ "fastcall" |
+ "aapcs" |
+ "win64" |
+ "sysv64" |
+ "system" => {}
+ "rust-intrinsic" => {
gate_feature_post!(&self, intrinsics, span,
"intrinsics are subject to change");
},
- Abi::PlatformIntrinsic => {
+ "platform-intrinsic" => {
gate_feature_post!(&self, platform_intrinsics, span,
"platform intrinsics are experimental and possibly buggy");
},
- Abi::Vectorcall => {
+ "vectorcall" => {
gate_feature_post!(&self, abi_vectorcall, span,
"vectorcall is experimental and subject to change");
},
- Abi::Thiscall => {
+ "thiscall" => {
gate_feature_post!(&self, abi_thiscall, span,
"thiscall is experimental and subject to change");
},
- Abi::RustCall => {
+ "rust-call" => {
gate_feature_post!(&self, unboxed_closures, span,
"rust-call ABI is subject to change");
},
- Abi::PtxKernel => {
+ "ptx-kernel" => {
gate_feature_post!(&self, abi_ptx, span,
"PTX ABIs are experimental and subject to change");
},
- Abi::Unadjusted => {
+ "unadjusted" => {
gate_feature_post!(&self, abi_unadjusted, span,
"unadjusted ABI is an implementation detail and perma-unstable");
},
- Abi::Msp430Interrupt => {
+ "msp430-interrupt" => {
gate_feature_post!(&self, abi_msp430_interrupt, span,
"msp430-interrupt ABI is experimental and subject to change");
},
- Abi::X86Interrupt => {
+ "x86-interrupt" => {
gate_feature_post!(&self, abi_x86_interrupt, span,
"x86-interrupt ABI is experimental and subject to change");
},
- Abi::AmdGpuKernel => {
+ "amdgpu-kernel" => {
gate_feature_post!(&self, abi_amdgpu_kernel, span,
"amdgpu-kernel ABI is experimental and subject to change");
},
- Abi::EfiApi => {
+ "efiapi" => {
gate_feature_post!(&self, abi_efiapi, span,
"efiapi ABI is experimental and subject to change");
},
- // Stable
- Abi::Cdecl |
- Abi::Stdcall |
- Abi::Fastcall |
- Abi::Aapcs |
- Abi::Win64 |
- Abi::SysV64 |
- Abi::Rust |
- Abi::C |
- Abi::System => {}
+ abi => {
+ self.parse_sess.span_diagnostic.delay_span_bug(
+ span,
+ &format!("unrecognized ABI not caught in lowering: {}", abi),
+ )
+ }
}
}
// `rustc_dummy` doesn't have any restrictions specific to built-in attributes.
Some((name, _, template, _)) if name != sym::rustc_dummy =>
check_builtin_attribute(self.parse_sess, attr, name, template),
- _ => if let Some(TokenTree::Token(token)) = attr.tokens.trees().next() {
+ _ => if let Some(TokenTree::Token(token)) =
+ attr.get_normal_item().tokens.trees().next() {
if token == token::Eq {
// All key-value attributes are restricted to meta-item syntax.
attr.parse_meta(self.parse_sess).map_err(|mut err| err.emit()).ok();
fn visit_item(&mut self, i: &'a ast::Item) {
match i.kind {
ast::ItemKind::ForeignMod(ref foreign_module) => {
- self.check_abi(foreign_module.abi, i.span);
+ self.check_abi(foreign_module.abi);
}
ast::ItemKind::Fn(..) => {
fn visit_ty(&mut self, ty: &'a ast::Ty) {
match ty.kind {
ast::TyKind::BareFn(ref bare_fn_ty) => {
- self.check_abi(bare_fn_ty.abi, ty.span);
+ self.check_abi(bare_fn_ty.abi);
}
ast::TyKind::Never => {
gate_feature_post!(&self, never_type, ty.span,
// Stability of const fn methods are covered in
// `visit_trait_item` and `visit_impl_item` below; this is
// because default methods don't pass through this point.
- self.check_abi(header.abi, span);
+ self.check_abi(header.abi);
}
if fn_decl.c_variadic() {
match ti.kind {
ast::TraitItemKind::Method(ref sig, ref block) => {
if block.is_none() {
- self.check_abi(sig.header.abi, ti.span);
+ self.check_abi(sig.header.abi);
}
if sig.decl.c_variadic() {
gate_feature_post!(&self, c_variadic, ti.span,
maybe_stage_features(&parse_sess.span_diagnostic, krate, unstable);
let mut visitor = PostExpansionVisitor { parse_sess, features };
+ let spans = parse_sess.gated_spans.spans.borrow();
macro_rules! gate_all {
- ($gate:ident, $msg:literal) => { gate_all!($gate, $gate, $msg); };
- ($spans:ident, $gate:ident, $msg:literal) => {
- for span in &*parse_sess.gated_spans.$spans.borrow() {
+ ($gate:ident, $msg:literal) => {
+ for span in spans.get(&sym::$gate).unwrap_or(&vec![]) {
gate_feature!(&visitor, $gate, *span, $msg);
}
}
}
-
gate_all!(let_chains, "`let` expressions in this position are experimental");
gate_all!(async_closure, "async closures are unstable");
- gate_all!(yields, generators, "yield syntax is experimental");
+ gate_all!(generators, "yield syntax is experimental");
gate_all!(or_patterns, "or-patterns syntax is experimental");
gate_all!(const_extern_fn, "`const extern fn` definitions are unstable");
// FIXME(eddyb) do something more useful than always
// disabling these uses of early feature-gatings.
if false {
- for span in &*parse_sess.gated_spans.$gate.borrow() {
+ for span in spans.get(&sym::$gate).unwrap_or(&vec![]) {
gate_feature!(&visitor, $gate, *span, $msg);
}
}
gate_all!(try_blocks, "`try` blocks are unstable");
gate_all!(label_break_value, "labels on blocks are unstable");
gate_all!(box_syntax, "box expression syntax is experimental; you can call `Box::new` instead");
-
// To avoid noise about type ascription in common syntax errors,
// only emit if it is the *only* error. (Also check it last.)
if parse_sess.span_diagnostic.err_count() == 0 {
/// + `__register_diagnostic`
/// + `__build_diagnostic_array`
(removed, rustc_diagnostic_macros, "1.38.0", None, None, None),
+ /// Allows using `#[on_unimplemented(..)]` on traits.
+ /// (Moved to `rustc_attrs`.)
+ (removed, on_unimplemented, "1.40.0", None, None, None),
// -------------------------------------------------------------------------
// feature-group-end: removed features
pub mod error_codes;
pub mod util {
+ crate mod classify;
+ pub mod comments;
pub mod lev_distance;
+ crate mod literal;
pub mod node_count;
pub mod parser;
pub mod map_in_place;
pub use syntax_pos::edition;
pub use syntax_pos::symbol;
pub mod sess;
+pub mod token;
pub mod tokenstream;
pub mod visit;
use crate::ast::*;
use crate::source_map::{Spanned, respan};
-use crate::parse::token::{self, Token};
+use crate::token::{self, Token};
use crate::ptr::P;
use crate::ThinVec;
use crate::tokenstream::*;
}
pub fn noop_visit_attribute<T: MutVisitor>(attr: &mut Attribute, vis: &mut T) {
- let Attribute { item: AttrItem { path, tokens }, id: _, style: _, is_sugared_doc: _, span }
- = attr;
- vis.visit_path(path);
- vis.visit_tts(tokens);
+ let Attribute { kind, id: _, style: _, span } = attr;
+ match kind {
+ AttrKind::Normal(AttrItem { path, tokens }) => {
+ vis.visit_path(path);
+ vis.visit_tts(tokens);
+ }
+ AttrKind::DocComment(_) => {}
+ }
vis.visit_span(span);
}
+++ /dev/null
-//! Routines the parser uses to classify AST nodes
-
-// Predicates on exprs and stmts that the pretty-printer and parser use
-
-use crate::ast;
-
-/// Does this expression require a semicolon to be treated
-/// as a statement? The negation of this: 'can this expression
-/// be used as a statement without a semicolon' -- is used
-/// as an early-bail-out in the parser so that, for instance,
-/// if true {...} else {...}
-/// |x| 5
-/// isn't parsed as (if true {...} else {...} | x) | 5
-pub fn expr_requires_semi_to_be_stmt(e: &ast::Expr) -> bool {
- match e.kind {
- ast::ExprKind::If(..) |
- ast::ExprKind::Match(..) |
- ast::ExprKind::Block(..) |
- ast::ExprKind::While(..) |
- ast::ExprKind::Loop(..) |
- ast::ExprKind::ForLoop(..) |
- ast::ExprKind::TryBlock(..) => false,
- _ => true,
- }
-}
+++ /dev/null
-pub use CommentStyle::*;
-
-use crate::ast;
-use crate::source_map::SourceMap;
-use crate::parse::lexer::is_block_doc_comment;
-use crate::parse::lexer::ParseSess;
-
-use syntax_pos::{BytePos, CharPos, Pos, FileName};
-
-use std::usize;
-
-#[cfg(test)]
-mod tests;
-
-#[derive(Clone, Copy, PartialEq, Debug)]
-pub enum CommentStyle {
- /// No code on either side of each line of the comment
- Isolated,
- /// Code exists to the left of the comment
- Trailing,
- /// Code before /* foo */ and after the comment
- Mixed,
- /// Just a manual blank line "\n\n", for layout
- BlankLine,
-}
-
-#[derive(Clone)]
-pub struct Comment {
- pub style: CommentStyle,
- pub lines: Vec<String>,
- pub pos: BytePos,
-}
-
-fn is_doc_comment(s: &str) -> bool {
- (s.starts_with("///") && super::is_doc_comment(s)) || s.starts_with("//!") ||
- (s.starts_with("/**") && is_block_doc_comment(s)) || s.starts_with("/*!")
-}
-
-pub fn doc_comment_style(comment: &str) -> ast::AttrStyle {
- assert!(is_doc_comment(comment));
- if comment.starts_with("//!") || comment.starts_with("/*!") {
- ast::AttrStyle::Inner
- } else {
- ast::AttrStyle::Outer
- }
-}
-
-pub fn strip_doc_comment_decoration(comment: &str) -> String {
- /// remove whitespace-only lines from the start/end of lines
- fn vertical_trim(lines: Vec<String>) -> Vec<String> {
- let mut i = 0;
- let mut j = lines.len();
- // first line of all-stars should be omitted
- if !lines.is_empty() && lines[0].chars().all(|c| c == '*') {
- i += 1;
- }
-
- while i < j && lines[i].trim().is_empty() {
- i += 1;
- }
- // like the first, a last line of all stars should be omitted
- if j > i &&
- lines[j - 1]
- .chars()
- .skip(1)
- .all(|c| c == '*') {
- j -= 1;
- }
-
- while j > i && lines[j - 1].trim().is_empty() {
- j -= 1;
- }
-
- lines[i..j].to_vec()
- }
-
- /// remove a "[ \t]*\*" block from each line, if possible
- fn horizontal_trim(lines: Vec<String>) -> Vec<String> {
- let mut i = usize::MAX;
- let mut can_trim = true;
- let mut first = true;
-
- for line in &lines {
- for (j, c) in line.chars().enumerate() {
- if j > i || !"* \t".contains(c) {
- can_trim = false;
- break;
- }
- if c == '*' {
- if first {
- i = j;
- first = false;
- } else if i != j {
- can_trim = false;
- }
- break;
- }
- }
- if i >= line.len() {
- can_trim = false;
- }
- if !can_trim {
- break;
- }
- }
-
- if can_trim {
- lines.iter()
- .map(|line| (&line[i + 1..line.len()]).to_string())
- .collect()
- } else {
- lines
- }
- }
-
- // one-line comments lose their prefix
- const ONELINERS: &[&str] = &["///!", "///", "//!", "//"];
-
- for prefix in ONELINERS {
- if comment.starts_with(*prefix) {
- return (&comment[prefix.len()..]).to_string();
- }
- }
-
- if comment.starts_with("/*") {
- let lines = comment[3..comment.len() - 2]
- .lines()
- .map(|s| s.to_string())
- .collect::<Vec<String>>();
-
- let lines = vertical_trim(lines);
- let lines = horizontal_trim(lines);
-
- return lines.join("\n");
- }
-
- panic!("not a doc-comment: {}", comment);
-}
-
-/// Returns `None` if the first `col` chars of `s` contain a non-whitespace char.
-/// Otherwise returns `Some(k)` where `k` is first char offset after that leading
-/// whitespace. Note that `k` may be outside bounds of `s`.
-fn all_whitespace(s: &str, col: CharPos) -> Option<usize> {
- let mut idx = 0;
- for (i, ch) in s.char_indices().take(col.to_usize()) {
- if !ch.is_whitespace() {
- return None;
- }
- idx = i + ch.len_utf8();
- }
- Some(idx)
-}
-
-fn trim_whitespace_prefix(s: &str, col: CharPos) -> &str {
- let len = s.len();
- match all_whitespace(&s, col) {
- Some(col) => if col < len { &s[col..] } else { "" },
- None => s,
- }
-}
-
-fn split_block_comment_into_lines(
- text: &str,
- col: CharPos,
-) -> Vec<String> {
- let mut res: Vec<String> = vec![];
- let mut lines = text.lines();
- // just push the first line
- res.extend(lines.next().map(|it| it.to_string()));
- // for other lines, strip common whitespace prefix
- for line in lines {
- res.push(trim_whitespace_prefix(line, col).to_string())
- }
- res
-}
-
-// it appears this function is called only from pprust... that's
-// probably not a good thing.
-crate fn gather_comments(sess: &ParseSess, path: FileName, src: String) -> Vec<Comment> {
- let cm = SourceMap::new(sess.source_map().path_mapping().clone());
- let source_file = cm.new_source_file(path, src);
- let text = (*source_file.src.as_ref().unwrap()).clone();
-
- let text: &str = text.as_str();
- let start_bpos = source_file.start_pos;
- let mut pos = 0;
- let mut comments: Vec<Comment> = Vec::new();
- let mut code_to_the_left = false;
-
- if let Some(shebang_len) = rustc_lexer::strip_shebang(text) {
- comments.push(Comment {
- style: Isolated,
- lines: vec![text[..shebang_len].to_string()],
- pos: start_bpos,
- });
- pos += shebang_len;
- }
-
- for token in rustc_lexer::tokenize(&text[pos..]) {
- let token_text = &text[pos..pos + token.len];
- match token.kind {
- rustc_lexer::TokenKind::Whitespace => {
- if let Some(mut idx) = token_text.find('\n') {
- code_to_the_left = false;
- while let Some(next_newline) = &token_text[idx + 1..].find('\n') {
- idx = idx + 1 + next_newline;
- comments.push(Comment {
- style: BlankLine,
- lines: vec![],
- pos: start_bpos + BytePos((pos + idx) as u32),
- });
- }
- }
- }
- rustc_lexer::TokenKind::BlockComment { terminated: _ } => {
- if !is_block_doc_comment(token_text) {
- let code_to_the_right = match text[pos + token.len..].chars().next() {
- Some('\r') | Some('\n') => false,
- _ => true,
- };
- let style = match (code_to_the_left, code_to_the_right) {
- (true, true) | (false, true) => Mixed,
- (false, false) => Isolated,
- (true, false) => Trailing,
- };
-
- // Count the number of chars since the start of the line by rescanning.
- let pos_in_file = start_bpos + BytePos(pos as u32);
- let line_begin_in_file = source_file.line_begin_pos(pos_in_file);
- let line_begin_pos = (line_begin_in_file - start_bpos).to_usize();
- let col = CharPos(text[line_begin_pos..pos].chars().count());
-
- let lines = split_block_comment_into_lines(token_text, col);
- comments.push(Comment { style, lines, pos: pos_in_file })
- }
- }
- rustc_lexer::TokenKind::LineComment => {
- if !is_doc_comment(token_text) {
- comments.push(Comment {
- style: if code_to_the_left { Trailing } else { Isolated },
- lines: vec![token_text.to_string()],
- pos: start_bpos + BytePos(pos as u32),
- })
- }
- }
- _ => {
- code_to_the_left = true;
- }
- }
- pos += token.len;
- }
-
- comments
-}
+++ /dev/null
-use super::*;
-
-#[test]
-fn test_block_doc_comment_1() {
- let comment = "/**\n * Test \n ** Test\n * Test\n*/";
- let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " Test \n* Test\n Test");
-}
-
-#[test]
-fn test_block_doc_comment_2() {
- let comment = "/**\n * Test\n * Test\n*/";
- let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " Test\n Test");
-}
-
-#[test]
-fn test_block_doc_comment_3() {
- let comment = "/**\n let a: *i32;\n *a = 5;\n*/";
- let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " let a: *i32;\n *a = 5;");
-}
-
-#[test]
-fn test_block_doc_comment_4() {
- let comment = "/*******************\n test\n *********************/";
- let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " test");
-}
-
-#[test]
-fn test_line_doc_comment() {
- let stripped = strip_doc_comment_decoration("/// test");
- assert_eq!(stripped, " test");
- let stripped = strip_doc_comment_decoration("///! test");
- assert_eq!(stripped, " test");
- let stripped = strip_doc_comment_decoration("// test");
- assert_eq!(stripped, " test");
- let stripped = strip_doc_comment_decoration("// test");
- assert_eq!(stripped, " test");
- let stripped = strip_doc_comment_decoration("///test");
- assert_eq!(stripped, "test");
- let stripped = strip_doc_comment_decoration("///!test");
- assert_eq!(stripped, "test");
- let stripped = strip_doc_comment_decoration("//test");
- assert_eq!(stripped, "test");
-}
-use crate::parse::token::{self, Token, TokenKind};
+use crate::token::{self, Token, TokenKind};
use crate::sess::ParseSess;
use crate::symbol::{sym, Symbol};
-use crate::parse::unescape_error_reporting::{emit_unescape_error, push_escaped_char};
+use crate::util::comments;
use errors::{FatalError, DiagnosticBuilder};
use syntax_pos::{BytePos, Pos, Span};
#[cfg(test)]
mod tests;
-pub mod comments;
mod tokentrees;
mod unicode_chars;
+mod unescape_error_reporting;
+use unescape_error_reporting::{emit_unescape_error, push_escaped_char};
#[derive(Clone, Debug)]
pub struct UnmatchedBrace {
rustc_lexer::TokenKind::LineComment => {
let string = self.str_from(start);
// comments with only more "/"s are not doc comments
- let tok = if is_doc_comment(string) {
+ let tok = if comments::is_line_doc_comment(string) {
self.forbid_bare_cr(start, string, "bare CR not allowed in doc-comment");
token::DocComment(Symbol::intern(string))
} else {
let string = self.str_from(start);
// block comments starting with "/**" or "/*!" are doc-comments
// but comments with only "*"s between two "/"s are not
- let is_doc_comment = is_block_doc_comment(string);
+ let is_doc_comment = comments::is_block_doc_comment(string);
if !terminated {
let msg = if is_doc_comment {
}
}
}
-
-fn is_doc_comment(s: &str) -> bool {
- let res = (s.starts_with("///") && *s.as_bytes().get(3).unwrap_or(&b' ') != b'/') ||
- s.starts_with("//!");
- debug!("is {:?} a doc comment? {}", s, res);
- res
-}
-
-fn is_block_doc_comment(s: &str) -> bool {
- // Prevent `/**/` from being parsed as a doc comment
- let res = ((s.starts_with("/**") && *s.as_bytes().get(3).unwrap_or(&b' ') != b'*') ||
- s.starts_with("/*!")) && s.len() >= 5;
- debug!("is {:?} a doc comment? {}", s, res);
- res
-}
use crate::symbol::Symbol;
use crate::source_map::{SourceMap, FilePathMapping};
-use crate::parse::token;
+use crate::token;
+use crate::util::comments::is_doc_comment;
use crate::with_default_globals;
use errors::{Handler, emitter::EmitterWriter};
use super::{StringReader, UnmatchedBrace};
use crate::print::pprust::token_to_string;
-use crate::parse::token::{self, Token};
-use crate::parse::PResult;
+use crate::token::{self, Token};
use crate::tokenstream::{DelimSpan, IsJoint::{self, *}, TokenStream, TokenTree, TreeAndJoint};
+use errors::PResult;
+
impl<'a> StringReader<'a> {
crate fn into_token_trees(self) -> (PResult<'a, TokenStream>, Vec<UnmatchedBrace>) {
let mut tt_reader = TokenTreesReader {
--- /dev/null
+//! Utilities for rendering escape sequence errors as diagnostics.
+
+use std::ops::Range;
+use std::iter::once;
+
+use rustc_lexer::unescape::{EscapeError, Mode};
+use syntax_pos::{Span, BytePos};
+
+use crate::errors::{Handler, Applicability};
+
+pub(crate) fn emit_unescape_error(
+ handler: &Handler,
+ // interior part of the literal, without quotes
+ lit: &str,
+ // full span of the literal, including quotes
+ span_with_quotes: Span,
+ mode: Mode,
+ // range of the error inside `lit`
+ range: Range<usize>,
+ error: EscapeError,
+) {
+ log::debug!("emit_unescape_error: {:?}, {:?}, {:?}, {:?}, {:?}",
+ lit, span_with_quotes, mode, range, error);
+ let span = {
+ let Range { start, end } = range;
+ let (start, end) = (start as u32, end as u32);
+ let lo = span_with_quotes.lo() + BytePos(start + 1);
+ let hi = lo + BytePos(end - start);
+ span_with_quotes
+ .with_lo(lo)
+ .with_hi(hi)
+ };
+ let last_char = || {
+ let c = lit[range.clone()].chars().rev().next().unwrap();
+ let span = span.with_lo(span.hi() - BytePos(c.len_utf8() as u32));
+ (c, span)
+ };
+ match error {
+ EscapeError::LoneSurrogateUnicodeEscape => {
+ handler.struct_span_err(span, "invalid unicode character escape")
+ .help("unicode escape must not be a surrogate")
+ .emit();
+ }
+ EscapeError::OutOfRangeUnicodeEscape => {
+ handler.struct_span_err(span, "invalid unicode character escape")
+ .help("unicode escape must be at most 10FFFF")
+ .emit();
+ }
+ EscapeError::MoreThanOneChar => {
+ let msg = if mode.is_bytes() {
+ "if you meant to write a byte string literal, use double quotes"
+ } else {
+ "if you meant to write a `str` literal, use double quotes"
+ };
+
+ handler
+ .struct_span_err(
+ span_with_quotes,
+ "character literal may only contain one codepoint",
+ )
+ .span_suggestion(
+ span_with_quotes,
+ msg,
+ format!("\"{}\"", lit),
+ Applicability::MachineApplicable,
+ ).emit()
+ }
+ EscapeError::EscapeOnlyChar => {
+ let (c, _span) = last_char();
+
+ let mut msg = if mode.is_bytes() {
+ "byte constant must be escaped: "
+ } else {
+ "character constant must be escaped: "
+ }.to_string();
+ push_escaped_char(&mut msg, c);
+
+ handler.span_err(span, msg.as_str())
+ }
+ EscapeError::BareCarriageReturn => {
+ let msg = if mode.in_double_quotes() {
+ "bare CR not allowed in string, use \\r instead"
+ } else {
+ "character constant must be escaped: \\r"
+ };
+ handler.span_err(span, msg);
+ }
+ EscapeError::BareCarriageReturnInRawString => {
+ assert!(mode.in_double_quotes());
+ let msg = "bare CR not allowed in raw string";
+ handler.span_err(span, msg);
+ }
+ EscapeError::InvalidEscape => {
+ let (c, span) = last_char();
+
+ let label = if mode.is_bytes() {
+ "unknown byte escape"
+ } else {
+ "unknown character escape"
+ };
+ let mut msg = label.to_string();
+ msg.push_str(": ");
+ push_escaped_char(&mut msg, c);
+
+ let mut diag = handler.struct_span_err(span, msg.as_str());
+ diag.span_label(span, label);
+ if (c == '{' || c == '}') && !mode.is_bytes() {
+ diag.help("if used in a formatting string, \
+ curly braces are escaped with `{{` and `}}`");
+ } else if c == '\r' {
+ diag.help("this is an isolated carriage return; \
+ consider checking your editor and version control settings");
+ }
+ diag.emit();
+ }
+ EscapeError::TooShortHexEscape => {
+ handler.span_err(span, "numeric character escape is too short")
+ }
+ EscapeError::InvalidCharInHexEscape | EscapeError::InvalidCharInUnicodeEscape => {
+ let (c, span) = last_char();
+
+ let mut msg = if error == EscapeError::InvalidCharInHexEscape {
+ "invalid character in numeric character escape: "
+ } else {
+ "invalid character in unicode escape: "
+ }.to_string();
+ push_escaped_char(&mut msg, c);
+
+ handler.span_err(span, msg.as_str())
+ }
+ EscapeError::NonAsciiCharInByte => {
+ assert!(mode.is_bytes());
+ let (_c, span) = last_char();
+ handler.span_err(span, "byte constant must be ASCII. \
+ Use a \\xHH escape for a non-ASCII byte")
+ }
+ EscapeError::NonAsciiCharInByteString => {
+ assert!(mode.is_bytes());
+ let (_c, span) = last_char();
+ handler.span_err(span, "raw byte string must be ASCII")
+ }
+ EscapeError::OutOfRangeHexEscape => {
+ handler.span_err(span, "this form of character escape may only be used \
+ with characters in the range [\\x00-\\x7f]")
+ }
+ EscapeError::LeadingUnderscoreUnicodeEscape => {
+ let (_c, span) = last_char();
+ handler.span_err(span, "invalid start of unicode escape")
+ }
+ EscapeError::OverlongUnicodeEscape => {
+ handler.span_err(span, "overlong unicode escape (must have at most 6 hex digits)")
+ }
+ EscapeError::UnclosedUnicodeEscape => {
+ handler.span_err(span, "unterminated unicode escape (needed a `}`)")
+ }
+ EscapeError::NoBraceInUnicodeEscape => {
+ let msg = "incorrect unicode escape sequence";
+ let mut diag = handler.struct_span_err(span, msg);
+
+ let mut suggestion = "\\u{".to_owned();
+ let mut suggestion_len = 0;
+ let (c, char_span) = last_char();
+ let chars = once(c).chain(lit[range.end..].chars());
+ for c in chars.take(6).take_while(|c| c.is_digit(16)) {
+ suggestion.push(c);
+ suggestion_len += c.len_utf8();
+ }
+
+ if suggestion_len > 0 {
+ suggestion.push('}');
+ let lo = char_span.lo();
+ let hi = lo + BytePos(suggestion_len as u32);
+ diag.span_suggestion(
+ span.with_lo(lo).with_hi(hi),
+ "format of unicode escape sequences uses braces",
+ suggestion,
+ Applicability::MaybeIncorrect,
+ );
+ } else {
+ diag.span_label(span, msg);
+ diag.help(
+ "format of unicode escape sequences is `\\u{...}`",
+ );
+ }
+
+ diag.emit();
+ }
+ EscapeError::UnicodeEscapeInByte => {
+ handler.span_err(span, "unicode escape sequences cannot be used \
+ as a byte or in a byte string")
+ }
+ EscapeError::EmptyUnicodeEscape => {
+ handler.span_err(span, "empty unicode escape (must have at least 1 hex digit)")
+ }
+ EscapeError::ZeroChars => {
+ handler.span_err(span, "empty character literal")
+ }
+ EscapeError::LoneSlash => {
+ handler.span_err(span, "invalid trailing slash in literal")
+ }
+ }
+}
+
+/// Pushes a character to a message string for error reporting.
+pub(crate) fn push_escaped_char(msg: &mut String, c: char) {
+ match c {
+ '\u{20}'..='\u{7e}' => {
+ // Don't escape \, ' or " for user-facing messages
+ msg.push(c);
+ }
+ _ => {
+ msg.extend(c.escape_default());
+ }
+ }
+}
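As a quick illustration of the escaping rule added above (printable ASCII kept verbatim, everything else rendered via `escape_default`), here is a standalone copy of the helper exercised outside the compiler; this is a sketch for demonstration, not rustc's own test suite:

```rust
// Standalone copy of the escaping rule used by `push_escaped_char`:
// printable ASCII (U+0020..=U+007E) is pushed as-is, anything else is
// rendered with `char::escape_default`.
fn push_escaped_char(msg: &mut String, c: char) {
    match c {
        '\u{20}'..='\u{7e}' => msg.push(c),
        _ => msg.extend(c.escape_default()),
    }
}

fn main() {
    let mut msg = String::new();
    push_escaped_char(&mut msg, 'a');  // printable: kept verbatim
    push_escaped_char(&mut msg, '\n'); // control char: becomes `\n`
    push_escaped_char(&mut msg, 'é');  // non-ASCII: becomes `\u{e9}`
    assert_eq!(msg, "a\\n\\u{e9}");
}
```

Note that `\`, `'`, and `"` fall inside the printable-ASCII range and are therefore not escaped, which is the intent for user-facing messages.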
use super::StringReader;
use errors::{Applicability, DiagnosticBuilder};
use syntax_pos::{BytePos, Pos, Span, symbol::kw};
-use crate::parse::token;
+use crate::token;
#[rustfmt::skip] // for line breaks
const UNICODE_ARRAY: &[(char, &str, char)] = &[
+++ /dev/null
-//! Code related to parsing literals.
-
-use crate::ast::{self, Lit, LitKind};
-use crate::parse::token::{self, Token};
-use crate::symbol::{kw, sym, Symbol};
-use crate::tokenstream::TokenTree;
-
-use log::debug;
-use rustc_data_structures::sync::Lrc;
-use syntax_pos::Span;
-use rustc_lexer::unescape::{unescape_char, unescape_byte};
-use rustc_lexer::unescape::{unescape_str, unescape_byte_str};
-use rustc_lexer::unescape::{unescape_raw_str, unescape_raw_byte_str};
-
-use std::ascii;
-
-crate enum LitError {
- NotLiteral,
- LexerError,
- InvalidSuffix,
- InvalidIntSuffix,
- InvalidFloatSuffix,
- NonDecimalFloat(u32),
- IntTooLarge,
-}
-
-impl LitKind {
- /// Converts a literal token into a semantic literal.
- fn from_lit_token(lit: token::Lit) -> Result<LitKind, LitError> {
- let token::Lit { kind, symbol, suffix } = lit;
- if suffix.is_some() && !kind.may_have_suffix() {
- return Err(LitError::InvalidSuffix);
- }
-
- Ok(match kind {
- token::Bool => {
- assert!(symbol.is_bool_lit());
- LitKind::Bool(symbol == kw::True)
- }
- token::Byte => return unescape_byte(&symbol.as_str())
- .map(LitKind::Byte).map_err(|_| LitError::LexerError),
- token::Char => return unescape_char(&symbol.as_str())
- .map(LitKind::Char).map_err(|_| LitError::LexerError),
-
- // There are some valid suffixes for integer and float literals,
- // so all the handling is done internally.
- token::Integer => return integer_lit(symbol, suffix),
- token::Float => return float_lit(symbol, suffix),
-
- token::Str => {
- // If there are no characters requiring special treatment we can
- // reuse the symbol from the token. Otherwise, we must generate a
- // new symbol because the string in the LitKind is different to the
- // string in the token.
- let s = symbol.as_str();
- let symbol = if s.contains(&['\\', '\r'][..]) {
- let mut buf = String::with_capacity(s.len());
- let mut error = Ok(());
- unescape_str(&s, &mut |_, unescaped_char| {
- match unescaped_char {
- Ok(c) => buf.push(c),
- Err(_) => error = Err(LitError::LexerError),
- }
- });
- error?;
- Symbol::intern(&buf)
- } else {
- symbol
- };
- LitKind::Str(symbol, ast::StrStyle::Cooked)
- }
- token::StrRaw(n) => {
- // Ditto.
- let s = symbol.as_str();
- let symbol = if s.contains('\r') {
- let mut buf = String::with_capacity(s.len());
- let mut error = Ok(());
- unescape_raw_str(&s, &mut |_, unescaped_char| {
- match unescaped_char {
- Ok(c) => buf.push(c),
- Err(_) => error = Err(LitError::LexerError),
- }
- });
- error?;
- buf.shrink_to_fit();
- Symbol::intern(&buf)
- } else {
- symbol
- };
- LitKind::Str(symbol, ast::StrStyle::Raw(n))
- }
- token::ByteStr => {
- let s = symbol.as_str();
- let mut buf = Vec::with_capacity(s.len());
- let mut error = Ok(());
- unescape_byte_str(&s, &mut |_, unescaped_byte| {
- match unescaped_byte {
- Ok(c) => buf.push(c),
- Err(_) => error = Err(LitError::LexerError),
- }
- });
- error?;
- buf.shrink_to_fit();
- LitKind::ByteStr(Lrc::new(buf))
- }
- token::ByteStrRaw(_) => {
- let s = symbol.as_str();
- let bytes = if s.contains('\r') {
- let mut buf = Vec::with_capacity(s.len());
- let mut error = Ok(());
- unescape_raw_byte_str(&s, &mut |_, unescaped_byte| {
- match unescaped_byte {
- Ok(c) => buf.push(c),
- Err(_) => error = Err(LitError::LexerError),
- }
- });
- error?;
- buf.shrink_to_fit();
- buf
- } else {
- symbol.to_string().into_bytes()
- };
-
- LitKind::ByteStr(Lrc::new(bytes))
- },
- token::Err => LitKind::Err(symbol),
- })
- }
-
- /// Attempts to recover a token from a semantic literal.
- /// This function is used when the original token doesn't exist (e.g. the literal is created
- /// by an AST-based macro) or unavailable (e.g. from HIR pretty-printing).
- pub fn to_lit_token(&self) -> token::Lit {
- let (kind, symbol, suffix) = match *self {
- LitKind::Str(symbol, ast::StrStyle::Cooked) => {
- // Don't re-intern unless the escaped string is different.
- let s = symbol.as_str();
- let escaped = s.escape_default().to_string();
- let symbol = if s == escaped { symbol } else { Symbol::intern(&escaped) };
- (token::Str, symbol, None)
- }
- LitKind::Str(symbol, ast::StrStyle::Raw(n)) => {
- (token::StrRaw(n), symbol, None)
- }
- LitKind::ByteStr(ref bytes) => {
- let string = bytes.iter().cloned().flat_map(ascii::escape_default)
- .map(Into::<char>::into).collect::<String>();
- (token::ByteStr, Symbol::intern(&string), None)
- }
- LitKind::Byte(byte) => {
- let string: String = ascii::escape_default(byte).map(Into::<char>::into).collect();
- (token::Byte, Symbol::intern(&string), None)
- }
- LitKind::Char(ch) => {
- let string: String = ch.escape_default().map(Into::<char>::into).collect();
- (token::Char, Symbol::intern(&string), None)
- }
- LitKind::Int(n, ty) => {
- let suffix = match ty {
- ast::LitIntType::Unsigned(ty) => Some(ty.to_symbol()),
- ast::LitIntType::Signed(ty) => Some(ty.to_symbol()),
- ast::LitIntType::Unsuffixed => None,
- };
- (token::Integer, sym::integer(n), suffix)
- }
- LitKind::Float(symbol, ty) => {
- (token::Float, symbol, Some(ty.to_symbol()))
- }
- LitKind::FloatUnsuffixed(symbol) => {
- (token::Float, symbol, None)
- }
- LitKind::Bool(value) => {
- let symbol = if value { kw::True } else { kw::False };
- (token::Bool, symbol, None)
- }
- LitKind::Err(symbol) => {
- (token::Err, symbol, None)
- }
- };
-
- token::Lit::new(kind, symbol, suffix)
- }
-}
-
-impl Lit {
- /// Converts a literal token into an AST literal.
- crate fn from_lit_token(token: token::Lit, span: Span) -> Result<Lit, LitError> {
- Ok(Lit { token, kind: LitKind::from_lit_token(token)?, span })
- }
-
- /// Converts an arbitrary token into an AST literal.
- crate fn from_token(token: &Token) -> Result<Lit, LitError> {
- let lit = match token.kind {
- token::Ident(name, false) if name.is_bool_lit() =>
- token::Lit::new(token::Bool, name, None),
- token::Literal(lit) =>
- lit,
- token::Interpolated(ref nt) => {
- if let token::NtExpr(expr) | token::NtLiteral(expr) = &**nt {
- if let ast::ExprKind::Lit(lit) = &expr.kind {
- return Ok(lit.clone());
- }
- }
- return Err(LitError::NotLiteral);
- }
- _ => return Err(LitError::NotLiteral)
- };
-
- Lit::from_lit_token(lit, token.span)
- }
-
- /// Attempts to recover an AST literal from a semantic literal.
- /// This function is used when the original token doesn't exist (e.g. the literal is created
- /// by an AST-based macro) or unavailable (e.g. from HIR pretty-printing).
- pub fn from_lit_kind(kind: LitKind, span: Span) -> Lit {
- Lit { token: kind.to_lit_token(), kind, span }
- }
-
- /// Losslessly convert an AST literal into a token tree.
- crate fn token_tree(&self) -> TokenTree {
- let token = match self.token.kind {
- token::Bool => token::Ident(self.token.symbol, false),
- _ => token::Literal(self.token),
- };
- TokenTree::token(token, self.span)
- }
-}
-
-fn strip_underscores(symbol: Symbol) -> Symbol {
- // Do not allocate a new string unless necessary.
- let s = symbol.as_str();
- if s.contains('_') {
- let mut s = s.to_string();
- s.retain(|c| c != '_');
- return Symbol::intern(&s);
- }
- symbol
-}
-
-fn filtered_float_lit(symbol: Symbol, suffix: Option<Symbol>, base: u32)
- -> Result<LitKind, LitError> {
- debug!("filtered_float_lit: {:?}, {:?}, {:?}", symbol, suffix, base);
- if base != 10 {
- return Err(LitError::NonDecimalFloat(base));
- }
- Ok(match suffix {
- Some(suf) => match suf {
- sym::f32 => LitKind::Float(symbol, ast::FloatTy::F32),
- sym::f64 => LitKind::Float(symbol, ast::FloatTy::F64),
- _ => return Err(LitError::InvalidFloatSuffix),
- }
- None => LitKind::FloatUnsuffixed(symbol)
- })
-}
-
-fn float_lit(symbol: Symbol, suffix: Option<Symbol>) -> Result<LitKind, LitError> {
- debug!("float_lit: {:?}, {:?}", symbol, suffix);
- filtered_float_lit(strip_underscores(symbol), suffix, 10)
-}
-
-fn integer_lit(symbol: Symbol, suffix: Option<Symbol>) -> Result<LitKind, LitError> {
- debug!("integer_lit: {:?}, {:?}", symbol, suffix);
- let symbol = strip_underscores(symbol);
- let s = symbol.as_str();
-
- let base = match s.as_bytes() {
- [b'0', b'x', ..] => 16,
- [b'0', b'o', ..] => 8,
- [b'0', b'b', ..] => 2,
- _ => 10,
- };
-
- let ty = match suffix {
- Some(suf) => match suf {
- sym::isize => ast::LitIntType::Signed(ast::IntTy::Isize),
- sym::i8 => ast::LitIntType::Signed(ast::IntTy::I8),
- sym::i16 => ast::LitIntType::Signed(ast::IntTy::I16),
- sym::i32 => ast::LitIntType::Signed(ast::IntTy::I32),
- sym::i64 => ast::LitIntType::Signed(ast::IntTy::I64),
- sym::i128 => ast::LitIntType::Signed(ast::IntTy::I128),
- sym::usize => ast::LitIntType::Unsigned(ast::UintTy::Usize),
- sym::u8 => ast::LitIntType::Unsigned(ast::UintTy::U8),
- sym::u16 => ast::LitIntType::Unsigned(ast::UintTy::U16),
- sym::u32 => ast::LitIntType::Unsigned(ast::UintTy::U32),
- sym::u64 => ast::LitIntType::Unsigned(ast::UintTy::U64),
- sym::u128 => ast::LitIntType::Unsigned(ast::UintTy::U128),
- // `1f64` and `2f32` etc. are valid float literals, and
- // `fxxx` looks more like an invalid float literal than an invalid integer literal.
- _ if suf.as_str().starts_with('f') => return filtered_float_lit(symbol, suffix, base),
- _ => return Err(LitError::InvalidIntSuffix),
- }
- _ => ast::LitIntType::Unsuffixed
- };
-
- let s = &s[if base != 10 { 2 } else { 0 } ..];
- u128::from_str_radix(s, base).map(|i| LitKind::Int(i, ty)).map_err(|_| {
- // Small bases are lexed as if they were base 10, e.g., the string
- // might be `0b10201`. This will cause the conversion above to fail,
- // but these kinds of errors are already reported by the lexer.
- let from_lexer =
- base < 10 && s.chars().any(|c| c.to_digit(10).map_or(false, |d| d >= base));
- if from_lexer { LitError::LexerError } else { LitError::IntTooLarge }
- })
-}
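The deleted `integer_lit` above boils down to three steps: strip underscore separators, pick a radix from the `0x`/`0o`/`0b` prefix, then hand the remaining digits to `u128::from_str_radix`. A minimal standalone sketch of that flow follows; the `parse_int_literal` name is made up for illustration, and unlike real rustc it neither classifies suffixes nor distinguishes lexer errors from overflow:

```rust
// Sketch of the base detection and underscore stripping performed by
// rustc's `integer_lit`: returns None on any parse failure instead of
// distinguishing lexer errors from out-of-range values.
fn parse_int_literal(lit: &str) -> Option<u128> {
    // Step 1: drop `_` separators without touching anything else.
    let s: String = lit.chars().filter(|&c| c != '_').collect();
    // Step 2: detect the radix from a two-byte prefix.
    let (base, digits) = match s.as_bytes() {
        [b'0', b'x', ..] => (16, &s[2..]),
        [b'0', b'o', ..] => (8, &s[2..]),
        [b'0', b'b', ..] => (2, &s[2..]),
        _ => (10, &s[..]),
    };
    // Step 3: parse what's left in the detected base.
    u128::from_str_radix(digits, base).ok()
}

fn main() {
    assert_eq!(parse_int_literal("1_000"), Some(1_000));
    assert_eq!(parse_int_literal("0xff"), Some(255));
    assert_eq!(parse_int_literal("0b1010"), Some(10));
    assert_eq!(parse_int_literal("0b102"), None); // digit out of range for base 2
}
```

The last case mirrors the `from_lexer` comment above: a `2` in a binary literal fails here, but the lexer has already reported it.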
use crate::ast;
use crate::parse::parser::{Parser, emit_unclosed_delims, make_unclosed_delims_error};
-use crate::parse::token::Nonterminal;
+use crate::token::{self, Nonterminal};
use crate::tokenstream::{self, TokenStream, TokenTree};
use crate::print::pprust;
use crate::sess::ParseSess;
-use errors::{FatalError, Level, Diagnostic, DiagnosticBuilder};
-#[cfg(target_arch = "x86_64")]
-use rustc_data_structures::static_assert_size;
+use errors::{PResult, FatalError, Level, Diagnostic};
use rustc_data_structures::sync::Lrc;
use syntax_pos::{Span, SourceFile, FileName};
#[macro_use]
pub mod parser;
pub mod lexer;
-pub mod token;
-
-crate mod classify;
-crate mod literal;
-crate mod unescape_error_reporting;
-
-pub type PResult<'a, T> = Result<T, DiagnosticBuilder<'a>>;
-
-// `PResult` is used a lot. Make sure it doesn't unintentionally get bigger.
-// (See also the comment on `DiagnosticBuilderInner`.)
-#[cfg(target_arch = "x86_64")]
-static_assert_size!(PResult<'_, bool>, 16);
#[derive(Clone)]
pub struct Directory<'a> {
) -> PResult<'a, T> {
let mut parser = Parser::new(
sess,
- attr.tokens.clone(),
+ attr.get_normal_item().tokens.clone(),
None,
false,
false,
let source = pprust::attribute_to_string(attr);
let macro_filename = FileName::macro_expansion_source_code(&source);
- if attr.is_sugared_doc {
- let stream = parse_stream_from_source_str(macro_filename, source, sess, Some(span));
- builder.push(stream);
- continue
- }
+
+ let item = match attr.kind {
+ ast::AttrKind::Normal(ref item) => item,
+ ast::AttrKind::DocComment(_) => {
+ let stream = parse_stream_from_source_str(macro_filename, source, sess, Some(span));
+ builder.push(stream);
+ continue
+ }
+ };
// synthesize # [ $path $tokens ] manually here
let mut brackets = tokenstream::TokenStreamBuilder::new();
// For simple paths, push the identifier directly
- if attr.path.segments.len() == 1 && attr.path.segments[0].args.is_none() {
- let ident = attr.path.segments[0].ident;
+ if item.path.segments.len() == 1 && item.path.segments[0].args.is_none() {
+ let ident = item.path.segments[0].ident;
let token = token::Ident(ident.name, ident.as_str().starts_with("r#"));
brackets.push(tokenstream::TokenTree::token(token, ident.span));
brackets.push(stream);
}
- brackets.push(attr.tokens.clone());
+ brackets.push(item.tokens.clone());
// The span we list here for `#` and for `[ ... ]` are both wrong in
// that it encompasses more than each token, but it hopefully is "good
+++ /dev/null
-pub mod attr;
-mod expr;
-mod pat;
-mod item;
-mod module;
-mod ty;
-mod path;
-pub use path::PathStyle;
-mod stmt;
-mod generics;
-mod diagnostics;
-use diagnostics::Error;
-
-use crate::ast::{
- self, DUMMY_NODE_ID, AttrStyle, Attribute, CrateSugar, Ident,
- IsAsync, MacDelimiter, Mutability, StrStyle, Visibility, VisibilityKind, Unsafety,
-};
-use crate::parse::{PResult, Directory, DirectoryOwnership};
-use crate::parse::lexer::UnmatchedBrace;
-use crate::parse::lexer::comments::{doc_comment_style, strip_doc_comment_decoration};
-use crate::parse::token::{self, Token, TokenKind, DelimToken};
-use crate::print::pprust;
-use crate::ptr::P;
-use crate::sess::ParseSess;
-use crate::source_map::respan;
-use crate::symbol::{kw, sym, Symbol};
-use crate::tokenstream::{self, DelimSpan, TokenTree, TokenStream, TreeAndJoint};
-use crate::ThinVec;
-
-use errors::{Applicability, DiagnosticBuilder, DiagnosticId, FatalError};
-use rustc_target::spec::abi::{self, Abi};
-use syntax_pos::{Span, BytePos, DUMMY_SP, FileName};
-use log::debug;
-
-use std::borrow::Cow;
-use std::{cmp, mem, slice};
-use std::path::PathBuf;
-
-bitflags::bitflags! {
- struct Restrictions: u8 {
- const STMT_EXPR = 1 << 0;
- const NO_STRUCT_LITERAL = 1 << 1;
- }
-}
-
-#[derive(Clone, Copy, PartialEq, Debug)]
-enum SemiColonMode {
- Break,
- Ignore,
- Comma,
-}
-
-#[derive(Clone, Copy, PartialEq, Debug)]
-enum BlockMode {
- Break,
- Ignore,
-}
-
-/// Like `maybe_whole_expr`, but for things other than expressions.
-#[macro_export]
-macro_rules! maybe_whole {
- ($p:expr, $constructor:ident, |$x:ident| $e:expr) => {
- if let token::Interpolated(nt) = &$p.token.kind {
- if let token::$constructor(x) = &**nt {
- let $x = x.clone();
- $p.bump();
- return Ok($e);
- }
- }
- };
-}
-
-/// If the next tokens are ill-formed `$ty::` recover them as `<$ty>::`.
-#[macro_export]
-macro_rules! maybe_recover_from_interpolated_ty_qpath {
- ($self: expr, $allow_qpath_recovery: expr) => {
- if $allow_qpath_recovery && $self.look_ahead(1, |t| t == &token::ModSep) {
- if let token::Interpolated(nt) = &$self.token.kind {
- if let token::NtTy(ty) = &**nt {
- let ty = ty.clone();
- $self.bump();
- return $self.maybe_recover_from_bad_qpath_stage_2($self.prev_span, ty);
- }
- }
- }
- }
-}
-
-#[derive(Debug, Clone, Copy, PartialEq)]
-enum PrevTokenKind {
- DocComment,
- Comma,
- Plus,
- Interpolated,
- Eof,
- Ident,
- BitOr,
- Other,
-}
-
-// NOTE: `Ident`s are handled by `common.rs`.
-
-#[derive(Clone)]
-pub struct Parser<'a> {
- pub sess: &'a ParseSess,
- /// The current normalized token.
- /// "Normalized" means that some interpolated tokens
- /// (`$i: ident` and `$l: lifetime` meta-variables) are replaced
- /// with non-interpolated identifier and lifetime tokens they refer to.
- /// Perhaps the normalized / non-normalized setup can be simplified somehow.
- pub token: Token,
- /// The span of the current non-normalized token.
- meta_var_span: Option<Span>,
- /// The span of the previous non-normalized token.
- pub prev_span: Span,
- /// The kind of the previous normalized token (in simplified form).
- prev_token_kind: PrevTokenKind,
- restrictions: Restrictions,
- /// Used to determine the path to externally loaded source files.
- pub(super) directory: Directory<'a>,
- /// `true` to parse sub-modules in other files.
- pub(super) recurse_into_file_modules: bool,
- /// Name of the root module this parser originated from. If `None`, then the
- /// name is not known. This does not change while the parser is descending
- /// into modules, and sub-parsers have new values for this name.
- pub root_module_name: Option<String>,
- expected_tokens: Vec<TokenType>,
- token_cursor: TokenCursor,
- desugar_doc_comments: bool,
- /// `true` if we should configure out-of-line modules as we parse.
- cfg_mods: bool,
- /// This field is used to keep track of how many left angle brackets we have seen. This is
- /// required in order to detect extra leading left angle brackets (`<` characters) and error
- /// appropriately.
- ///
- /// See the comments in the `parse_path_segment` function for more details.
- unmatched_angle_bracket_count: u32,
- max_angle_bracket_count: u32,
- /// A list of all unclosed delimiters found by the lexer. If an entry is used for error recovery
- /// it gets removed from here. Every entry left at the end gets emitted as an independent
- /// error.
- pub(super) unclosed_delims: Vec<UnmatchedBrace>,
- last_unexpected_token_span: Option<Span>,
- pub last_type_ascription: Option<(Span, bool /* likely path typo */)>,
- /// If present, this `Parser` is not parsing Rust code but rather a macro call.
- subparser_name: Option<&'static str>,
-}
-
-impl<'a> Drop for Parser<'a> {
- fn drop(&mut self) {
- emit_unclosed_delims(&mut self.unclosed_delims, &self.sess);
- }
-}
-
-#[derive(Clone)]
-struct TokenCursor {
- frame: TokenCursorFrame,
- stack: Vec<TokenCursorFrame>,
-}
-
-#[derive(Clone)]
-struct TokenCursorFrame {
- delim: token::DelimToken,
- span: DelimSpan,
- open_delim: bool,
- tree_cursor: tokenstream::Cursor,
- close_delim: bool,
- last_token: LastToken,
-}
-
-/// This is used in `TokenCursorFrame` above to track tokens that are consumed
-/// by the parser, and then that's transitively used to record the tokens that
-/// each parse AST item is created with.
-///
-/// Right now this has two states, either collecting tokens or not collecting
- /// tokens. If we're collecting tokens we just save everything off into a local
- /// `Vec`. Eventually, though, this should save tokens from the original
- /// token stream and just use slicing of token streams to avoid creating a
- /// whole new vector.
-///
-/// The second state is where we're passively not recording tokens, but the last
-/// token is still tracked for when we want to start recording tokens. This
-/// "last token" means that when we start recording tokens we'll want to ensure
-/// that this, the first token, is included in the output.
-///
-/// You can find some more example usage of this in the `collect_tokens` method
-/// on the parser.
-#[derive(Clone)]
-enum LastToken {
- Collecting(Vec<TreeAndJoint>),
- Was(Option<TreeAndJoint>),
-}
-
-impl TokenCursorFrame {
- fn new(span: DelimSpan, delim: DelimToken, tts: &TokenStream) -> Self {
- TokenCursorFrame {
- delim,
- span,
- open_delim: delim == token::NoDelim,
- tree_cursor: tts.clone().into_trees(),
- close_delim: delim == token::NoDelim,
- last_token: LastToken::Was(None),
- }
- }
-}
-
-impl TokenCursor {
- fn next(&mut self) -> Token {
- loop {
- let tree = if !self.frame.open_delim {
- self.frame.open_delim = true;
- TokenTree::open_tt(self.frame.span, self.frame.delim)
- } else if let Some(tree) = self.frame.tree_cursor.next() {
- tree
- } else if !self.frame.close_delim {
- self.frame.close_delim = true;
- TokenTree::close_tt(self.frame.span, self.frame.delim)
- } else if let Some(frame) = self.stack.pop() {
- self.frame = frame;
- continue
- } else {
- return Token::new(token::Eof, DUMMY_SP);
- };
-
- match self.frame.last_token {
- LastToken::Collecting(ref mut v) => v.push(tree.clone().into()),
- LastToken::Was(ref mut t) => *t = Some(tree.clone().into()),
- }
-
- match tree {
- TokenTree::Token(token) => return token,
- TokenTree::Delimited(sp, delim, tts) => {
- let frame = TokenCursorFrame::new(sp, delim, &tts);
- self.stack.push(mem::replace(&mut self.frame, frame));
- }
- }
- }
- }
-
- fn next_desugared(&mut self) -> Token {
- let (name, sp) = match self.next() {
- Token { kind: token::DocComment(name), span } => (name, span),
- tok => return tok,
- };
-
- let stripped = strip_doc_comment_decoration(&name.as_str());
-
- // Searches for the occurrences of `"#*` and returns the minimum number of `#`s
- // required to wrap the text.
- let mut num_of_hashes = 0;
- let mut count = 0;
- for ch in stripped.chars() {
- count = match ch {
- '"' => 1,
- '#' if count > 0 => count + 1,
- _ => 0,
- };
- num_of_hashes = cmp::max(num_of_hashes, count);
- }
-
- let delim_span = DelimSpan::from_single(sp);
- let body = TokenTree::Delimited(
- delim_span,
- token::Bracket,
- [
- TokenTree::token(token::Ident(sym::doc, false), sp),
- TokenTree::token(token::Eq, sp),
- TokenTree::token(TokenKind::lit(
- token::StrRaw(num_of_hashes), Symbol::intern(&stripped), None
- ), sp),
- ]
- .iter().cloned().collect::<TokenStream>().into(),
- );
-
- self.stack.push(mem::replace(&mut self.frame, TokenCursorFrame::new(
- delim_span,
- token::NoDelim,
- &if doc_comment_style(&name.as_str()) == AttrStyle::Inner {
- [TokenTree::token(token::Pound, sp), TokenTree::token(token::Not, sp), body]
- .iter().cloned().collect::<TokenStream>()
- } else {
- [TokenTree::token(token::Pound, sp), body]
- .iter().cloned().collect::<TokenStream>()
- },
- )));
-
- self.next()
- }
-}
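The hash-counting loop in `next_desugared` above decides how many `#`s a raw string literal needs to safely wrap a doc comment's text: it tracks the longest run of `#`s immediately following a `"`. The same scan in isolation looks like this (the `required_hashes` name is hypothetical, used here only for illustration):

```rust
// Computes the minimum number of `#`s needed so that a raw string
// literal wrapping `text` cannot be terminated early by a `"##...`
// sequence occurring inside `text`.
fn required_hashes(text: &str) -> usize {
    let mut num_of_hashes = 0;
    let mut count = 0;
    for ch in text.chars() {
        count = match ch {
            '"' => 1,                      // a quote alone needs one `#`
            '#' if count > 0 => count + 1, // extend a run started by `"`
            _ => 0,                        // anything else resets the run
        };
        num_of_hashes = num_of_hashes.max(count);
    }
    num_of_hashes
}

fn main() {
    assert_eq!(required_hashes("plain text"), 0);
    assert_eq!(required_hashes("has \"quote\" inside"), 1);
    assert_eq!(required_hashes("ends with \"## mid"), 3);
}
```

The result feeds `token::StrRaw(num_of_hashes)` above, which re-lexes the stripped doc comment body as a raw string token.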
-
-#[derive(Clone, PartialEq)]
-enum TokenType {
- Token(TokenKind),
- Keyword(Symbol),
- Operator,
- Lifetime,
- Ident,
- Path,
- Type,
- Const,
-}
-
-impl TokenType {
- fn to_string(&self) -> String {
- match *self {
- TokenType::Token(ref t) => format!("`{}`", pprust::token_kind_to_string(t)),
- TokenType::Keyword(kw) => format!("`{}`", kw),
- TokenType::Operator => "an operator".to_string(),
- TokenType::Lifetime => "lifetime".to_string(),
- TokenType::Ident => "identifier".to_string(),
- TokenType::Path => "path".to_string(),
- TokenType::Type => "type".to_string(),
- TokenType::Const => "const".to_string(),
- }
- }
-}
-
-#[derive(Copy, Clone, Debug)]
-enum TokenExpectType {
- Expect,
- NoExpect,
-}
-
-/// A sequence separator.
-struct SeqSep {
- /// The separator token.
- sep: Option<TokenKind>,
- /// `true` if a trailing separator is allowed.
- trailing_sep_allowed: bool,
-}
-
-impl SeqSep {
- fn trailing_allowed(t: TokenKind) -> SeqSep {
- SeqSep {
- sep: Some(t),
- trailing_sep_allowed: true,
- }
- }
-
- fn none() -> SeqSep {
- SeqSep {
- sep: None,
- trailing_sep_allowed: false,
- }
- }
-}
-
-impl<'a> Parser<'a> {
- pub fn new(
- sess: &'a ParseSess,
- tokens: TokenStream,
- directory: Option<Directory<'a>>,
- recurse_into_file_modules: bool,
- desugar_doc_comments: bool,
- subparser_name: Option<&'static str>,
- ) -> Self {
- let mut parser = Parser {
- sess,
- token: Token::dummy(),
- prev_span: DUMMY_SP,
- meta_var_span: None,
- prev_token_kind: PrevTokenKind::Other,
- restrictions: Restrictions::empty(),
- recurse_into_file_modules,
- directory: Directory {
- path: Cow::from(PathBuf::new()),
- ownership: DirectoryOwnership::Owned { relative: None }
- },
- root_module_name: None,
- expected_tokens: Vec::new(),
- token_cursor: TokenCursor {
- frame: TokenCursorFrame::new(
- DelimSpan::dummy(),
- token::NoDelim,
- &tokens.into(),
- ),
- stack: Vec::new(),
- },
- desugar_doc_comments,
- cfg_mods: true,
- unmatched_angle_bracket_count: 0,
- max_angle_bracket_count: 0,
- unclosed_delims: Vec::new(),
- last_unexpected_token_span: None,
- last_type_ascription: None,
- subparser_name,
- };
-
- parser.token = parser.next_tok();
-
- if let Some(directory) = directory {
- parser.directory = directory;
- } else if !parser.token.span.is_dummy() {
- if let Some(FileName::Real(path)) =
- &sess.source_map().lookup_char_pos(parser.token.span.lo()).file.unmapped_path {
- if let Some(directory_path) = path.parent() {
- parser.directory.path = Cow::from(directory_path.to_path_buf());
- }
- }
- }
-
- parser.process_potential_macro_variable();
- parser
- }
-
- fn next_tok(&mut self) -> Token {
- let mut next = if self.desugar_doc_comments {
- self.token_cursor.next_desugared()
- } else {
- self.token_cursor.next()
- };
- if next.span.is_dummy() {
- // Tweak the location for better diagnostics, but keep syntactic context intact.
- next.span = self.prev_span.with_ctxt(next.span.ctxt());
- }
- next
- }
-
- /// Converts the current token to a string using `self`'s reader.
- pub fn this_token_to_string(&self) -> String {
- pprust::token_to_string(&self.token)
- }
-
- fn token_descr(&self) -> Option<&'static str> {
- Some(match &self.token.kind {
- _ if self.token.is_special_ident() => "reserved identifier",
- _ if self.token.is_used_keyword() => "keyword",
- _ if self.token.is_unused_keyword() => "reserved keyword",
- token::DocComment(..) => "doc comment",
- _ => return None,
- })
- }
-
- pub(super) fn this_token_descr(&self) -> String {
- if let Some(prefix) = self.token_descr() {
- format!("{} `{}`", prefix, self.this_token_to_string())
- } else {
- format!("`{}`", self.this_token_to_string())
- }
- }
-
- crate fn unexpected<T>(&mut self) -> PResult<'a, T> {
- match self.expect_one_of(&[], &[]) {
- Err(e) => Err(e),
- Ok(_) => unreachable!(),
- }
- }
-
- /// Expects and consumes the token `t`. Signals an error if the next token is not `t`.
- pub fn expect(&mut self, t: &TokenKind) -> PResult<'a, bool /* recovered */> {
- if self.expected_tokens.is_empty() {
- if self.token == *t {
- self.bump();
- Ok(false)
- } else {
- self.unexpected_try_recover(t)
- }
- } else {
- self.expect_one_of(slice::from_ref(t), &[])
- }
- }
-
- /// Expects the next token to be an edible or inedible token. If edible,
- /// consumes it; if inedible, returns without consuming
- /// anything. Signals a fatal error if the next token is unexpected.
- pub fn expect_one_of(
- &mut self,
- edible: &[TokenKind],
- inedible: &[TokenKind],
- ) -> PResult<'a, bool /* recovered */> {
- if edible.contains(&self.token.kind) {
- self.bump();
- Ok(false)
- } else if inedible.contains(&self.token.kind) {
- // leave it in the input
- Ok(false)
- } else if self.last_unexpected_token_span == Some(self.token.span) {
- FatalError.raise();
- } else {
- self.expected_one_of_not_found(edible, inedible)
- }
- }
-
- fn parse_ident(&mut self) -> PResult<'a, ast::Ident> {
- self.parse_ident_common(true)
- }
-
- fn parse_ident_common(&mut self, recover: bool) -> PResult<'a, ast::Ident> {
- match self.token.kind {
- token::Ident(name, _) => {
- if self.token.is_reserved_ident() {
- let mut err = self.expected_ident_found();
- if recover {
- err.emit();
- } else {
- return Err(err);
- }
- }
- let span = self.token.span;
- self.bump();
- Ok(Ident::new(name, span))
- }
- _ => {
- Err(if self.prev_token_kind == PrevTokenKind::DocComment {
- self.span_fatal_err(self.prev_span, Error::UselessDocComment)
- } else {
- self.expected_ident_found()
- })
- }
- }
- }
-
- /// Checks if the next token is `tok`, and returns `true` if so.
- ///
- /// This method will automatically add `tok` to `expected_tokens` if `tok` is not
- /// encountered.
- fn check(&mut self, tok: &TokenKind) -> bool {
- let is_present = self.token == *tok;
- if !is_present { self.expected_tokens.push(TokenType::Token(tok.clone())); }
- is_present
- }
-
- /// Consumes a token `tok` if it exists. Returns whether the given token was present.
- pub fn eat(&mut self, tok: &TokenKind) -> bool {
- let is_present = self.check(tok);
- if is_present { self.bump() }
- is_present
- }
-
- /// If the next token is the given keyword, returns `true` without eating it.
- /// An expectation is also added for diagnostics purposes.
- fn check_keyword(&mut self, kw: Symbol) -> bool {
- self.expected_tokens.push(TokenType::Keyword(kw));
- self.token.is_keyword(kw)
- }
-
- /// If the next token is the given keyword, eats it and returns `true`.
- /// Otherwise, returns `false`. An expectation is also added for diagnostics purposes.
- fn eat_keyword(&mut self, kw: Symbol) -> bool {
- if self.check_keyword(kw) {
- self.bump();
- true
- } else {
- false
- }
- }
-
- fn eat_keyword_noexpect(&mut self, kw: Symbol) -> bool {
- if self.token.is_keyword(kw) {
- self.bump();
- true
- } else {
- false
- }
- }
-
- /// If the given word is not a keyword, signals an error.
- /// If the next token is not the given word, signals an error.
- /// Otherwise, eats it.
- fn expect_keyword(&mut self, kw: Symbol) -> PResult<'a, ()> {
- if !self.eat_keyword(kw) {
- self.unexpected()
- } else {
- Ok(())
- }
- }
-
- fn check_or_expected(&mut self, ok: bool, typ: TokenType) -> bool {
- if ok {
- true
- } else {
- self.expected_tokens.push(typ);
- false
- }
- }
-
- fn check_ident(&mut self) -> bool {
- self.check_or_expected(self.token.is_ident(), TokenType::Ident)
- }
-
- fn check_path(&mut self) -> bool {
- self.check_or_expected(self.token.is_path_start(), TokenType::Path)
- }
-
- fn check_type(&mut self) -> bool {
- self.check_or_expected(self.token.can_begin_type(), TokenType::Type)
- }
-
- fn check_const_arg(&mut self) -> bool {
- self.check_or_expected(self.token.can_begin_const_arg(), TokenType::Const)
- }
-
- /// Checks to see if the next token is either `+` or `+=`.
- /// Otherwise returns `false`.
- fn check_plus(&mut self) -> bool {
- self.check_or_expected(
- self.token.is_like_plus(),
- TokenType::Token(token::BinOp(token::Plus)),
- )
- }
-
- /// Expects and consumes a `+`. If `+=` is seen, replaces it with a `=`
- /// and continues. If a `+` is not seen, returns `false`.
- ///
- /// This is used when token-splitting `+=` into `+`.
- /// See issue #47856 for an example of when this may occur.
- fn eat_plus(&mut self) -> bool {
- self.expected_tokens.push(TokenType::Token(token::BinOp(token::Plus)));
- match self.token.kind {
- token::BinOp(token::Plus) => {
- self.bump();
- true
- }
- token::BinOpEq(token::Plus) => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- self.bump_with(token::Eq, span);
- true
- }
- _ => false,
- }
- }
-
- /// Expects and consumes an `&`. If `&&` is seen, replaces it with a single
- /// `&` and continues. If an `&` is not seen, signals an error.
- fn expect_and(&mut self) -> PResult<'a, ()> {
- self.expected_tokens.push(TokenType::Token(token::BinOp(token::And)));
- match self.token.kind {
- token::BinOp(token::And) => {
- self.bump();
- Ok(())
- }
- token::AndAnd => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- Ok(self.bump_with(token::BinOp(token::And), span))
- }
- _ => self.unexpected()
- }
- }
-
- /// Expects and consumes an `|`. If `||` is seen, replaces it with a single
- /// `|` and continues. If an `|` is not seen, signals an error.
- fn expect_or(&mut self) -> PResult<'a, ()> {
- self.expected_tokens.push(TokenType::Token(token::BinOp(token::Or)));
- match self.token.kind {
- token::BinOp(token::Or) => {
- self.bump();
- Ok(())
- }
- token::OrOr => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- Ok(self.bump_with(token::BinOp(token::Or), span))
- }
- _ => self.unexpected()
- }
- }
-
- /// Attempts to consume a `<`. If `<<` is seen, replaces it with a single
- /// `<` and continues. If `<-` is seen, eats the `<` and leaves a `-`
- /// in its place. If a `<` is not seen, returns `false`.
- ///
- /// This is meant to be used when parsing generics on a path to get the
- /// starting token.
- fn eat_lt(&mut self) -> bool {
- self.expected_tokens.push(TokenType::Token(token::Lt));
- let ate = match self.token.kind {
- token::Lt => {
- self.bump();
- true
- }
- token::BinOp(token::Shl) => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- self.bump_with(token::Lt, span);
- true
- }
- token::LArrow => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- self.bump_with(token::BinOp(token::Minus), span);
- true
- }
- _ => false,
- };
-
- if ate {
- // See doc comment for `unmatched_angle_bracket_count`.
- self.unmatched_angle_bracket_count += 1;
- self.max_angle_bracket_count += 1;
- debug!("eat_lt: (increment) count={:?}", self.unmatched_angle_bracket_count);
- }
-
- ate
- }
-
- fn expect_lt(&mut self) -> PResult<'a, ()> {
- if !self.eat_lt() {
- self.unexpected()
- } else {
- Ok(())
- }
- }
-
- /// Expects and consumes a single `>` token. If a `>>` is seen, replaces it
- /// with a single `>` and continues. If a `>` is not seen, signals an error.
- fn expect_gt(&mut self) -> PResult<'a, ()> {
- self.expected_tokens.push(TokenType::Token(token::Gt));
- let ate = match self.token.kind {
- token::Gt => {
- self.bump();
- Some(())
- }
- token::BinOp(token::Shr) => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- Some(self.bump_with(token::Gt, span))
- }
- token::BinOpEq(token::Shr) => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- Some(self.bump_with(token::Ge, span))
- }
- token::Ge => {
- let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
- Some(self.bump_with(token::Eq, span))
- }
- _ => None,
- };
-
- match ate {
- Some(_) => {
- // See doc comment for `unmatched_angle_bracket_count`.
- if self.unmatched_angle_bracket_count > 0 {
- self.unmatched_angle_bracket_count -= 1;
- debug!("expect_gt: (decrement) count={:?}", self.unmatched_angle_bracket_count);
- }
-
- Ok(())
- },
- None => self.unexpected(),
- }
- }
-
- /// Parses a sequence, including the closing delimiter. The function
- /// `f` must consume tokens until reaching the next separator or
- /// closing bracket.
- fn parse_seq_to_end<T>(
- &mut self,
- ket: &TokenKind,
- sep: SeqSep,
- f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
- ) -> PResult<'a, Vec<T>> {
- let (val, _, recovered) = self.parse_seq_to_before_end(ket, sep, f)?;
- if !recovered {
- self.bump();
- }
- Ok(val)
- }
-
- /// Parses a sequence, not including the closing delimiter. The function
- /// `f` must consume tokens until reaching the next separator or
- /// closing bracket.
- fn parse_seq_to_before_end<T>(
- &mut self,
- ket: &TokenKind,
- sep: SeqSep,
- f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
- ) -> PResult<'a, (Vec<T>, bool, bool)> {
- self.parse_seq_to_before_tokens(&[ket], sep, TokenExpectType::Expect, f)
- }
-
- fn expect_any_with_type(&mut self, kets: &[&TokenKind], expect: TokenExpectType) -> bool {
- kets.iter().any(|k| {
- match expect {
- TokenExpectType::Expect => self.check(k),
- TokenExpectType::NoExpect => self.token == **k,
- }
- })
- }
-
- fn parse_seq_to_before_tokens<T>(
- &mut self,
- kets: &[&TokenKind],
- sep: SeqSep,
- expect: TokenExpectType,
- mut f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
- ) -> PResult<'a, (Vec<T>, bool /* trailing */, bool /* recovered */)> {
- let mut first = true;
- let mut recovered = false;
- let mut trailing = false;
- let mut v = vec![];
- while !self.expect_any_with_type(kets, expect) {
- if let token::CloseDelim(..) | token::Eof = self.token.kind {
- break
- }
- if let Some(ref t) = sep.sep {
- if first {
- first = false;
- } else {
- match self.expect(t) {
- Ok(false) => {}
- Ok(true) => {
- recovered = true;
- break;
- }
- Err(mut e) => {
- // Attempt to keep parsing if it was a similar separator.
- if let Some(ref tokens) = t.similar_tokens() {
- if tokens.contains(&self.token.kind) {
- self.bump();
- }
- }
- e.emit();
- // Attempt to keep parsing if it was an omitted separator.
- match f(self) {
- Ok(t) => {
- v.push(t);
- continue;
- },
- Err(mut e) => {
- e.cancel();
- break;
- }
- }
- }
- }
- }
- }
- if sep.trailing_sep_allowed && self.expect_any_with_type(kets, expect) {
- trailing = true;
- break;
- }
-
- let t = f(self)?;
- v.push(t);
- }
-
- Ok((v, trailing, recovered))
- }
-
- /// Parses a sequence, including the closing delimiter. The function
- /// `f` must consume tokens until reaching the next separator or
- /// closing bracket.
- fn parse_unspanned_seq<T>(
- &mut self,
- bra: &TokenKind,
- ket: &TokenKind,
- sep: SeqSep,
- f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
- ) -> PResult<'a, (Vec<T>, bool)> {
- self.expect(bra)?;
- let (result, trailing, recovered) = self.parse_seq_to_before_end(ket, sep, f)?;
- if !recovered {
- self.eat(ket);
- }
- Ok((result, trailing))
- }
-
- fn parse_delim_comma_seq<T>(
- &mut self,
- delim: DelimToken,
- f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
- ) -> PResult<'a, (Vec<T>, bool)> {
- self.parse_unspanned_seq(
- &token::OpenDelim(delim),
- &token::CloseDelim(delim),
- SeqSep::trailing_allowed(token::Comma),
- f,
- )
- }
-
- fn parse_paren_comma_seq<T>(
- &mut self,
- f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
- ) -> PResult<'a, (Vec<T>, bool)> {
- self.parse_delim_comma_seq(token::Paren, f)
- }
-
- /// Advance the parser by one token.
- pub fn bump(&mut self) {
- if self.prev_token_kind == PrevTokenKind::Eof {
- // Bumping after EOF is a bad sign, usually an infinite loop.
- self.bug("attempted to bump the parser past EOF (may be stuck in a loop)");
- }
-
- self.prev_span = self.meta_var_span.take().unwrap_or(self.token.span);
-
- // Record last token kind for possible error recovery.
- self.prev_token_kind = match self.token.kind {
- token::DocComment(..) => PrevTokenKind::DocComment,
- token::Comma => PrevTokenKind::Comma,
- token::BinOp(token::Plus) => PrevTokenKind::Plus,
- token::BinOp(token::Or) => PrevTokenKind::BitOr,
- token::Interpolated(..) => PrevTokenKind::Interpolated,
- token::Eof => PrevTokenKind::Eof,
- token::Ident(..) => PrevTokenKind::Ident,
- _ => PrevTokenKind::Other,
- };
-
- self.token = self.next_tok();
- self.expected_tokens.clear();
- // Check after each token.
- self.process_potential_macro_variable();
- }
-
- /// Advances the parser using the provided token as the next one. Use this
- /// when consuming part of a token. For example, a single `<` from `<<`.
- fn bump_with(&mut self, next: TokenKind, span: Span) {
- self.prev_span = self.token.span.with_hi(span.lo());
- // It would be incorrect to record the kind of the current token, but
- // fortunately for tokens currently using `bump_with`, the
- // `prev_token_kind` will be of no use anyway.
- self.prev_token_kind = PrevTokenKind::Other;
- self.token = Token::new(next, span);
- self.expected_tokens.clear();
- }
-
- /// Looks ahead `dist` tokens past `self.token` and applies `looker` to that token.
- /// When `dist == 0`, the current token is inspected.
- pub fn look_ahead<R>(&self, dist: usize, looker: impl FnOnce(&Token) -> R) -> R {
- if dist == 0 {
- return looker(&self.token);
- }
-
- let frame = &self.token_cursor.frame;
- looker(&match frame.tree_cursor.look_ahead(dist - 1) {
- Some(tree) => match tree {
- TokenTree::Token(token) => token,
- TokenTree::Delimited(dspan, delim, _) =>
- Token::new(token::OpenDelim(delim), dspan.open),
- }
- None => Token::new(token::CloseDelim(frame.delim), frame.span.close)
- })
- }
-
- /// Returns whether any of the given keywords are `dist` tokens ahead of the current one.
- fn is_keyword_ahead(&self, dist: usize, kws: &[Symbol]) -> bool {
- self.look_ahead(dist, |t| kws.iter().any(|&kw| t.is_keyword(kw)))
- }
-
- /// Parses asyncness: `async` or nothing.
- fn parse_asyncness(&mut self) -> IsAsync {
- if self.eat_keyword(kw::Async) {
- IsAsync::Async {
- closure_id: DUMMY_NODE_ID,
- return_impl_trait_id: DUMMY_NODE_ID,
- }
- } else {
- IsAsync::NotAsync
- }
- }
-
- /// Parses unsafety: `unsafe` or nothing.
- fn parse_unsafety(&mut self) -> Unsafety {
- if self.eat_keyword(kw::Unsafe) {
- Unsafety::Unsafe
- } else {
- Unsafety::Normal
- }
- }
-
- /// Parses mutability (`mut` or nothing).
- fn parse_mutability(&mut self) -> Mutability {
- if self.eat_keyword(kw::Mut) {
- Mutability::Mutable
- } else {
- Mutability::Immutable
- }
- }
-
- /// Possibly parses mutability (`const` or `mut`).
- fn parse_const_or_mut(&mut self) -> Option<Mutability> {
- if self.eat_keyword(kw::Mut) {
- Some(Mutability::Mutable)
- } else if self.eat_keyword(kw::Const) {
- Some(Mutability::Immutable)
- } else {
- None
- }
- }
-
- fn parse_field_name(&mut self) -> PResult<'a, Ident> {
- if let token::Literal(token::Lit { kind: token::Integer, symbol, suffix }) =
- self.token.kind {
- self.expect_no_suffix(self.token.span, "a tuple index", suffix);
- self.bump();
- Ok(Ident::new(symbol, self.prev_span))
- } else {
- self.parse_ident_common(false)
- }
- }
-
- fn expect_delimited_token_tree(&mut self) -> PResult<'a, (MacDelimiter, TokenStream)> {
- let delim = match self.token.kind {
- token::OpenDelim(delim) => delim,
- _ => {
- let msg = "expected open delimiter";
- let mut err = self.fatal(msg);
- err.span_label(self.token.span, msg);
- return Err(err)
- }
- };
- let tts = match self.parse_token_tree() {
- TokenTree::Delimited(_, _, tts) => tts,
- _ => unreachable!(),
- };
- let delim = match delim {
- token::Paren => MacDelimiter::Parenthesis,
- token::Bracket => MacDelimiter::Bracket,
- token::Brace => MacDelimiter::Brace,
- token::NoDelim => self.bug("unexpected no delimiter"),
- };
- Ok((delim, tts.into()))
- }
-
- fn parse_or_use_outer_attributes(
- &mut self,
- already_parsed_attrs: Option<ThinVec<Attribute>>,
- ) -> PResult<'a, ThinVec<Attribute>> {
- if let Some(attrs) = already_parsed_attrs {
- Ok(attrs)
- } else {
- self.parse_outer_attributes().map(|a| a.into())
- }
- }
-
- pub fn process_potential_macro_variable(&mut self) {
- self.token = match self.token.kind {
- token::Dollar if self.token.span.from_expansion() &&
- self.look_ahead(1, |t| t.is_ident()) => {
- self.bump();
- let name = match self.token.kind {
- token::Ident(name, _) => name,
- _ => unreachable!()
- };
- let span = self.prev_span.to(self.token.span);
- self.diagnostic()
- .struct_span_fatal(span, &format!("unknown macro variable `{}`", name))
- .span_label(span, "unknown macro variable")
- .emit();
- self.bump();
- return
- }
- token::Interpolated(ref nt) => {
- self.meta_var_span = Some(self.token.span);
- // Interpolated identifier and lifetime tokens are replaced with usual identifier
- // and lifetime tokens, so the former are never encountered during normal parsing.
- match **nt {
- token::NtIdent(ident, is_raw) =>
- Token::new(token::Ident(ident.name, is_raw), ident.span),
- token::NtLifetime(ident) =>
- Token::new(token::Lifetime(ident.name), ident.span),
- _ => return,
- }
- }
- _ => return,
- };
- }
-
- /// Parses a single token tree from the input.
- pub fn parse_token_tree(&mut self) -> TokenTree {
- match self.token.kind {
- token::OpenDelim(..) => {
- let frame = mem::replace(&mut self.token_cursor.frame,
- self.token_cursor.stack.pop().unwrap());
- self.token.span = frame.span.entire();
- self.bump();
- TokenTree::Delimited(
- frame.span,
- frame.delim,
- frame.tree_cursor.stream.into(),
- )
- },
- token::CloseDelim(_) | token::Eof => unreachable!(),
- _ => {
- let token = self.token.take();
- self.bump();
- TokenTree::Token(token)
- }
- }
- }
-
- /// Parses a stream of tokens into a list of `TokenTree`s, up to EOF.
- pub fn parse_all_token_trees(&mut self) -> PResult<'a, Vec<TokenTree>> {
- let mut tts = Vec::new();
- while self.token != token::Eof {
- tts.push(self.parse_token_tree());
- }
- Ok(tts)
- }
-
- pub fn parse_tokens(&mut self) -> TokenStream {
- let mut result = Vec::new();
- loop {
- match self.token.kind {
- token::Eof | token::CloseDelim(..) => break,
- _ => result.push(self.parse_token_tree().into()),
- }
- }
- TokenStream::new(result)
- }
-
- /// Evaluates the closure with restrictions in place.
- ///
- /// After the closure is evaluated, restrictions are reset.
- fn with_res<T>(&mut self, res: Restrictions, f: impl FnOnce(&mut Self) -> T) -> T {
- let old = self.restrictions;
- self.restrictions = res;
- let res = f(self);
- self.restrictions = old;
- res
- }
-
- fn is_crate_vis(&self) -> bool {
- self.token.is_keyword(kw::Crate) && self.look_ahead(1, |t| t != &token::ModSep)
- }
-
- /// Parses `pub`, `pub(crate)` and `pub(in path)` plus shortcuts `crate` for `pub(crate)`,
- /// `pub(self)` for `pub(in self)` and `pub(super)` for `pub(in super)`.
- /// If the following element can't be a tuple (i.e., it's a function definition), then
- /// it's not a tuple struct field, and the contents within the parentheses aren't valid,
- /// so emit a proper diagnostic.
- pub fn parse_visibility(&mut self, can_take_tuple: bool) -> PResult<'a, Visibility> {
- maybe_whole!(self, NtVis, |x| x);
-
- self.expected_tokens.push(TokenType::Keyword(kw::Crate));
- if self.is_crate_vis() {
- self.bump(); // `crate`
- self.sess.gated_spans.crate_visibility_modifier.borrow_mut().push(self.prev_span);
- return Ok(respan(self.prev_span, VisibilityKind::Crate(CrateSugar::JustCrate)));
- }
-
- if !self.eat_keyword(kw::Pub) {
- // We need a span for our `Spanned<VisibilityKind>`, but there's inherently no
- // keyword to grab a span from for inherited visibility; an empty span at the
- // beginning of the current token would seem to be the "Schelling span".
- return Ok(respan(self.token.span.shrink_to_lo(), VisibilityKind::Inherited))
- }
- let lo = self.prev_span;
-
- if self.check(&token::OpenDelim(token::Paren)) {
- // We don't `self.bump()` the `(` yet because this might be a struct definition where
- // `()` or a tuple might be allowed. For example, `struct Struct(pub (), pub (usize));`.
- // Because of this, we only `bump` the `(` if we're assured it is appropriate to do so
- // by the following tokens.
- if self.is_keyword_ahead(1, &[kw::Crate])
- && self.look_ahead(2, |t| t != &token::ModSep) // account for `pub(crate::foo)`
- {
- // Parse `pub(crate)`.
- self.bump(); // `(`
- self.bump(); // `crate`
- self.expect(&token::CloseDelim(token::Paren))?; // `)`
- let vis = VisibilityKind::Crate(CrateSugar::PubCrate);
- return Ok(respan(lo.to(self.prev_span), vis));
- } else if self.is_keyword_ahead(1, &[kw::In]) {
- // Parse `pub(in path)`.
- self.bump(); // `(`
- self.bump(); // `in`
- let path = self.parse_path(PathStyle::Mod)?; // `path`
- self.expect(&token::CloseDelim(token::Paren))?; // `)`
- let vis = VisibilityKind::Restricted {
- path: P(path),
- id: ast::DUMMY_NODE_ID,
- };
- return Ok(respan(lo.to(self.prev_span), vis));
- } else if self.look_ahead(2, |t| t == &token::CloseDelim(token::Paren))
- && self.is_keyword_ahead(1, &[kw::Super, kw::SelfLower])
- {
- // Parse `pub(self)` or `pub(super)`.
- self.bump(); // `(`
- let path = self.parse_path(PathStyle::Mod)?; // `super`/`self`
- self.expect(&token::CloseDelim(token::Paren))?; // `)`
- let vis = VisibilityKind::Restricted {
- path: P(path),
- id: ast::DUMMY_NODE_ID,
- };
- return Ok(respan(lo.to(self.prev_span), vis));
- } else if !can_take_tuple { // Provide this diagnostic if this is not a tuple struct.
- self.recover_incorrect_vis_restriction()?;
- // Emit diagnostic, but continue with public visibility.
- }
- }
-
- Ok(respan(lo, VisibilityKind::Public))
- }
-
- /// Recovery for e.g. `pub(something) fn ...` or `struct X { pub(something) y: Z }`
- fn recover_incorrect_vis_restriction(&mut self) -> PResult<'a, ()> {
- self.bump(); // `(`
- let path = self.parse_path(PathStyle::Mod)?;
- self.expect(&token::CloseDelim(token::Paren))?; // `)`
-
- let msg = "incorrect visibility restriction";
- let suggestion = r##"some possible visibility restrictions are:
-`pub(crate)`: visible only on the current crate
-`pub(super)`: visible only in the current module's parent
-`pub(in path::to::module)`: visible only on the specified path"##;
-
- let path_str = pprust::path_to_string(&path);
-
- struct_span_err!(self.sess.span_diagnostic, path.span, E0704, "{}", msg)
- .help(suggestion)
- .span_suggestion(
- path.span,
- &format!("make this visible only to module `{}` with `in`", path_str),
- format!("in {}", path_str),
- Applicability::MachineApplicable,
- )
- .emit();
-
- Ok(())
- }
-
- /// Parses `extern` followed by an optional ABI string, or nothing.
- fn parse_extern_abi(&mut self) -> PResult<'a, Abi> {
- if self.eat_keyword(kw::Extern) {
- Ok(self.parse_opt_abi()?.unwrap_or(Abi::C))
- } else {
- Ok(Abi::Rust)
- }
- }
-
- /// Parses a string literal as an ABI spec on an extern type or module.
- fn parse_opt_abi(&mut self) -> PResult<'a, Option<Abi>> {
- match self.token.kind {
- token::Literal(token::Lit { kind: token::Str, symbol, suffix }) |
- token::Literal(token::Lit { kind: token::StrRaw(..), symbol, suffix }) => {
- self.expect_no_suffix(self.token.span, "an ABI spec", suffix);
- self.bump();
- match abi::lookup(&symbol.as_str()) {
- Some(abi) => Ok(Some(abi)),
- None => {
- self.error_on_invalid_abi(symbol);
- Ok(None)
- }
- }
- }
- _ => Ok(None),
- }
- }
-
- /// Emit an error where `symbol` is an invalid ABI.
- fn error_on_invalid_abi(&self, symbol: Symbol) {
- let prev_span = self.prev_span;
- struct_span_err!(
- self.sess.span_diagnostic,
- prev_span,
- E0703,
- "invalid ABI: found `{}`",
- symbol
- )
- .span_label(prev_span, "invalid ABI")
- .help(&format!("valid ABIs: {}", abi::all_names().join(", ")))
- .emit();
- }
-
- /// We are parsing `async fn`. If we are on Rust 2015, emit an error.
- fn ban_async_in_2015(&self, async_span: Span) {
- if async_span.rust_2015() {
- self.diagnostic()
- .struct_span_err_with_code(
- async_span,
- "`async fn` is not permitted in the 2015 edition",
- DiagnosticId::Error("E0670".into())
- )
- .emit();
- }
- }
-
- fn collect_tokens<R>(
- &mut self,
- f: impl FnOnce(&mut Self) -> PResult<'a, R>,
- ) -> PResult<'a, (R, TokenStream)> {
- // Record all tokens we parse when parsing this item.
- let mut tokens = Vec::new();
- let prev_collecting = match self.token_cursor.frame.last_token {
- LastToken::Collecting(ref mut list) => {
- Some(mem::take(list))
- }
- LastToken::Was(ref mut last) => {
- tokens.extend(last.take());
- None
- }
- };
- self.token_cursor.frame.last_token = LastToken::Collecting(tokens);
- let prev = self.token_cursor.stack.len();
- let ret = f(self);
- let last_token = if self.token_cursor.stack.len() == prev {
- &mut self.token_cursor.frame.last_token
- } else if self.token_cursor.stack.get(prev).is_none() {
- // This can happen due to a bad interaction of two unrelated recovery mechanisms with
- // mismatched delimiters *and* recovery lookahead on the likely typo `pub ident(`
- // (#62881).
- return Ok((ret?, TokenStream::default()));
- } else {
- &mut self.token_cursor.stack[prev].last_token
- };
-
- // Pull out the tokens that we've collected from the call to `f` above.
- let mut collected_tokens = match *last_token {
- LastToken::Collecting(ref mut v) => mem::take(v),
- LastToken::Was(ref was) => {
- let msg = format!("our vector went away? - found Was({:?})", was);
- debug!("collect_tokens: {}", msg);
- self.sess.span_diagnostic.delay_span_bug(self.token.span, &msg);
- // This can happen due to a bad interaction of two unrelated recovery mechanisms
- // with mismatched delimiters *and* recovery lookahead on the likely typo
- // `pub ident(` (#62895, different but similar to the case above).
- return Ok((ret?, TokenStream::default()));
- }
- };
-
- // If we're not at EOF our current token wasn't actually consumed by
- // `f`, but it'll still be in our list that we pulled out. In that case
- // put it back.
- let extra_token = if self.token != token::Eof {
- collected_tokens.pop()
- } else {
- None
- };
-
- // If we were previously collecting tokens, then this was a recursive
- // call. In that case we need to record all the tokens we collected in
- // our parent list as well. To do that we push a clone of our stream
- // onto the previous list.
- match prev_collecting {
- Some(mut list) => {
- list.extend(collected_tokens.iter().cloned());
- list.extend(extra_token);
- *last_token = LastToken::Collecting(list);
- }
- None => {
- *last_token = LastToken::Was(extra_token);
- }
- }
-
- Ok((ret?, TokenStream::new(collected_tokens)))
- }
-
- /// `::{` or `::*`
- fn is_import_coupler(&mut self) -> bool {
- self.check(&token::ModSep) &&
- self.look_ahead(1, |t| *t == token::OpenDelim(token::Brace) ||
- *t == token::BinOp(token::Star))
- }
-
- fn parse_optional_str(&mut self) -> Option<(Symbol, ast::StrStyle, Option<ast::Name>)> {
- let ret = match self.token.kind {
- token::Literal(token::Lit { kind: token::Str, symbol, suffix }) =>
- (symbol, ast::StrStyle::Cooked, suffix),
- token::Literal(token::Lit { kind: token::StrRaw(n), symbol, suffix }) =>
- (symbol, ast::StrStyle::Raw(n), suffix),
- _ => return None
- };
- self.bump();
- Some(ret)
- }
-
- pub fn parse_str(&mut self) -> PResult<'a, (Symbol, StrStyle)> {
- match self.parse_optional_str() {
- Some((s, style, suf)) => {
- let sp = self.prev_span;
- self.expect_no_suffix(sp, "a string literal", suf);
- Ok((s, style))
- }
- _ => {
- let msg = "expected string literal";
- let mut err = self.fatal(msg);
- err.span_label(self.token.span, msg);
- Err(err)
- }
- }
- }
-}
-
-crate fn make_unclosed_delims_error(
- unmatched: UnmatchedBrace,
- sess: &ParseSess,
-) -> Option<DiagnosticBuilder<'_>> {
- // `None` here means an `Eof` was found. We already emit those errors elsewhere; we add them
- // to `unmatched_braces` only for error recovery in the `Parser`.
- let found_delim = unmatched.found_delim?;
- let mut err = sess.span_diagnostic.struct_span_err(unmatched.found_span, &format!(
- "incorrect close delimiter: `{}`",
- pprust::token_kind_to_string(&token::CloseDelim(found_delim)),
- ));
- err.span_label(unmatched.found_span, "incorrect close delimiter");
- if let Some(sp) = unmatched.candidate_span {
- err.span_label(sp, "close delimiter possibly meant for this");
- }
- if let Some(sp) = unmatched.unclosed_span {
- err.span_label(sp, "un-closed delimiter");
- }
- Some(err)
-}
-
-pub fn emit_unclosed_delims(unclosed_delims: &mut Vec<UnmatchedBrace>, sess: &ParseSess) {
- *sess.reached_eof.borrow_mut() |= unclosed_delims.iter()
- .any(|unmatched_delim| unmatched_delim.found_delim.is_none());
- for unmatched in unclosed_delims.drain(..) {
- make_unclosed_delims_error(unmatched, sess).map(|mut e| e.emit());
- }
-}
-use super::{SeqSep, PResult, Parser, TokenType, PathStyle};
+use super::{SeqSep, Parser, TokenType, PathStyle};
use crate::attr;
use crate::ast;
-use crate::parse::token::{self, Nonterminal, DelimToken};
+use crate::util::comments;
+use crate::token::{self, Nonterminal, DelimToken};
use crate::tokenstream::{TokenStream, TokenTree};
use crate::source_map::Span;
+use syntax_pos::Symbol;
+use errors::PResult;
+
use log::debug;
#[derive(Debug)]
just_parsed_doc_comment = false;
}
token::DocComment(s) => {
- let attr = attr::mk_sugared_doc_attr(s, self.token.span);
+ let attr = self.mk_doc_comment(s);
if attr.style != ast::AttrStyle::Outer {
let mut err = self.fatal("expected outer doc comment");
err.note("inner doc comments like this (starting with \
Ok(attrs)
}
+ fn mk_doc_comment(&self, s: Symbol) -> ast::Attribute {
+ let style = comments::doc_comment_style(&s.as_str());
+ attr::mk_doc_comment(style, s, self.token.span)
+ }
+
/// Matches `attribute = # ! [ meta_item ]`.
///
/// If `permit_inner` is `true`, then a leading `!` indicates an inner
};
Ok(ast::Attribute {
- item,
+ kind: ast::AttrKind::Normal(item),
id: attr::mk_attr_id(),
style,
- is_sugared_doc: false,
span,
})
}
}
token::DocComment(s) => {
// We need to get the position of this token before we bump.
- let attr = attr::mk_sugared_doc_attr(s, self.token.span);
+ let attr = self.mk_doc_comment(s);
if attr.style == ast::AttrStyle::Inner {
attrs.push(attr);
self.bump();
-use super::{
- BlockMode, PathStyle, SemiColonMode, TokenType, TokenExpectType,
- SeqSep, PResult, Parser
-};
+use super::{BlockMode, PathStyle, SemiColonMode, TokenType, TokenExpectType, SeqSep, Parser};
use crate::ast::{
self, Param, BinOpKind, BindingMode, BlockCheckMode, Expr, ExprKind, Ident, Item, ItemKind,
Mutability, Pat, PatKind, PathSegment, QSelf, Ty, TyKind,
};
-use crate::parse::token::{self, TokenKind, token_can_begin_expr};
+use crate::token::{self, TokenKind, token_can_begin_expr};
use crate::print::pprust;
use crate::ptr::P;
use crate::symbol::{kw, sym};
use crate::ThinVec;
use crate::util::parser::AssocOp;
-use errors::{Applicability, DiagnosticBuilder, DiagnosticId, pluralize};
+
+use errors::{PResult, Applicability, DiagnosticBuilder, DiagnosticId, pluralize};
use rustc_data_structures::fx::FxHashSet;
use syntax_pos::{Span, DUMMY_SP, MultiSpan, SpanSnippetError};
use log::{debug, trace};
-use super::{Parser, PResult, Restrictions, PrevTokenKind, TokenType, PathStyle, BlockMode};
+use super::{Parser, Restrictions, PrevTokenKind, TokenType, PathStyle, BlockMode};
use super::{SemiColonMode, SeqSep, TokenExpectType};
use super::pat::{GateOr, PARAM_EXPECTED};
use super::diagnostics::Error;
-use crate::parse::literal::LitError;
-
use crate::ast::{
self, DUMMY_NODE_ID, Attribute, AttrStyle, Ident, CaptureBy, BlockCheckMode,
Expr, ExprKind, RangeLimits, Label, Movability, IsAsync, Arm, Ty, TyKind,
FunctionRetTy, Param, FnDecl, BinOpKind, BinOp, UnOp, Mac, AnonConst, Field, Lit,
};
use crate::maybe_recover_from_interpolated_ty_qpath;
-use crate::parse::classify;
-use crate::parse::token::{self, Token, TokenKind};
+use crate::token::{self, Token, TokenKind};
use crate::print::pprust;
use crate::ptr::P;
use crate::source_map::{self, Span};
-use crate::symbol::{kw, sym};
+use crate::util::classify;
+use crate::util::literal::LitError;
use crate::util::parser::{AssocOp, Fixity, prec_let_scrutinee_needs_par};
-use errors::Applicability;
+use errors::{PResult, Applicability};
+use syntax_pos::symbol::{kw, sym};
use syntax_pos::Symbol;
use std::mem;
use rustc_data_structures::thin_vec::ThinVec;
self.last_type_ascription = Some((self.prev_span, maybe_path));
lhs = self.parse_assoc_op_cast(lhs, lhs_span, ExprKind::Type)?;
- self.sess.gated_spans.type_ascription.borrow_mut().push(lhs.span);
+ self.sess.gated_spans.gate(sym::type_ascription, lhs.span);
continue
} else if op == AssocOp::DotDot || op == AssocOp::DotDotEq {
// If we didn’t have to handle `x..`/`x..=`, it would be pretty easy to
let e = self.parse_prefix_expr(None);
let (span, e) = self.interpolated_or_expr_span(e)?;
let span = lo.to(span);
- self.sess.gated_spans.box_syntax.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::box_syntax, span);
(span, ExprKind::Box(e))
}
token::Ident(..) if self.token.is_ident_named(sym::not) => {
}
let span = lo.to(hi);
- self.sess.gated_spans.yields.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::generators, span);
} else if self.eat_keyword(kw::Let) {
return self.parse_let_expr(attrs);
} else if is_span_rust_2018 && self.eat_keyword(kw::Await) {
Err(self.span_fatal(token.span, &msg))
}
Err(err) => {
- let (lit, span) = (token.expect_lit(), token.span);
+ let span = token.span;
+ let lit = match token.kind {
+ token::Literal(lit) => lit,
+ _ => unreachable!(),
+ };
self.bump();
self.error_literal_from_token(err, lit, span);
// Pack possible quotes and prefixes from the original literal into
outer_attrs: ThinVec<Attribute>,
) -> PResult<'a, P<Expr>> {
if let Some(label) = opt_label {
- self.sess.gated_spans.label_break_value.borrow_mut().push(label.ident.span);
+ self.sess.gated_spans.gate(sym::label_break_value, label.ident.span);
}
self.expect(&token::OpenDelim(token::Brace))?;
};
if asyncness.is_async() {
// Feature-gate `async ||` closures.
- self.sess.gated_spans.async_closure.borrow_mut().push(self.prev_span);
+ self.sess.gated_spans.gate(sym::async_closure, self.prev_span);
}
let capture_clause = self.parse_capture_clause();
if let ExprKind::Let(..) = cond.kind {
// Remove the last feature gating of a `let` expression since it's stable.
- let last = self.sess.gated_spans.let_chains.borrow_mut().pop();
- debug_assert_eq!(cond.span, last.unwrap());
+ self.sess.gated_spans.ungate_last(sym::let_chains, cond.span);
}
Ok(cond)
|this| this.parse_assoc_expr_with(1 + prec_let_scrutinee_needs_par(), None.into())
)?;
let span = lo.to(expr.span);
- self.sess.gated_spans.let_chains.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::let_chains, span);
Ok(self.mk_expr(span, ExprKind::Let(pat, expr), attrs))
}
Err(error)
} else {
let span = span_lo.to(body.span);
- self.sess.gated_spans.try_blocks.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::try_blocks, span);
Ok(self.mk_expr(span, ExprKind::TryBlock(body), attrs))
}
}
-use super::{Parser, PResult};
+use super::Parser;
use crate::ast::{self, WhereClause, GenericParam, GenericParamKind, GenericBounds, Attribute};
-use crate::parse::token;
+use crate::token;
use crate::source_map::DUMMY_SP;
-use crate::symbol::kw;
+
+use syntax_pos::symbol::{kw, sym};
+
+use errors::PResult;
impl<'a> Parser<'a> {
/// Parses bounds of a lifetime parameter `BOUND + BOUND + BOUND`, possibly with trailing `+`.
self.expect(&token::Colon)?;
let ty = self.parse_ty()?;
- self.sess.gated_spans.const_generics.borrow_mut().push(lo.to(self.prev_span));
+ self.sess.gated_spans.gate(sym::const_generics, lo.to(self.prev_span));
Ok(GenericParam {
ident,
-use super::{Parser, PResult, PathStyle};
+use super::{Parser, PathStyle};
use super::diagnostics::{Error, dummy_arg, ConsumeClosingDelim};
use crate::maybe_whole;
use crate::ptr::P;
-use crate::ast::{self, DUMMY_NODE_ID, Ident, Attribute, AttrStyle, AnonConst, Item, ItemKind};
-use crate::ast::{ImplItem, ImplItemKind, TraitItem, TraitItemKind, UseTree, UseTreeKind};
+use crate::ast::{self, Abi, DUMMY_NODE_ID, Ident, Attribute, AttrKind, AttrStyle, AnonConst, Item};
+use crate::ast::{ItemKind, ImplItem, ImplItemKind, TraitItem, TraitItemKind, UseTree, UseTreeKind};
use crate::ast::{PathSegment, IsAuto, Constness, IsAsync, Unsafety, Defaultness};
use crate::ast::{Visibility, VisibilityKind, Mutability, FnHeader, ForeignItem, ForeignItemKind};
use crate::ast::{Ty, TyKind, Generics, GenericBounds, TraitRef, EnumDef, VariantData, StructField};
use log::debug;
use std::mem;
-use rustc_target::spec::abi::Abi;
-use errors::{Applicability, DiagnosticBuilder, DiagnosticId, StashKey};
+use errors::{PResult, Applicability, DiagnosticBuilder, DiagnosticId, StashKey};
use syntax_pos::BytePos;
/// Whether the type alias or associated type is a concrete type or an opaque type.
return Ok(Some(self.parse_item_extern_crate(lo, vis, attrs)?));
}
- let opt_abi = self.parse_opt_abi()?;
+ let abi = self.parse_opt_abi()?;
if self.eat_keyword(kw::Fn) {
// EXTERN FUNCTION ITEM
unsafety: Unsafety::Normal,
asyncness: respan(fn_span, IsAsync::NotAsync),
constness: respan(fn_span, Constness::NotConst),
- abi: opt_abi.unwrap_or(Abi::C),
+ abi,
};
return self.parse_item_fn(lo, vis, attrs, header);
} else if self.check(&token::OpenDelim(token::Brace)) {
return Ok(Some(
- self.parse_item_foreign_mod(lo, opt_abi, vis, attrs, extern_sp)?,
+ self.parse_item_foreign_mod(lo, abi, vis, attrs, extern_sp)?,
));
}
let unsafety = self.parse_unsafety();
if self.check_keyword(kw::Extern) {
- self.sess.gated_spans.const_extern_fn.borrow_mut().push(
- lo.to(self.token.span)
- );
+ self.sess.gated_spans.gate(sym::const_extern_fn, lo.to(self.token.span));
}
let abi = self.parse_extern_abi()?;
self.bump(); // `fn`
unsafety,
asyncness,
constness: respan(fn_span, Constness::NotConst),
- abi: Abi::Rust,
+ abi: Abi::new(sym::Rust, fn_span),
};
return self.parse_item_fn(lo, vis, attrs, header);
}
unsafety: Unsafety::Normal,
asyncness: respan(fn_span, IsAsync::NotAsync),
constness: respan(fn_span, Constness::NotConst),
- abi: Abi::Rust,
+ abi: Abi::new(sym::Rust, fn_span),
};
return self.parse_item_fn(lo, vis, attrs, header);
}
/// Emits an expected-item-after-attributes error.
fn expected_item_err(&mut self, attrs: &[Attribute]) -> PResult<'a, ()> {
let message = match attrs.last() {
- Some(&Attribute { is_sugared_doc: true, .. }) => "expected item after doc comment",
- _ => "expected item after attributes",
+ Some(&Attribute { kind: AttrKind::DocComment(_), .. }) =>
+ "expected item after doc comment",
+ _ =>
+ "expected item after attributes",
};
let mut err = self.diagnostic().struct_span_err(self.prev_span, message);
- if attrs.last().unwrap().is_sugared_doc {
+ if attrs.last().unwrap().is_doc_comment() {
err.span_label(self.prev_span, "this doc comment doesn't document anything");
}
Err(err)
.emit();
}
- self.sess.gated_spans.trait_alias.borrow_mut().push(whole_span);
+ self.sess.gated_spans.gate(sym::trait_alias, whole_span);
Ok((ident, ItemKind::TraitAlias(tps, bounds), None))
} else {
fn parse_item_foreign_mod(
&mut self,
lo: Span,
- opt_abi: Option<Abi>,
+ abi: Abi,
visibility: Visibility,
mut attrs: Vec<Attribute>,
extern_sp: Span,
) -> PResult<'a, P<Item>> {
self.expect(&token::OpenDelim(token::Brace))?;
- let abi = opt_abi.unwrap_or(Abi::C);
-
attrs.extend(self.parse_inner_attributes()?);
let mut foreign_items = vec![];
let span = lo.to(self.prev_span);
if !def.legacy {
- self.sess.gated_spans.decl_macro.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::decl_macro, span);
}
Ok(Some(self.mk_item(span, ident, ItemKind::MacroDef(def), vis.clone(), attrs.to_vec())))
) -> PResult<'a, Option<P<Item>>> {
let (ident, decl, generics) = self.parse_fn_sig(ParamCfg {
is_self_allowed: false,
- allow_c_variadic: header.abi == Abi::C && header.unsafety == Unsafety::Unsafe,
+ allow_c_variadic: header.abi.symbol == sym::C && header.unsafety == Unsafety::Unsafe,
is_name_required: |_| true,
})?;
let (inner_attrs, body) = self.parse_inner_attrs_and_block()?;
let asyncness = respan(self.prev_span, asyncness);
let unsafety = self.parse_unsafety();
let (constness, unsafety, abi) = if is_const_fn {
- (respan(const_span, Constness::Const), unsafety, Abi::Rust)
+ (respan(const_span, Constness::Const), unsafety, Abi::default())
} else {
let abi = self.parse_extern_abi()?;
(respan(self.prev_span, Constness::NotConst), unsafety, abi)
--- /dev/null
+pub mod attr;
+mod expr;
+mod pat;
+mod item;
+mod module;
+mod ty;
+mod path;
+pub use path::PathStyle;
+mod stmt;
+mod generics;
+mod diagnostics;
+use diagnostics::Error;
+
+use crate::ast::{
+ self, Abi, DUMMY_NODE_ID, AttrStyle, Attribute, CrateSugar, Ident,
+ IsAsync, MacDelimiter, Mutability, StrStyle, Visibility, VisibilityKind, Unsafety,
+};
+use crate::parse::{Directory, DirectoryOwnership};
+use crate::parse::lexer::UnmatchedBrace;
+use crate::util::comments::{doc_comment_style, strip_doc_comment_decoration};
+use crate::token::{self, Token, TokenKind, DelimToken};
+use crate::print::pprust;
+use crate::ptr::P;
+use crate::sess::ParseSess;
+use crate::source_map::respan;
+use crate::symbol::{kw, sym, Symbol};
+use crate::tokenstream::{self, DelimSpan, TokenTree, TokenStream, TreeAndJoint};
+use crate::ThinVec;
+
+use errors::{PResult, Applicability, DiagnosticBuilder, DiagnosticId, FatalError};
+use syntax_pos::{Span, BytePos, DUMMY_SP, FileName};
+use log::debug;
+
+use std::borrow::Cow;
+use std::{cmp, mem, slice};
+use std::path::PathBuf;
+
+bitflags::bitflags! {
+ struct Restrictions: u8 {
+ const STMT_EXPR = 1 << 0;
+ const NO_STRUCT_LITERAL = 1 << 1;
+ }
+}
+
+#[derive(Clone, Copy, PartialEq, Debug)]
+enum SemiColonMode {
+ Break,
+ Ignore,
+ Comma,
+}
+
+#[derive(Clone, Copy, PartialEq, Debug)]
+enum BlockMode {
+ Break,
+ Ignore,
+}
+
+/// Like `maybe_whole_expr`, but for things other than expressions.
+#[macro_export]
+macro_rules! maybe_whole {
+ ($p:expr, $constructor:ident, |$x:ident| $e:expr) => {
+ if let token::Interpolated(nt) = &$p.token.kind {
+ if let token::$constructor(x) = &**nt {
+ let $x = x.clone();
+ $p.bump();
+ return Ok($e);
+ }
+ }
+ };
+}
+
+/// If the next tokens are ill-formed `$ty::` recover them as `<$ty>::`.
+#[macro_export]
+macro_rules! maybe_recover_from_interpolated_ty_qpath {
+ ($self: expr, $allow_qpath_recovery: expr) => {
+ if $allow_qpath_recovery && $self.look_ahead(1, |t| t == &token::ModSep) {
+ if let token::Interpolated(nt) = &$self.token.kind {
+ if let token::NtTy(ty) = &**nt {
+ let ty = ty.clone();
+ $self.bump();
+ return $self.maybe_recover_from_bad_qpath_stage_2($self.prev_span, ty);
+ }
+ }
+ }
+ }
+}
+
+#[derive(Debug, Clone, Copy, PartialEq)]
+enum PrevTokenKind {
+ DocComment,
+ Comma,
+ Plus,
+ Interpolated,
+ Eof,
+ Ident,
+ BitOr,
+ Other,
+}
+
+// NOTE: `Ident`s are handled by `common.rs`.
+
+#[derive(Clone)]
+pub struct Parser<'a> {
+ pub sess: &'a ParseSess,
+ /// The current normalized token.
+ /// "Normalized" means that some interpolated tokens
+ /// (`$i: ident` and `$l: lifetime` meta-variables) are replaced
+ /// with the non-interpolated identifier and lifetime tokens they refer to.
+ /// Perhaps the normalized / non-normalized setup can be simplified somehow.
+ pub token: Token,
+ /// The span of the current non-normalized token.
+ meta_var_span: Option<Span>,
+ /// The span of the previous non-normalized token.
+ pub prev_span: Span,
+ /// The kind of the previous normalized token (in simplified form).
+ prev_token_kind: PrevTokenKind,
+ restrictions: Restrictions,
+ /// Used to determine the path to externally loaded source files.
+ pub(super) directory: Directory<'a>,
+ /// `true` to parse sub-modules in other files.
+ pub(super) recurse_into_file_modules: bool,
+ /// Name of the root module this parser originated from. If `None`, then the
+ /// name is not known. This does not change while the parser is descending
+ /// into modules, and sub-parsers have new values for this name.
+ pub root_module_name: Option<String>,
+ expected_tokens: Vec<TokenType>,
+ token_cursor: TokenCursor,
+ desugar_doc_comments: bool,
+ /// `true` if we should configure out-of-line modules as we parse.
+ cfg_mods: bool,
+ /// This field is used to keep track of how many left angle brackets we have seen. This is
+ /// required in order to detect extra leading left angle brackets (`<` characters) and error
+ /// appropriately.
+ ///
+ /// See the comments in the `parse_path_segment` function for more details.
+ unmatched_angle_bracket_count: u32,
+ max_angle_bracket_count: u32,
+ /// A list of all unclosed delimiters found by the lexer. If an entry is used for error recovery
+ /// it gets removed from here. Every entry left at the end gets emitted as an independent
+ /// error.
+ pub(super) unclosed_delims: Vec<UnmatchedBrace>,
+ last_unexpected_token_span: Option<Span>,
+ pub last_type_ascription: Option<(Span, bool /* likely path typo */)>,
+ /// If present, this `Parser` is not parsing Rust code but rather a macro call.
+ subparser_name: Option<&'static str>,
+}
+
+impl<'a> Drop for Parser<'a> {
+ fn drop(&mut self) {
+ emit_unclosed_delims(&mut self.unclosed_delims, &self.sess);
+ }
+}
+
+#[derive(Clone)]
+struct TokenCursor {
+ frame: TokenCursorFrame,
+ stack: Vec<TokenCursorFrame>,
+}
+
+#[derive(Clone)]
+struct TokenCursorFrame {
+ delim: token::DelimToken,
+ span: DelimSpan,
+ open_delim: bool,
+ tree_cursor: tokenstream::Cursor,
+ close_delim: bool,
+ last_token: LastToken,
+}
+
+/// This is used in `TokenCursorFrame` above to track tokens that are consumed
+/// by the parser, and then that's transitively used to record the tokens that
+/// each parsed AST item is created with.
+///
+/// Right now this has two states, either collecting tokens or not collecting
+/// tokens. If we're collecting tokens we just save everything off into a local
+/// `Vec`. Eventually, though, this should likely save tokens from the original
+/// token stream and just use slicing of token streams to avoid creation of a
+/// whole new vector.
+///
+/// The second state is where we're passively not recording tokens, but the last
+/// token is still tracked for when we want to start recording tokens. This
+/// "last token" means that when we start recording tokens we'll want to ensure
+/// that this, the first token, is included in the output.
+///
+/// You can find some more example usage of this in the `collect_tokens` method
+/// on the parser.
+#[derive(Clone)]
+enum LastToken {
+ Collecting(Vec<TreeAndJoint>),
+ Was(Option<TreeAndJoint>),
+}
+
+impl TokenCursorFrame {
+ fn new(span: DelimSpan, delim: DelimToken, tts: &TokenStream) -> Self {
+ TokenCursorFrame {
+ delim,
+ span,
+ open_delim: delim == token::NoDelim,
+ tree_cursor: tts.clone().into_trees(),
+ close_delim: delim == token::NoDelim,
+ last_token: LastToken::Was(None),
+ }
+ }
+}
+
+impl TokenCursor {
+ fn next(&mut self) -> Token {
+ loop {
+ let tree = if !self.frame.open_delim {
+ self.frame.open_delim = true;
+ TokenTree::open_tt(self.frame.span, self.frame.delim)
+ } else if let Some(tree) = self.frame.tree_cursor.next() {
+ tree
+ } else if !self.frame.close_delim {
+ self.frame.close_delim = true;
+ TokenTree::close_tt(self.frame.span, self.frame.delim)
+ } else if let Some(frame) = self.stack.pop() {
+ self.frame = frame;
+ continue
+ } else {
+ return Token::new(token::Eof, DUMMY_SP);
+ };
+
+ match self.frame.last_token {
+ LastToken::Collecting(ref mut v) => v.push(tree.clone().into()),
+ LastToken::Was(ref mut t) => *t = Some(tree.clone().into()),
+ }
+
+ match tree {
+ TokenTree::Token(token) => return token,
+ TokenTree::Delimited(sp, delim, tts) => {
+ let frame = TokenCursorFrame::new(sp, delim, &tts);
+ self.stack.push(mem::replace(&mut self.frame, frame));
+ }
+ }
+ }
+ }
+
+ fn next_desugared(&mut self) -> Token {
+ let (name, sp) = match self.next() {
+ Token { kind: token::DocComment(name), span } => (name, span),
+ tok => return tok,
+ };
+
+ let stripped = strip_doc_comment_decoration(&name.as_str());
+
+ // Searches for the occurrences of `"#*` and returns the minimum number of `#`s
+ // required to wrap the text.
+ let mut num_of_hashes = 0;
+ let mut count = 0;
+ for ch in stripped.chars() {
+ count = match ch {
+ '"' => 1,
+ '#' if count > 0 => count + 1,
+ _ => 0,
+ };
+ num_of_hashes = cmp::max(num_of_hashes, count);
+ }
+
+ let delim_span = DelimSpan::from_single(sp);
+ let body = TokenTree::Delimited(
+ delim_span,
+ token::Bracket,
+ [
+ TokenTree::token(token::Ident(sym::doc, false), sp),
+ TokenTree::token(token::Eq, sp),
+ TokenTree::token(TokenKind::lit(
+ token::StrRaw(num_of_hashes), Symbol::intern(&stripped), None
+ ), sp),
+ ]
+ .iter().cloned().collect::<TokenStream>().into(),
+ );
+
+ self.stack.push(mem::replace(&mut self.frame, TokenCursorFrame::new(
+ delim_span,
+ token::NoDelim,
+ &if doc_comment_style(&name.as_str()) == AttrStyle::Inner {
+ [TokenTree::token(token::Pound, sp), TokenTree::token(token::Not, sp), body]
+ .iter().cloned().collect::<TokenStream>()
+ } else {
+ [TokenTree::token(token::Pound, sp), body]
+ .iter().cloned().collect::<TokenStream>()
+ },
+ )));
+
+ self.next()
+ }
+}
+
+#[derive(Clone, PartialEq)]
+enum TokenType {
+ Token(TokenKind),
+ Keyword(Symbol),
+ Operator,
+ Lifetime,
+ Ident,
+ Path,
+ Type,
+ Const,
+}
+
+impl TokenType {
+ fn to_string(&self) -> String {
+ match *self {
+ TokenType::Token(ref t) => format!("`{}`", pprust::token_kind_to_string(t)),
+ TokenType::Keyword(kw) => format!("`{}`", kw),
+ TokenType::Operator => "an operator".to_string(),
+ TokenType::Lifetime => "lifetime".to_string(),
+ TokenType::Ident => "identifier".to_string(),
+ TokenType::Path => "path".to_string(),
+ TokenType::Type => "type".to_string(),
+ TokenType::Const => "const".to_string(),
+ }
+ }
+}
+
+#[derive(Copy, Clone, Debug)]
+enum TokenExpectType {
+ Expect,
+ NoExpect,
+}
+
+/// A sequence separator.
+struct SeqSep {
+ /// The separator token.
+ sep: Option<TokenKind>,
+ /// `true` if a trailing separator is allowed.
+ trailing_sep_allowed: bool,
+}
+
+impl SeqSep {
+ fn trailing_allowed(t: TokenKind) -> SeqSep {
+ SeqSep {
+ sep: Some(t),
+ trailing_sep_allowed: true,
+ }
+ }
+
+ fn none() -> SeqSep {
+ SeqSep {
+ sep: None,
+ trailing_sep_allowed: false,
+ }
+ }
+}
+
+impl<'a> Parser<'a> {
+ pub fn new(
+ sess: &'a ParseSess,
+ tokens: TokenStream,
+ directory: Option<Directory<'a>>,
+ recurse_into_file_modules: bool,
+ desugar_doc_comments: bool,
+ subparser_name: Option<&'static str>,
+ ) -> Self {
+ let mut parser = Parser {
+ sess,
+ token: Token::dummy(),
+ prev_span: DUMMY_SP,
+ meta_var_span: None,
+ prev_token_kind: PrevTokenKind::Other,
+ restrictions: Restrictions::empty(),
+ recurse_into_file_modules,
+ directory: Directory {
+ path: Cow::from(PathBuf::new()),
+ ownership: DirectoryOwnership::Owned { relative: None }
+ },
+ root_module_name: None,
+ expected_tokens: Vec::new(),
+ token_cursor: TokenCursor {
+ frame: TokenCursorFrame::new(
+ DelimSpan::dummy(),
+ token::NoDelim,
+ &tokens.into(),
+ ),
+ stack: Vec::new(),
+ },
+ desugar_doc_comments,
+ cfg_mods: true,
+ unmatched_angle_bracket_count: 0,
+ max_angle_bracket_count: 0,
+ unclosed_delims: Vec::new(),
+ last_unexpected_token_span: None,
+ last_type_ascription: None,
+ subparser_name,
+ };
+
+ parser.token = parser.next_tok();
+
+ if let Some(directory) = directory {
+ parser.directory = directory;
+ } else if !parser.token.span.is_dummy() {
+ if let Some(FileName::Real(path)) =
+ &sess.source_map().lookup_char_pos(parser.token.span.lo()).file.unmapped_path {
+ if let Some(directory_path) = path.parent() {
+ parser.directory.path = Cow::from(directory_path.to_path_buf());
+ }
+ }
+ }
+
+ parser.process_potential_macro_variable();
+ parser
+ }
+
+ fn next_tok(&mut self) -> Token {
+ let mut next = if self.desugar_doc_comments {
+ self.token_cursor.next_desugared()
+ } else {
+ self.token_cursor.next()
+ };
+ if next.span.is_dummy() {
+ // Tweak the location for better diagnostics, but keep syntactic context intact.
+ next.span = self.prev_span.with_ctxt(next.span.ctxt());
+ }
+ next
+ }
+
+ /// Converts the current token to a string using `self`'s reader.
+ pub fn this_token_to_string(&self) -> String {
+ pprust::token_to_string(&self.token)
+ }
+
+ fn token_descr(&self) -> Option<&'static str> {
+ Some(match &self.token.kind {
+ _ if self.token.is_special_ident() => "reserved identifier",
+ _ if self.token.is_used_keyword() => "keyword",
+ _ if self.token.is_unused_keyword() => "reserved keyword",
+ token::DocComment(..) => "doc comment",
+ _ => return None,
+ })
+ }
+
+ pub(super) fn this_token_descr(&self) -> String {
+ if let Some(prefix) = self.token_descr() {
+ format!("{} `{}`", prefix, self.this_token_to_string())
+ } else {
+ format!("`{}`", self.this_token_to_string())
+ }
+ }
+
+ crate fn unexpected<T>(&mut self) -> PResult<'a, T> {
+ match self.expect_one_of(&[], &[]) {
+ Err(e) => Err(e),
+ Ok(_) => unreachable!(),
+ }
+ }
+
+ /// Expects and consumes the token `t`. Signals an error if the next token is not `t`.
+ pub fn expect(&mut self, t: &TokenKind) -> PResult<'a, bool /* recovered */> {
+ if self.expected_tokens.is_empty() {
+ if self.token == *t {
+ self.bump();
+ Ok(false)
+ } else {
+ self.unexpected_try_recover(t)
+ }
+ } else {
+ self.expect_one_of(slice::from_ref(t), &[])
+ }
+ }
+
+ /// Expects the next token to be an edible or inedible token. If edible,
+ /// consumes it; if inedible, returns without consuming
+ /// anything. Signals a fatal error if the next token is unexpected.
+ pub fn expect_one_of(
+ &mut self,
+ edible: &[TokenKind],
+ inedible: &[TokenKind],
+ ) -> PResult<'a, bool /* recovered */> {
+ if edible.contains(&self.token.kind) {
+ self.bump();
+ Ok(false)
+ } else if inedible.contains(&self.token.kind) {
+ // leave it in the input
+ Ok(false)
+ } else if self.last_unexpected_token_span == Some(self.token.span) {
+ FatalError.raise();
+ } else {
+ self.expected_one_of_not_found(edible, inedible)
+ }
+ }
+
+ fn parse_ident(&mut self) -> PResult<'a, ast::Ident> {
+ self.parse_ident_common(true)
+ }
+
+ fn parse_ident_common(&mut self, recover: bool) -> PResult<'a, ast::Ident> {
+ match self.token.kind {
+ token::Ident(name, _) => {
+ if self.token.is_reserved_ident() {
+ let mut err = self.expected_ident_found();
+ if recover {
+ err.emit();
+ } else {
+ return Err(err);
+ }
+ }
+ let span = self.token.span;
+ self.bump();
+ Ok(Ident::new(name, span))
+ }
+ _ => {
+ Err(if self.prev_token_kind == PrevTokenKind::DocComment {
+ self.span_fatal_err(self.prev_span, Error::UselessDocComment)
+ } else {
+ self.expected_ident_found()
+ })
+ }
+ }
+ }
+
+ /// Checks if the next token is `tok`, and returns `true` if so.
+ ///
+ /// This method will automatically add `tok` to `expected_tokens` if `tok` is not
+ /// encountered.
+ fn check(&mut self, tok: &TokenKind) -> bool {
+ let is_present = self.token == *tok;
+ if !is_present { self.expected_tokens.push(TokenType::Token(tok.clone())); }
+ is_present
+ }
+
+ /// Consumes the token `tok` if present. Returns whether the given token was present.
+ pub fn eat(&mut self, tok: &TokenKind) -> bool {
+ let is_present = self.check(tok);
+ if is_present { self.bump() }
+ is_present
+ }
+
+ /// If the next token is the given keyword, returns `true` without eating it.
+ /// An expectation is also added for diagnostics purposes.
+ fn check_keyword(&mut self, kw: Symbol) -> bool {
+ self.expected_tokens.push(TokenType::Keyword(kw));
+ self.token.is_keyword(kw)
+ }
+
+ /// If the next token is the given keyword, eats it and returns `true`.
+ /// Otherwise, returns `false`. An expectation is also added for diagnostics purposes.
+ fn eat_keyword(&mut self, kw: Symbol) -> bool {
+ if self.check_keyword(kw) {
+ self.bump();
+ true
+ } else {
+ false
+ }
+ }
+
+ fn eat_keyword_noexpect(&mut self, kw: Symbol) -> bool {
+ if self.token.is_keyword(kw) {
+ self.bump();
+ true
+ } else {
+ false
+ }
+ }
+
+ /// If the next token is the given keyword, eats it.
+ /// Otherwise, signals an error.
+ fn expect_keyword(&mut self, kw: Symbol) -> PResult<'a, ()> {
+ if !self.eat_keyword(kw) {
+ self.unexpected()
+ } else {
+ Ok(())
+ }
+ }
+
+ fn check_or_expected(&mut self, ok: bool, typ: TokenType) -> bool {
+ if ok {
+ true
+ } else {
+ self.expected_tokens.push(typ);
+ false
+ }
+ }
+
+ fn check_ident(&mut self) -> bool {
+ self.check_or_expected(self.token.is_ident(), TokenType::Ident)
+ }
+
+ fn check_path(&mut self) -> bool {
+ self.check_or_expected(self.token.is_path_start(), TokenType::Path)
+ }
+
+ fn check_type(&mut self) -> bool {
+ self.check_or_expected(self.token.can_begin_type(), TokenType::Type)
+ }
+
+ fn check_const_arg(&mut self) -> bool {
+ self.check_or_expected(self.token.can_begin_const_arg(), TokenType::Const)
+ }
+
+ /// Checks to see if the next token is either `+` or `+=`;
+ /// otherwise returns `false`.
+ fn check_plus(&mut self) -> bool {
+ self.check_or_expected(
+ self.token.is_like_plus(),
+ TokenType::Token(token::BinOp(token::Plus)),
+ )
+ }
+
+ /// Expects and consumes a `+`. If `+=` is seen, replaces it with a `=`
+ /// and continues. If a `+` is not seen, returns `false`.
+ ///
+ /// This is used when token-splitting `+=` into `+`.
+ /// See issue #47856 for an example of when this may occur.
+ fn eat_plus(&mut self) -> bool {
+ self.expected_tokens.push(TokenType::Token(token::BinOp(token::Plus)));
+ match self.token.kind {
+ token::BinOp(token::Plus) => {
+ self.bump();
+ true
+ }
+ token::BinOpEq(token::Plus) => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ self.bump_with(token::Eq, span);
+ true
+ }
+ _ => false,
+ }
+ }
+
+ /// Expects and consumes an `&`. If `&&` is seen, replaces it with a single
+ /// `&` and continues. If an `&` is not seen, signals an error.
+ fn expect_and(&mut self) -> PResult<'a, ()> {
+ self.expected_tokens.push(TokenType::Token(token::BinOp(token::And)));
+ match self.token.kind {
+ token::BinOp(token::And) => {
+ self.bump();
+ Ok(())
+ }
+ token::AndAnd => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ Ok(self.bump_with(token::BinOp(token::And), span))
+ }
+ _ => self.unexpected()
+ }
+ }
+
+ /// Expects and consumes an `|`. If `||` is seen, replaces it with a single
+ /// `|` and continues. If an `|` is not seen, signals an error.
+ fn expect_or(&mut self) -> PResult<'a, ()> {
+ self.expected_tokens.push(TokenType::Token(token::BinOp(token::Or)));
+ match self.token.kind {
+ token::BinOp(token::Or) => {
+ self.bump();
+ Ok(())
+ }
+ token::OrOr => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ Ok(self.bump_with(token::BinOp(token::Or), span))
+ }
+ _ => self.unexpected()
+ }
+ }
+
+ /// Attempts to consume a `<`. If `<<` is seen, replaces it with a single
+ /// `<` and continues. If `<-` is seen, consumes the `<` and leaves a `-`,
+ /// continuing. If a `<` is not seen, returns `false`.
+ ///
+ /// This is meant to be used when parsing generics on a path to get the
+ /// starting token.
+ fn eat_lt(&mut self) -> bool {
+ self.expected_tokens.push(TokenType::Token(token::Lt));
+ let ate = match self.token.kind {
+ token::Lt => {
+ self.bump();
+ true
+ }
+ token::BinOp(token::Shl) => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ self.bump_with(token::Lt, span);
+ true
+ }
+ token::LArrow => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ self.bump_with(token::BinOp(token::Minus), span);
+ true
+ }
+ _ => false,
+ };
+
+ if ate {
+ // See doc comment for `unmatched_angle_bracket_count`.
+ self.unmatched_angle_bracket_count += 1;
+ self.max_angle_bracket_count += 1;
+ debug!("eat_lt: (increment) count={:?}", self.unmatched_angle_bracket_count);
+ }
+
+ ate
+ }
+
+ fn expect_lt(&mut self) -> PResult<'a, ()> {
+ if !self.eat_lt() {
+ self.unexpected()
+ } else {
+ Ok(())
+ }
+ }
+
+ /// Expects and consumes a single `>` token. If a `>>` is seen, replaces it
+ /// with a single `>` and continues. If a `>` is not seen, signals an error.
+ fn expect_gt(&mut self) -> PResult<'a, ()> {
+ self.expected_tokens.push(TokenType::Token(token::Gt));
+ let ate = match self.token.kind {
+ token::Gt => {
+ self.bump();
+ Some(())
+ }
+ token::BinOp(token::Shr) => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ Some(self.bump_with(token::Gt, span))
+ }
+ token::BinOpEq(token::Shr) => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ Some(self.bump_with(token::Ge, span))
+ }
+ token::Ge => {
+ let span = self.token.span.with_lo(self.token.span.lo() + BytePos(1));
+ Some(self.bump_with(token::Eq, span))
+ }
+ _ => None,
+ };
+
+ match ate {
+ Some(_) => {
+ // See doc comment for `unmatched_angle_bracket_count`.
+ if self.unmatched_angle_bracket_count > 0 {
+ self.unmatched_angle_bracket_count -= 1;
+ debug!("expect_gt: (decrement) count={:?}", self.unmatched_angle_bracket_count);
+ }
+
+ Ok(())
+ },
+ None => self.unexpected(),
+ }
+ }
+
+ /// Parses a sequence, including the closing delimiter. The function
+ /// `f` must consume tokens until reaching the next separator or
+ /// closing bracket.
+ fn parse_seq_to_end<T>(
+ &mut self,
+ ket: &TokenKind,
+ sep: SeqSep,
+ f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
+ ) -> PResult<'a, Vec<T>> {
+ let (val, _, recovered) = self.parse_seq_to_before_end(ket, sep, f)?;
+ if !recovered {
+ self.bump();
+ }
+ Ok(val)
+ }
+
+ /// Parses a sequence, not including the closing delimiter. The function
+ /// `f` must consume tokens until reaching the next separator or
+ /// closing bracket.
+ fn parse_seq_to_before_end<T>(
+ &mut self,
+ ket: &TokenKind,
+ sep: SeqSep,
+ f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
+ ) -> PResult<'a, (Vec<T>, bool, bool)> {
+ self.parse_seq_to_before_tokens(&[ket], sep, TokenExpectType::Expect, f)
+ }
+
+ fn expect_any_with_type(&mut self, kets: &[&TokenKind], expect: TokenExpectType) -> bool {
+ kets.iter().any(|k| {
+ match expect {
+ TokenExpectType::Expect => self.check(k),
+ TokenExpectType::NoExpect => self.token == **k,
+ }
+ })
+ }
+
+ fn parse_seq_to_before_tokens<T>(
+ &mut self,
+ kets: &[&TokenKind],
+ sep: SeqSep,
+ expect: TokenExpectType,
+ mut f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
+ ) -> PResult<'a, (Vec<T>, bool /* trailing */, bool /* recovered */)> {
+ let mut first = true;
+ let mut recovered = false;
+ let mut trailing = false;
+ let mut v = vec![];
+ while !self.expect_any_with_type(kets, expect) {
+ if let token::CloseDelim(..) | token::Eof = self.token.kind {
+ break
+ }
+ if let Some(ref t) = sep.sep {
+ if first {
+ first = false;
+ } else {
+ match self.expect(t) {
+ Ok(false) => {}
+ Ok(true) => {
+ recovered = true;
+ break;
+ }
+ Err(mut e) => {
+ // Attempt to keep parsing if it was a similar separator.
+ if let Some(ref tokens) = t.similar_tokens() {
+ if tokens.contains(&self.token.kind) {
+ self.bump();
+ }
+ }
+ e.emit();
+ // Attempt to keep parsing if it was an omitted separator.
+ match f(self) {
+ Ok(t) => {
+ v.push(t);
+ continue;
+ },
+ Err(mut e) => {
+ e.cancel();
+ break;
+ }
+ }
+ }
+ }
+ }
+ }
+ if sep.trailing_sep_allowed && self.expect_any_with_type(kets, expect) {
+ trailing = true;
+ break;
+ }
+
+ let t = f(self)?;
+ v.push(t);
+ }
+
+ Ok((v, trailing, recovered))
+ }
+
+ /// Parses a sequence, including the closing delimiter. The function
+ /// `f` must consume tokens until reaching the next separator or
+ /// closing bracket.
+ fn parse_unspanned_seq<T>(
+ &mut self,
+ bra: &TokenKind,
+ ket: &TokenKind,
+ sep: SeqSep,
+ f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
+ ) -> PResult<'a, (Vec<T>, bool)> {
+ self.expect(bra)?;
+ let (result, trailing, recovered) = self.parse_seq_to_before_end(ket, sep, f)?;
+ if !recovered {
+ self.eat(ket);
+ }
+ Ok((result, trailing))
+ }
+
+ fn parse_delim_comma_seq<T>(
+ &mut self,
+ delim: DelimToken,
+ f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
+ ) -> PResult<'a, (Vec<T>, bool)> {
+ self.parse_unspanned_seq(
+ &token::OpenDelim(delim),
+ &token::CloseDelim(delim),
+ SeqSep::trailing_allowed(token::Comma),
+ f,
+ )
+ }
+
+ fn parse_paren_comma_seq<T>(
+ &mut self,
+ f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
+ ) -> PResult<'a, (Vec<T>, bool)> {
+ self.parse_delim_comma_seq(token::Paren, f)
+ }
+
+ /// Advances the parser by one token.
+ pub fn bump(&mut self) {
+ if self.prev_token_kind == PrevTokenKind::Eof {
+ // Bumping after EOF is a bad sign, usually an infinite loop.
+ self.bug("attempted to bump the parser past EOF (may be stuck in a loop)");
+ }
+
+ self.prev_span = self.meta_var_span.take().unwrap_or(self.token.span);
+
+ // Record last token kind for possible error recovery.
+ self.prev_token_kind = match self.token.kind {
+ token::DocComment(..) => PrevTokenKind::DocComment,
+ token::Comma => PrevTokenKind::Comma,
+ token::BinOp(token::Plus) => PrevTokenKind::Plus,
+ token::BinOp(token::Or) => PrevTokenKind::BitOr,
+ token::Interpolated(..) => PrevTokenKind::Interpolated,
+ token::Eof => PrevTokenKind::Eof,
+ token::Ident(..) => PrevTokenKind::Ident,
+ _ => PrevTokenKind::Other,
+ };
+
+ self.token = self.next_tok();
+ self.expected_tokens.clear();
+ // Check after each token.
+ self.process_potential_macro_variable();
+ }
+
+ /// Advances the parser, using the provided token as the next one. Use this
+ /// when consuming part of a token, e.g. a single `<` from `<<`.
+ fn bump_with(&mut self, next: TokenKind, span: Span) {
+ self.prev_span = self.token.span.with_hi(span.lo());
+ // It would be incorrect to record the kind of the current token, but
+ // fortunately for tokens currently using `bump_with`, the
+ // `prev_token_kind` will be of no use anyway.
+ self.prev_token_kind = PrevTokenKind::Other;
+ self.token = Token::new(next, span);
+ self.expected_tokens.clear();
+ }
+
+ /// Looks ahead `dist` tokens past `self.token` and gives access to that token.
+ /// When `dist == 0`, the current token is looked at.
+ pub fn look_ahead<R>(&self, dist: usize, looker: impl FnOnce(&Token) -> R) -> R {
+ if dist == 0 {
+ return looker(&self.token);
+ }
+
+ let frame = &self.token_cursor.frame;
+ looker(&match frame.tree_cursor.look_ahead(dist - 1) {
+ Some(tree) => match tree {
+ TokenTree::Token(token) => token,
+ TokenTree::Delimited(dspan, delim, _) =>
+ Token::new(token::OpenDelim(delim), dspan.open),
+ }
+ None => Token::new(token::CloseDelim(frame.delim), frame.span.close)
+ })
+ }
+
+ /// Returns whether any of the given keywords are `dist` tokens ahead of the current one.
+ fn is_keyword_ahead(&self, dist: usize, kws: &[Symbol]) -> bool {
+ self.look_ahead(dist, |t| kws.iter().any(|&kw| t.is_keyword(kw)))
+ }
+
+ /// Parses asyncness: `async` or nothing.
+ fn parse_asyncness(&mut self) -> IsAsync {
+ if self.eat_keyword(kw::Async) {
+ IsAsync::Async {
+ closure_id: DUMMY_NODE_ID,
+ return_impl_trait_id: DUMMY_NODE_ID,
+ }
+ } else {
+ IsAsync::NotAsync
+ }
+ }
+
+ /// Parses unsafety: `unsafe` or nothing.
+ fn parse_unsafety(&mut self) -> Unsafety {
+ if self.eat_keyword(kw::Unsafe) {
+ Unsafety::Unsafe
+ } else {
+ Unsafety::Normal
+ }
+ }
+
+ /// Parses mutability (`mut` or nothing).
+ fn parse_mutability(&mut self) -> Mutability {
+ if self.eat_keyword(kw::Mut) {
+ Mutability::Mutable
+ } else {
+ Mutability::Immutable
+ }
+ }
+
+ /// Possibly parses mutability (`const` or `mut`).
+ fn parse_const_or_mut(&mut self) -> Option<Mutability> {
+ if self.eat_keyword(kw::Mut) {
+ Some(Mutability::Mutable)
+ } else if self.eat_keyword(kw::Const) {
+ Some(Mutability::Immutable)
+ } else {
+ None
+ }
+ }
+
+ fn parse_field_name(&mut self) -> PResult<'a, Ident> {
+ if let token::Literal(token::Lit { kind: token::Integer, symbol, suffix }) =
+ self.token.kind {
+ self.expect_no_suffix(self.token.span, "a tuple index", suffix);
+ self.bump();
+ Ok(Ident::new(symbol, self.prev_span))
+ } else {
+ self.parse_ident_common(false)
+ }
+ }
+
+ fn expect_delimited_token_tree(&mut self) -> PResult<'a, (MacDelimiter, TokenStream)> {
+ let delim = match self.token.kind {
+ token::OpenDelim(delim) => delim,
+ _ => {
+ let msg = "expected open delimiter";
+ let mut err = self.fatal(msg);
+ err.span_label(self.token.span, msg);
+ return Err(err)
+ }
+ };
+ let tts = match self.parse_token_tree() {
+ TokenTree::Delimited(_, _, tts) => tts,
+ _ => unreachable!(),
+ };
+ let delim = match delim {
+ token::Paren => MacDelimiter::Parenthesis,
+ token::Bracket => MacDelimiter::Bracket,
+ token::Brace => MacDelimiter::Brace,
+ token::NoDelim => self.bug("unexpected no delimiter"),
+ };
+ Ok((delim, tts.into()))
+ }
+
+ fn parse_or_use_outer_attributes(
+ &mut self,
+ already_parsed_attrs: Option<ThinVec<Attribute>>,
+ ) -> PResult<'a, ThinVec<Attribute>> {
+ if let Some(attrs) = already_parsed_attrs {
+ Ok(attrs)
+ } else {
+ self.parse_outer_attributes().map(|a| a.into())
+ }
+ }
+
+ pub fn process_potential_macro_variable(&mut self) {
+ self.token = match self.token.kind {
+ token::Dollar if self.token.span.from_expansion() &&
+ self.look_ahead(1, |t| t.is_ident()) => {
+ self.bump();
+ let name = match self.token.kind {
+ token::Ident(name, _) => name,
+ _ => unreachable!()
+ };
+ let span = self.prev_span.to(self.token.span);
+ self.diagnostic()
+ .struct_span_fatal(span, &format!("unknown macro variable `{}`", name))
+ .span_label(span, "unknown macro variable")
+ .emit();
+ self.bump();
+ return
+ }
+ token::Interpolated(ref nt) => {
+ self.meta_var_span = Some(self.token.span);
+ // Interpolated identifier and lifetime tokens are replaced with usual identifier
+ // and lifetime tokens, so the former are never encountered during normal parsing.
+ match **nt {
+ token::NtIdent(ident, is_raw) =>
+ Token::new(token::Ident(ident.name, is_raw), ident.span),
+ token::NtLifetime(ident) =>
+ Token::new(token::Lifetime(ident.name), ident.span),
+ _ => return,
+ }
+ }
+ _ => return,
+ };
+ }
+
+ /// Parses a single token tree from the input.
+ pub fn parse_token_tree(&mut self) -> TokenTree {
+ match self.token.kind {
+ token::OpenDelim(..) => {
+ let frame = mem::replace(&mut self.token_cursor.frame,
+ self.token_cursor.stack.pop().unwrap());
+ self.token.span = frame.span.entire();
+ self.bump();
+ TokenTree::Delimited(
+ frame.span,
+ frame.delim,
+ frame.tree_cursor.stream.into(),
+ )
+ },
+ token::CloseDelim(_) | token::Eof => unreachable!(),
+ _ => {
+ let token = self.token.take();
+ self.bump();
+ TokenTree::Token(token)
+ }
+ }
+ }
+
+ /// Parses a stream of tokens into a list of `TokenTree`s, up to EOF.
+ pub fn parse_all_token_trees(&mut self) -> PResult<'a, Vec<TokenTree>> {
+ let mut tts = Vec::new();
+ while self.token != token::Eof {
+ tts.push(self.parse_token_tree());
+ }
+ Ok(tts)
+ }
+
+ pub fn parse_tokens(&mut self) -> TokenStream {
+ let mut result = Vec::new();
+ loop {
+ match self.token.kind {
+ token::Eof | token::CloseDelim(..) => break,
+ _ => result.push(self.parse_token_tree().into()),
+ }
+ }
+ TokenStream::new(result)
+ }
+
+ /// Evaluates the closure with restrictions in place.
+ ///
+    /// After the closure is evaluated, restrictions are reset.
+ fn with_res<T>(&mut self, res: Restrictions, f: impl FnOnce(&mut Self) -> T) -> T {
+ let old = self.restrictions;
+ self.restrictions = res;
+ let res = f(self);
+ self.restrictions = old;
+ res
+ }
+
+ fn is_crate_vis(&self) -> bool {
+ self.token.is_keyword(kw::Crate) && self.look_ahead(1, |t| t != &token::ModSep)
+ }
+
+ /// Parses `pub`, `pub(crate)` and `pub(in path)` plus shortcuts `crate` for `pub(crate)`,
+ /// `pub(self)` for `pub(in self)` and `pub(super)` for `pub(in super)`.
+    /// If the following element can't be a tuple (i.e., it's a function definition), then
+    /// it's not a tuple struct field, and the contents within the parentheses aren't valid,
+    /// so emit a proper diagnostic.
+ pub fn parse_visibility(&mut self, can_take_tuple: bool) -> PResult<'a, Visibility> {
+ maybe_whole!(self, NtVis, |x| x);
+
+ self.expected_tokens.push(TokenType::Keyword(kw::Crate));
+ if self.is_crate_vis() {
+ self.bump(); // `crate`
+ self.sess.gated_spans.gate(sym::crate_visibility_modifier, self.prev_span);
+ return Ok(respan(self.prev_span, VisibilityKind::Crate(CrateSugar::JustCrate)));
+ }
+
+ if !self.eat_keyword(kw::Pub) {
+ // We need a span for our `Spanned<VisibilityKind>`, but there's inherently no
+ // keyword to grab a span from for inherited visibility; an empty span at the
+ // beginning of the current token would seem to be the "Schelling span".
+ return Ok(respan(self.token.span.shrink_to_lo(), VisibilityKind::Inherited))
+ }
+ let lo = self.prev_span;
+
+ if self.check(&token::OpenDelim(token::Paren)) {
+ // We don't `self.bump()` the `(` yet because this might be a struct definition where
+ // `()` or a tuple might be allowed. For example, `struct Struct(pub (), pub (usize));`.
+ // Because of this, we only `bump` the `(` if we're assured it is appropriate to do so
+ // by the following tokens.
+ if self.is_keyword_ahead(1, &[kw::Crate])
+ && self.look_ahead(2, |t| t != &token::ModSep) // account for `pub(crate::foo)`
+ {
+ // Parse `pub(crate)`.
+ self.bump(); // `(`
+ self.bump(); // `crate`
+ self.expect(&token::CloseDelim(token::Paren))?; // `)`
+ let vis = VisibilityKind::Crate(CrateSugar::PubCrate);
+ return Ok(respan(lo.to(self.prev_span), vis));
+ } else if self.is_keyword_ahead(1, &[kw::In]) {
+ // Parse `pub(in path)`.
+ self.bump(); // `(`
+ self.bump(); // `in`
+ let path = self.parse_path(PathStyle::Mod)?; // `path`
+ self.expect(&token::CloseDelim(token::Paren))?; // `)`
+ let vis = VisibilityKind::Restricted {
+ path: P(path),
+ id: ast::DUMMY_NODE_ID,
+ };
+ return Ok(respan(lo.to(self.prev_span), vis));
+ } else if self.look_ahead(2, |t| t == &token::CloseDelim(token::Paren))
+ && self.is_keyword_ahead(1, &[kw::Super, kw::SelfLower])
+ {
+ // Parse `pub(self)` or `pub(super)`.
+ self.bump(); // `(`
+ let path = self.parse_path(PathStyle::Mod)?; // `super`/`self`
+ self.expect(&token::CloseDelim(token::Paren))?; // `)`
+ let vis = VisibilityKind::Restricted {
+ path: P(path),
+ id: ast::DUMMY_NODE_ID,
+ };
+ return Ok(respan(lo.to(self.prev_span), vis));
+ } else if !can_take_tuple { // Provide this diagnostic if this is not a tuple struct.
+ self.recover_incorrect_vis_restriction()?;
+ // Emit diagnostic, but continue with public visibility.
+ }
+ }
+
+ Ok(respan(lo, VisibilityKind::Public))
+ }
+
+ /// Recovery for e.g. `pub(something) fn ...` or `struct X { pub(something) y: Z }`
+ fn recover_incorrect_vis_restriction(&mut self) -> PResult<'a, ()> {
+ self.bump(); // `(`
+ let path = self.parse_path(PathStyle::Mod)?;
+ self.expect(&token::CloseDelim(token::Paren))?; // `)`
+
+ let msg = "incorrect visibility restriction";
+ let suggestion = r##"some possible visibility restrictions are:
+`pub(crate)`: visible only on the current crate
+`pub(super)`: visible only in the current module's parent
+`pub(in path::to::module)`: visible only on the specified path"##;
+
+ let path_str = pprust::path_to_string(&path);
+
+ struct_span_err!(self.sess.span_diagnostic, path.span, E0704, "{}", msg)
+ .help(suggestion)
+ .span_suggestion(
+ path.span,
+ &format!("make this visible only to module `{}` with `in`", path_str),
+ format!("in {}", path_str),
+ Applicability::MachineApplicable,
+ )
+ .emit();
+
+ Ok(())
+ }
+
+ /// Parses `extern string_literal?`.
+ /// If `extern` is not found, the Rust ABI is used.
+ /// If `extern` is found and a `string_literal` does not follow, the C ABI is used.
+ fn parse_extern_abi(&mut self) -> PResult<'a, Abi> {
+ Ok(if self.eat_keyword(kw::Extern) {
+ self.parse_opt_abi()?
+ } else {
+ Abi::default()
+ })
+ }
+
+ /// Parses a string literal as an ABI spec.
+ /// If one is not found, the "C" ABI is used.
+ fn parse_opt_abi(&mut self) -> PResult<'a, Abi> {
+ let span = if self.token.can_begin_literal_or_bool() {
+ let ast::Lit { span, kind, .. } = self.parse_lit()?;
+ match kind {
+ ast::LitKind::Str(symbol, _) => return Ok(Abi::new(symbol, span)),
+ ast::LitKind::Err(_) => {}
+ _ => {
+ self.struct_span_err(span, "non-string ABI literal")
+ .span_suggestion(
+ span,
+ "specify the ABI with a string literal",
+ "\"C\"".to_string(),
+ Applicability::MaybeIncorrect,
+ )
+ .emit();
+ }
+ }
+ span
+ } else {
+ self.prev_span
+ };
+ Ok(Abi::new(sym::C, span))
+ }
+
+ /// We are parsing `async fn`. If we are on Rust 2015, emit an error.
+ fn ban_async_in_2015(&self, async_span: Span) {
+ if async_span.rust_2015() {
+ self.diagnostic()
+ .struct_span_err_with_code(
+ async_span,
+ "`async fn` is not permitted in the 2015 edition",
+ DiagnosticId::Error("E0670".into())
+ )
+ .emit();
+ }
+ }
+
+ fn collect_tokens<R>(
+ &mut self,
+ f: impl FnOnce(&mut Self) -> PResult<'a, R>,
+ ) -> PResult<'a, (R, TokenStream)> {
+ // Record all tokens we parse when parsing this item.
+ let mut tokens = Vec::new();
+ let prev_collecting = match self.token_cursor.frame.last_token {
+ LastToken::Collecting(ref mut list) => {
+ Some(mem::take(list))
+ }
+ LastToken::Was(ref mut last) => {
+ tokens.extend(last.take());
+ None
+ }
+ };
+ self.token_cursor.frame.last_token = LastToken::Collecting(tokens);
+ let prev = self.token_cursor.stack.len();
+ let ret = f(self);
+ let last_token = if self.token_cursor.stack.len() == prev {
+ &mut self.token_cursor.frame.last_token
+ } else if self.token_cursor.stack.get(prev).is_none() {
+ // This can happen due to a bad interaction of two unrelated recovery mechanisms with
+ // mismatched delimiters *and* recovery lookahead on the likely typo `pub ident(`
+ // (#62881).
+ return Ok((ret?, TokenStream::default()));
+ } else {
+ &mut self.token_cursor.stack[prev].last_token
+ };
+
+ // Pull out the tokens that we've collected from the call to `f` above.
+ let mut collected_tokens = match *last_token {
+ LastToken::Collecting(ref mut v) => mem::take(v),
+ LastToken::Was(ref was) => {
+ let msg = format!("our vector went away? - found Was({:?})", was);
+ debug!("collect_tokens: {}", msg);
+ self.sess.span_diagnostic.delay_span_bug(self.token.span, &msg);
+ // This can happen due to a bad interaction of two unrelated recovery mechanisms
+ // with mismatched delimiters *and* recovery lookahead on the likely typo
+ // `pub ident(` (#62895, different but similar to the case above).
+ return Ok((ret?, TokenStream::default()));
+ }
+ };
+
+        // If we're not at EOF, our current token wasn't actually consumed by
+        // `f`, but it'll still be in the list that we pulled out. In that case,
+        // put it back.
+ let extra_token = if self.token != token::Eof {
+ collected_tokens.pop()
+ } else {
+ None
+ };
+
+ // If we were previously collecting tokens, then this was a recursive
+ // call. In that case we need to record all the tokens we collected in
+ // our parent list as well. To do that we push a clone of our stream
+ // onto the previous list.
+ match prev_collecting {
+ Some(mut list) => {
+ list.extend(collected_tokens.iter().cloned());
+ list.extend(extra_token);
+ *last_token = LastToken::Collecting(list);
+ }
+ None => {
+ *last_token = LastToken::Was(extra_token);
+ }
+ }
+
+ Ok((ret?, TokenStream::new(collected_tokens)))
+ }
+
+ /// `::{` or `::*`
+ fn is_import_coupler(&mut self) -> bool {
+ self.check(&token::ModSep) &&
+ self.look_ahead(1, |t| *t == token::OpenDelim(token::Brace) ||
+ *t == token::BinOp(token::Star))
+ }
+
+ fn parse_optional_str(&mut self) -> Option<(Symbol, ast::StrStyle, Option<ast::Name>)> {
+ let ret = match self.token.kind {
+ token::Literal(token::Lit { kind: token::Str, symbol, suffix }) =>
+ (symbol, ast::StrStyle::Cooked, suffix),
+ token::Literal(token::Lit { kind: token::StrRaw(n), symbol, suffix }) =>
+ (symbol, ast::StrStyle::Raw(n), suffix),
+ _ => return None
+ };
+ self.bump();
+ Some(ret)
+ }
+
+ pub fn parse_str(&mut self) -> PResult<'a, (Symbol, StrStyle)> {
+ match self.parse_optional_str() {
+ Some((s, style, suf)) => {
+ let sp = self.prev_span;
+ self.expect_no_suffix(sp, "a string literal", suf);
+ Ok((s, style))
+ }
+ _ => {
+ let msg = "expected string literal";
+ let mut err = self.fatal(msg);
+ err.span_label(self.token.span, msg);
+ Err(err)
+ }
+ }
+ }
+}
+
+crate fn make_unclosed_delims_error(
+ unmatched: UnmatchedBrace,
+ sess: &ParseSess,
+) -> Option<DiagnosticBuilder<'_>> {
+    // `None` here means an `Eof` was found. We already emit those errors elsewhere; we add them
+    // to `unmatched_braces` only for error recovery in the `Parser`.
+ let found_delim = unmatched.found_delim?;
+ let mut err = sess.span_diagnostic.struct_span_err(unmatched.found_span, &format!(
+ "incorrect close delimiter: `{}`",
+ pprust::token_kind_to_string(&token::CloseDelim(found_delim)),
+ ));
+ err.span_label(unmatched.found_span, "incorrect close delimiter");
+ if let Some(sp) = unmatched.candidate_span {
+ err.span_label(sp, "close delimiter possibly meant for this");
+ }
+ if let Some(sp) = unmatched.unclosed_span {
+ err.span_label(sp, "un-closed delimiter");
+ }
+ Some(err)
+}
+
+pub fn emit_unclosed_delims(unclosed_delims: &mut Vec<UnmatchedBrace>, sess: &ParseSess) {
+ *sess.reached_eof.borrow_mut() |= unclosed_delims.iter()
+ .any(|unmatched_delim| unmatched_delim.found_delim.is_none());
+ for unmatched in unclosed_delims.drain(..) {
+ make_unclosed_delims_error(unmatched, sess).map(|mut e| e.emit());
+ }
+}
-use super::{Parser, PResult};
+use super::Parser;
use super::item::ItemInfo;
use super::diagnostics::Error;
use crate::attr;
use crate::ast::{self, Ident, Attribute, ItemKind, Mod, Crate};
use crate::parse::{new_sub_parser_from_file, DirectoryOwnership};
-use crate::parse::token::{self, TokenKind};
+use crate::token::{self, TokenKind};
use crate::source_map::{SourceMap, Span, DUMMY_SP, FileName};
use crate::symbol::sym;
+use errors::PResult;
+
use std::path::{self, Path, PathBuf};
/// Information about the path to a module.
-use super::{Parser, PResult, PathStyle};
+use super::{Parser, PathStyle};
use crate::{maybe_recover_from_interpolated_ty_qpath, maybe_whole};
use crate::ptr::P;
use crate::ast::{self, Attribute, Pat, PatKind, FieldPat, RangeEnd, RangeSyntax, Mac};
use crate::ast::{BindingMode, Ident, Mutability, Path, QSelf, Expr, ExprKind};
use crate::mut_visit::{noop_visit_pat, noop_visit_mac, MutVisitor};
-use crate::parse::token::{self};
+use crate::token;
use crate::print::pprust;
use crate::source_map::{respan, Span, Spanned};
-use crate::symbol::kw;
use crate::ThinVec;
-
-use errors::{Applicability, DiagnosticBuilder};
+use syntax_pos::symbol::{kw, sym};
+use errors::{PResult, Applicability, DiagnosticBuilder};
type Expected = Option<&'static str>;
// and no other gated or-pattern has been parsed thus far,
// then we should really gate the leading `|`.
// This complicated procedure is done purely for diagnostics UX.
- if gated_leading_vert {
- let mut or_pattern_spans = self.sess.gated_spans.or_patterns.borrow_mut();
- if or_pattern_spans.is_empty() {
- or_pattern_spans.push(leading_vert_span);
- }
+ if gated_leading_vert && self.sess.gated_spans.is_ungated(sym::or_patterns) {
+ self.sess.gated_spans.gate(sym::or_patterns, leading_vert_span);
}
Ok(pat)
// Feature gate the or-pattern if instructed:
if gate_or == GateOr::Yes {
- self.sess.gated_spans.or_patterns.borrow_mut().push(or_pattern_span);
+ self.sess.gated_spans.gate(sym::or_patterns, or_pattern_span);
}
Ok(self.mk_pat(or_pattern_span, PatKind::Or(pats)))
} else if self.eat_keyword(kw::Box) {
// Parse `box pat`
let pat = self.parse_pat_with_range_pat(false, None)?;
- self.sess.gated_spans.box_patterns.borrow_mut().push(lo.to(self.prev_span));
+ self.sess.gated_spans.gate(sym::box_patterns, lo.to(self.prev_span));
PatKind::Box(pat)
} else if self.can_be_ident_pat() {
// Parse `ident @ pat`
}
fn excluded_range_end(&self, span: Span) -> RangeEnd {
- self.sess.gated_spans.exclusive_range_pattern.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::exclusive_range_pattern, span);
RangeEnd::Excluded
}
-use super::{Parser, PResult, TokenType};
+use super::{Parser, TokenType};
use crate::{maybe_whole, ThinVec};
use crate::ast::{self, QSelf, Path, PathSegment, Ident, ParenthesizedArgs, AngleBracketedArgs};
use crate::ast::{AnonConst, GenericArg, AssocTyConstraint, AssocTyConstraintKind, BlockCheckMode};
-use crate::parse::token::{self, Token};
+use crate::token::{self, Token};
use crate::source_map::{Span, BytePos};
-use crate::symbol::kw;
+use syntax_pos::symbol::{kw, sym};
use std::mem;
use log::debug;
-use errors::{Applicability, pluralize};
+use errors::{PResult, Applicability, pluralize};
/// Specifies how to parse a path.
#[derive(Copy, Clone, PartialEq)]
// Gate associated type bounds, e.g., `Iterator<Item: Ord>`.
if let AssocTyConstraintKind::Bound { .. } = kind {
- self.sess.gated_spans.associated_type_bounds.borrow_mut().push(span);
+ self.sess.gated_spans.gate(sym::associated_type_bounds, span);
}
constraints.push(AssocTyConstraint {
-use super::{Parser, PResult, Restrictions, PrevTokenKind, SemiColonMode, BlockMode};
+use super::{Parser, Restrictions, PrevTokenKind, SemiColonMode, BlockMode};
use super::expr::LhsExpr;
use super::path::PathStyle;
use super::pat::GateOr;
use crate::{maybe_whole, ThinVec};
use crate::ast::{self, DUMMY_NODE_ID, Stmt, StmtKind, Local, Block, BlockCheckMode, Expr, ExprKind};
use crate::ast::{Attribute, AttrStyle, VisibilityKind, MacStmtStyle, Mac, MacDelimiter};
-use crate::parse::{classify, DirectoryOwnership};
-use crate::parse::token;
+use crate::parse::DirectoryOwnership;
+use crate::util::classify;
+use crate::token;
use crate::source_map::{respan, Span};
use crate::symbol::{kw, sym};
use std::mem;
-use errors::Applicability;
+use errors::{PResult, Applicability};
impl<'a> Parser<'a> {
/// Parses a statement. This stops just before trailing semicolons on everything but items.
-use super::{Parser, PResult, PathStyle, PrevTokenKind, TokenType};
+use super::{Parser, PathStyle, PrevTokenKind, TokenType};
use super::item::ParamCfg;
use crate::{maybe_whole, maybe_recover_from_interpolated_ty_qpath};
use crate::ast::{self, Ty, TyKind, MutTy, BareFnTy, FunctionRetTy, GenericParam, Lifetime, Ident};
use crate::ast::{TraitBoundModifier, TraitObjectSyntax, GenericBound, GenericBounds, PolyTraitRef};
use crate::ast::{Mutability, AnonConst, Mac};
-use crate::parse::token::{self, Token};
+use crate::token::{self, Token};
use crate::source_map::Span;
use crate::symbol::{kw};
-use errors::{Applicability, pluralize};
+use errors::{PResult, Applicability, pluralize};
/// Returns `true` if `IDENT t` can start a type -- `IDENT::a::b`, `IDENT<u8, u8>`,
/// `IDENT<<u8 as Trait>::AssocTy>`.
use crate::ast::{self, Name, PatKind};
use crate::attr::first_attr_value_str_by_name;
-use crate::parse::{ParseSess, PResult};
-use crate::parse::new_parser_from_source_str;
-use crate::parse::token::Token;
+use crate::sess::ParseSess;
+use crate::parse::{PResult, new_parser_from_source_str};
+use crate::token::Token;
use crate::print::pprust::item_to_string;
use crate::ptr::P;
use crate::source_map::FilePathMapping;
let source = "/// doc comment\r\n/// line 2\r\nfn foo() {}".to_string();
let item = parse_item_from_source_str(name_2, source, &sess)
.unwrap().unwrap();
- let docs = item.attrs.iter().filter(|a| a.path == sym::doc)
+ let docs = item.attrs.iter().filter(|a| a.has_name(sym::doc))
.map(|a| a.value_str().unwrap().to_string()).collect::<Vec<_>>();
let b: &[_] = &["/// doc comment".to_string(), "/// line 2".to_string()];
assert_eq!(&docs[..], b);
+++ /dev/null
-pub use BinOpToken::*;
-pub use Nonterminal::*;
-pub use DelimToken::*;
-pub use LitKind::*;
-pub use TokenKind::*;
-
-use crate::ast;
-use crate::ptr::P;
-use crate::symbol::kw;
-use crate::tokenstream::TokenTree;
-
-use syntax_pos::symbol::Symbol;
-use syntax_pos::{self, Span, DUMMY_SP};
-
-use std::fmt;
-use std::mem;
-#[cfg(target_arch = "x86_64")]
-use rustc_data_structures::static_assert_size;
-use rustc_data_structures::sync::Lrc;
-
-#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]
-pub enum BinOpToken {
- Plus,
- Minus,
- Star,
- Slash,
- Percent,
- Caret,
- And,
- Or,
- Shl,
- Shr,
-}
-
-/// A delimiter token.
-#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]
-pub enum DelimToken {
- /// A round parenthesis (i.e., `(` or `)`).
- Paren,
- /// A square bracket (i.e., `[` or `]`).
- Bracket,
- /// A curly brace (i.e., `{` or `}`).
- Brace,
- /// An empty delimiter.
- NoDelim,
-}
-
-impl DelimToken {
- pub fn len(self) -> usize {
- if self == NoDelim { 0 } else { 1 }
- }
-
- pub fn is_empty(self) -> bool {
- self == NoDelim
- }
-}
-
-#[derive(Clone, Copy, PartialEq, RustcEncodable, RustcDecodable, Debug)]
-pub enum LitKind {
- Bool, // AST only, must never appear in a `Token`
- Byte,
- Char,
- Integer,
- Float,
- Str,
- StrRaw(u16), // raw string delimited by `n` hash symbols
- ByteStr,
- ByteStrRaw(u16), // raw byte string delimited by `n` hash symbols
- Err,
-}
-
-/// A literal token.
-#[derive(Clone, Copy, PartialEq, RustcEncodable, RustcDecodable, Debug)]
-pub struct Lit {
- pub kind: LitKind,
- pub symbol: Symbol,
- pub suffix: Option<Symbol>,
-}
-
-impl fmt::Display for Lit {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- let Lit { kind, symbol, suffix } = *self;
- match kind {
- Byte => write!(f, "b'{}'", symbol)?,
- Char => write!(f, "'{}'", symbol)?,
- Str => write!(f, "\"{}\"", symbol)?,
- StrRaw(n) => write!(f, "r{delim}\"{string}\"{delim}",
- delim="#".repeat(n as usize),
- string=symbol)?,
- ByteStr => write!(f, "b\"{}\"", symbol)?,
- ByteStrRaw(n) => write!(f, "br{delim}\"{string}\"{delim}",
- delim="#".repeat(n as usize),
- string=symbol)?,
- Integer |
- Float |
- Bool |
- Err => write!(f, "{}", symbol)?,
- }
-
- if let Some(suffix) = suffix {
- write!(f, "{}", suffix)?;
- }
-
- Ok(())
- }
-}
-
-impl LitKind {
- /// An English article for the literal token kind.
- crate fn article(self) -> &'static str {
- match self {
- Integer | Err => "an",
- _ => "a",
- }
- }
-
- crate fn descr(self) -> &'static str {
- match self {
- Bool => panic!("literal token contains `Lit::Bool`"),
- Byte => "byte",
- Char => "char",
- Integer => "integer",
- Float => "float",
- Str | StrRaw(..) => "string",
- ByteStr | ByteStrRaw(..) => "byte string",
- Err => "error",
- }
- }
-
- crate fn may_have_suffix(self) -> bool {
- match self {
- Integer | Float | Err => true,
- _ => false,
- }
- }
-}
-
-impl Lit {
- pub fn new(kind: LitKind, symbol: Symbol, suffix: Option<Symbol>) -> Lit {
- Lit { kind, symbol, suffix }
- }
-}
-
-pub(crate) fn ident_can_begin_expr(name: ast::Name, span: Span, is_raw: bool) -> bool {
- let ident_token = Token::new(Ident(name, is_raw), span);
- token_can_begin_expr(&ident_token)
-}
-
-pub(crate) fn token_can_begin_expr(ident_token: &Token) -> bool {
- !ident_token.is_reserved_ident() ||
- ident_token.is_path_segment_keyword() ||
- match ident_token.kind {
- TokenKind::Ident(ident, _) => [
- kw::Async,
- kw::Do,
- kw::Box,
- kw::Break,
- kw::Continue,
- kw::False,
- kw::For,
- kw::If,
- kw::Let,
- kw::Loop,
- kw::Match,
- kw::Move,
- kw::Return,
- kw::True,
- kw::Unsafe,
- kw::While,
- kw::Yield,
- kw::Static,
- ].contains(&ident),
- _=> false,
- }
-}
-
-fn ident_can_begin_type(name: ast::Name, span: Span, is_raw: bool) -> bool {
- let ident_token = Token::new(Ident(name, is_raw), span);
-
- !ident_token.is_reserved_ident() ||
- ident_token.is_path_segment_keyword() ||
- [
- kw::Underscore,
- kw::For,
- kw::Impl,
- kw::Fn,
- kw::Unsafe,
- kw::Extern,
- kw::Typeof,
- kw::Dyn,
- ].contains(&name)
-}
-
-#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Debug)]
-pub enum TokenKind {
- /* Expression-operator symbols. */
- Eq,
- Lt,
- Le,
- EqEq,
- Ne,
- Ge,
- Gt,
- AndAnd,
- OrOr,
- Not,
- Tilde,
- BinOp(BinOpToken),
- BinOpEq(BinOpToken),
-
- /* Structural symbols */
- At,
- Dot,
- DotDot,
- DotDotDot,
- DotDotEq,
- Comma,
- Semi,
- Colon,
- ModSep,
- RArrow,
- LArrow,
- FatArrow,
- Pound,
- Dollar,
- Question,
- /// Used by proc macros for representing lifetimes, not generated by lexer right now.
- SingleQuote,
- /// An opening delimiter (e.g., `{`).
- OpenDelim(DelimToken),
- /// A closing delimiter (e.g., `}`).
- CloseDelim(DelimToken),
-
- /* Literals */
- Literal(Lit),
-
- /* Name components */
- Ident(ast::Name, /* is_raw */ bool),
- Lifetime(ast::Name),
-
- Interpolated(Lrc<Nonterminal>),
-
- // Can be expanded into several tokens.
- /// A doc comment.
- DocComment(ast::Name),
-
- // Junk. These carry no data because we don't really care about the data
- // they *would* carry, and don't really want to allocate a new ident for
- // them. Instead, users could extract that from the associated span.
-
- /// Whitespace.
- Whitespace,
- /// A comment.
- Comment,
- Shebang(ast::Name),
- /// A completely invalid token which should be skipped.
- Unknown(ast::Name),
-
- Eof,
-}
-
-// `TokenKind` is used a lot. Make sure it doesn't unintentionally get bigger.
-#[cfg(target_arch = "x86_64")]
-static_assert_size!(TokenKind, 16);
-
-#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Debug)]
-pub struct Token {
- pub kind: TokenKind,
- pub span: Span,
-}
-
-impl TokenKind {
- pub fn lit(kind: LitKind, symbol: Symbol, suffix: Option<Symbol>) -> TokenKind {
- Literal(Lit::new(kind, symbol, suffix))
- }
-
- /// Returns tokens that are likely to be typed accidentally instead of the current token.
- /// Enables better error recovery when the wrong token is found.
- crate fn similar_tokens(&self) -> Option<Vec<TokenKind>> {
- match *self {
- Comma => Some(vec![Dot, Lt, Semi]),
- Semi => Some(vec![Colon, Comma]),
- _ => None
- }
- }
-}
-
-impl Token {
- pub fn new(kind: TokenKind, span: Span) -> Self {
- Token { kind, span }
- }
-
- /// Some token that will be thrown away later.
- crate fn dummy() -> Self {
- Token::new(TokenKind::Whitespace, DUMMY_SP)
- }
-
- /// Recovers a `Token` from an `ast::Ident`. This creates a raw identifier if necessary.
- pub fn from_ast_ident(ident: ast::Ident) -> Self {
- Token::new(Ident(ident.name, ident.is_raw_guess()), ident.span)
- }
-
- /// Return this token by value and leave a dummy token in its place.
- pub fn take(&mut self) -> Self {
- mem::replace(self, Token::dummy())
- }
-
- crate fn is_op(&self) -> bool {
- match self.kind {
- OpenDelim(..) | CloseDelim(..) | Literal(..) | DocComment(..) |
- Ident(..) | Lifetime(..) | Interpolated(..) |
- Whitespace | Comment | Shebang(..) | Eof => false,
- _ => true,
- }
- }
-
- crate fn is_like_plus(&self) -> bool {
- match self.kind {
- BinOp(Plus) | BinOpEq(Plus) => true,
- _ => false,
- }
- }
-
- /// Returns `true` if the token can appear at the start of an expression.
- pub fn can_begin_expr(&self) -> bool {
- match self.kind {
- Ident(name, is_raw) =>
- ident_can_begin_expr(name, self.span, is_raw), // value name or keyword
- OpenDelim(..) | // tuple, array or block
- Literal(..) | // literal
- Not | // operator not
- BinOp(Minus) | // unary minus
- BinOp(Star) | // dereference
- BinOp(Or) | OrOr | // closure
- BinOp(And) | // reference
- AndAnd | // double reference
- // DotDotDot is no longer supported, but we need some way to display the error
- DotDot | DotDotDot | DotDotEq | // range notation
- Lt | BinOp(Shl) | // associated path
- ModSep | // global path
- Lifetime(..) | // labeled loop
- Pound => true, // expression attributes
- Interpolated(ref nt) => match **nt {
- NtLiteral(..) |
- NtIdent(..) |
- NtExpr(..) |
- NtBlock(..) |
- NtPath(..) |
- NtLifetime(..) => true,
- _ => false,
- },
- _ => false,
- }
- }
-
- /// Returns `true` if the token can appear at the start of a type.
- pub fn can_begin_type(&self) -> bool {
- match self.kind {
- Ident(name, is_raw) =>
- ident_can_begin_type(name, self.span, is_raw), // type name or keyword
- OpenDelim(Paren) | // tuple
- OpenDelim(Bracket) | // array
- Not | // never
- BinOp(Star) | // raw pointer
- BinOp(And) | // reference
- AndAnd | // double reference
- Question | // maybe bound in trait object
- Lifetime(..) | // lifetime bound in trait object
- Lt | BinOp(Shl) | // associated path
- ModSep => true, // global path
- Interpolated(ref nt) => match **nt {
- NtIdent(..) | NtTy(..) | NtPath(..) | NtLifetime(..) => true,
- _ => false,
- },
- _ => false,
- }
- }
-
- /// Returns `true` if the token can appear at the start of a const param.
- crate fn can_begin_const_arg(&self) -> bool {
- match self.kind {
- OpenDelim(Brace) => true,
- Interpolated(ref nt) => match **nt {
- NtExpr(..) => true,
- NtBlock(..) => true,
- NtLiteral(..) => true,
- _ => false,
- }
- _ => self.can_begin_literal_or_bool(),
- }
- }
-
- /// Returns `true` if the token can appear at the start of a generic bound.
- crate fn can_begin_bound(&self) -> bool {
- self.is_path_start() || self.is_lifetime() || self.is_keyword(kw::For) ||
- self == &Question || self == &OpenDelim(Paren)
- }
-
- /// Returns `true` if the token is any literal
- pub fn is_lit(&self) -> bool {
- match self.kind {
- Literal(..) => true,
- _ => false,
- }
- }
-
- crate fn expect_lit(&self) -> Lit {
- match self.kind {
- Literal(lit) => lit,
- _ => panic!("`expect_lit` called on non-literal"),
- }
- }
-
- /// Returns `true` if the token is any literal, a minus (which can prefix a literal,
- /// for example a '-42', or one of the boolean idents).
- pub fn can_begin_literal_or_bool(&self) -> bool {
- match self.kind {
- Literal(..) | BinOp(Minus) => true,
- Ident(name, false) if name.is_bool_lit() => true,
- Interpolated(ref nt) => match **nt {
- NtLiteral(..) => true,
- _ => false,
- },
- _ => false,
- }
- }
-
- /// Returns an identifier if this token is an identifier.
- pub fn ident(&self) -> Option<(ast::Ident, /* is_raw */ bool)> {
- match self.kind {
- Ident(name, is_raw) => Some((ast::Ident::new(name, self.span), is_raw)),
- Interpolated(ref nt) => match **nt {
- NtIdent(ident, is_raw) => Some((ident, is_raw)),
- _ => None,
- },
- _ => None,
- }
- }
-
- /// Returns a lifetime identifier if this token is a lifetime.
- pub fn lifetime(&self) -> Option<ast::Ident> {
- match self.kind {
- Lifetime(name) => Some(ast::Ident::new(name, self.span)),
- Interpolated(ref nt) => match **nt {
- NtLifetime(ident) => Some(ident),
- _ => None,
- },
- _ => None,
- }
- }
-
- /// Returns `true` if the token is an identifier.
- pub fn is_ident(&self) -> bool {
- self.ident().is_some()
- }
-
- /// Returns `true` if the token is a lifetime.
- crate fn is_lifetime(&self) -> bool {
- self.lifetime().is_some()
- }
-
- /// Returns `true` if the token is a identifier whose name is the given
- /// string slice.
- crate fn is_ident_named(&self, name: Symbol) -> bool {
- self.ident().map_or(false, |(ident, _)| ident.name == name)
- }
-
- /// Returns `true` if the token is an interpolated path.
- fn is_path(&self) -> bool {
- if let Interpolated(ref nt) = self.kind {
- if let NtPath(..) = **nt {
- return true;
- }
- }
- false
- }
-
- /// Would `maybe_whole_expr` in `parser.rs` return `Ok(..)`?
- /// That is, is this a pre-parsed expression dropped into the token stream
- /// (which happens while parsing the result of macro expansion)?
- crate fn is_whole_expr(&self) -> bool {
- if let Interpolated(ref nt) = self.kind {
- if let NtExpr(_) | NtLiteral(_) | NtPath(_) | NtIdent(..) | NtBlock(_) = **nt {
- return true;
- }
- }
-
- false
- }
-
- /// Returns `true` if the token is either the `mut` or `const` keyword.
- crate fn is_mutability(&self) -> bool {
- self.is_keyword(kw::Mut) ||
- self.is_keyword(kw::Const)
- }
-
- crate fn is_qpath_start(&self) -> bool {
- self == &Lt || self == &BinOp(Shl)
- }
-
- crate fn is_path_start(&self) -> bool {
- self == &ModSep || self.is_qpath_start() || self.is_path() ||
- self.is_path_segment_keyword() || self.is_ident() && !self.is_reserved_ident()
- }
-
- /// Returns `true` if the token is a given keyword, `kw`.
- pub fn is_keyword(&self, kw: Symbol) -> bool {
- self.is_non_raw_ident_where(|id| id.name == kw)
- }
-
- crate fn is_path_segment_keyword(&self) -> bool {
- self.is_non_raw_ident_where(ast::Ident::is_path_segment_keyword)
- }
-
- // Returns true for reserved identifiers used internally for elided lifetimes,
- // unnamed method parameters, crate root module, error recovery etc.
- crate fn is_special_ident(&self) -> bool {
- self.is_non_raw_ident_where(ast::Ident::is_special)
- }
-
- /// Returns `true` if the token is a keyword used in the language.
- crate fn is_used_keyword(&self) -> bool {
- self.is_non_raw_ident_where(ast::Ident::is_used_keyword)
- }
-
- /// Returns `true` if the token is a keyword reserved for possible future use.
- crate fn is_unused_keyword(&self) -> bool {
- self.is_non_raw_ident_where(ast::Ident::is_unused_keyword)
- }
-
- /// Returns `true` if the token is either a special identifier or a keyword.
- pub fn is_reserved_ident(&self) -> bool {
- self.is_non_raw_ident_where(ast::Ident::is_reserved)
- }
-
- /// Returns `true` if the token is the identifier `true` or `false`.
- crate fn is_bool_lit(&self) -> bool {
- self.is_non_raw_ident_where(|id| id.name.is_bool_lit())
- }
-
- /// Returns `true` if the token is a non-raw identifier for which `pred` holds.
- fn is_non_raw_ident_where(&self, pred: impl FnOnce(ast::Ident) -> bool) -> bool {
- match self.ident() {
- Some((id, false)) => pred(id),
- _ => false,
- }
- }
-
- crate fn glue(&self, joint: &Token) -> Option<Token> {
- let kind = match self.kind {
- Eq => match joint.kind {
- Eq => EqEq,
- Gt => FatArrow,
- _ => return None,
- },
- Lt => match joint.kind {
- Eq => Le,
- Lt => BinOp(Shl),
- Le => BinOpEq(Shl),
- BinOp(Minus) => LArrow,
- _ => return None,
- },
- Gt => match joint.kind {
- Eq => Ge,
- Gt => BinOp(Shr),
- Ge => BinOpEq(Shr),
- _ => return None,
- },
- Not => match joint.kind {
- Eq => Ne,
- _ => return None,
- },
- BinOp(op) => match joint.kind {
- Eq => BinOpEq(op),
- BinOp(And) if op == And => AndAnd,
- BinOp(Or) if op == Or => OrOr,
- Gt if op == Minus => RArrow,
- _ => return None,
- },
- Dot => match joint.kind {
- Dot => DotDot,
- DotDot => DotDotDot,
- _ => return None,
- },
- DotDot => match joint.kind {
- Dot => DotDotDot,
- Eq => DotDotEq,
- _ => return None,
- },
- Colon => match joint.kind {
- Colon => ModSep,
- _ => return None,
- },
- SingleQuote => match joint.kind {
- Ident(name, false) => Lifetime(Symbol::intern(&format!("'{}", name))),
- _ => return None,
- },
-
- Le | EqEq | Ne | Ge | AndAnd | OrOr | Tilde | BinOpEq(..) | At | DotDotDot |
- DotDotEq | Comma | Semi | ModSep | RArrow | LArrow | FatArrow | Pound | Dollar |
- Question | OpenDelim(..) | CloseDelim(..) |
- Literal(..) | Ident(..) | Lifetime(..) | Interpolated(..) | DocComment(..) |
- Whitespace | Comment | Shebang(..) | Unknown(..) | Eof => return None,
- };
-
- Some(Token::new(kind, self.span.to(joint.span)))
- }
-
- // See comments in `Nonterminal::to_tokenstream` for why we care about
- // *probably* equal here rather than actual equality
- crate fn probably_equal_for_proc_macro(&self, other: &Token) -> bool {
- if mem::discriminant(&self.kind) != mem::discriminant(&other.kind) {
- return false
- }
- match (&self.kind, &other.kind) {
- (&Eq, &Eq) |
- (&Lt, &Lt) |
- (&Le, &Le) |
- (&EqEq, &EqEq) |
- (&Ne, &Ne) |
- (&Ge, &Ge) |
- (&Gt, &Gt) |
- (&AndAnd, &AndAnd) |
- (&OrOr, &OrOr) |
- (&Not, &Not) |
- (&Tilde, &Tilde) |
- (&At, &At) |
- (&Dot, &Dot) |
- (&DotDot, &DotDot) |
- (&DotDotDot, &DotDotDot) |
- (&DotDotEq, &DotDotEq) |
- (&Comma, &Comma) |
- (&Semi, &Semi) |
- (&Colon, &Colon) |
- (&ModSep, &ModSep) |
- (&RArrow, &RArrow) |
- (&LArrow, &LArrow) |
- (&FatArrow, &FatArrow) |
- (&Pound, &Pound) |
- (&Dollar, &Dollar) |
- (&Question, &Question) |
- (&Whitespace, &Whitespace) |
- (&Comment, &Comment) |
- (&Eof, &Eof) => true,
-
- (&BinOp(a), &BinOp(b)) |
- (&BinOpEq(a), &BinOpEq(b)) => a == b,
-
- (&OpenDelim(a), &OpenDelim(b)) |
- (&CloseDelim(a), &CloseDelim(b)) => a == b,
-
- (&DocComment(a), &DocComment(b)) |
- (&Shebang(a), &Shebang(b)) => a == b,
-
- (&Literal(a), &Literal(b)) => a == b,
-
- (&Lifetime(a), &Lifetime(b)) => a == b,
- (&Ident(a, b), &Ident(c, d)) => b == d && (a == c ||
- a == kw::DollarCrate ||
- c == kw::DollarCrate),
-
- (&Interpolated(_), &Interpolated(_)) => false,
-
- _ => panic!("forgot to add a token?"),
- }
- }
-}
-
-impl PartialEq<TokenKind> for Token {
- fn eq(&self, rhs: &TokenKind) -> bool {
- self.kind == *rhs
- }
-}
-
-#[derive(Clone, RustcEncodable, RustcDecodable)]
-/// For interpolation during macro expansion.
-pub enum Nonterminal {
- NtItem(P<ast::Item>),
- NtBlock(P<ast::Block>),
- NtStmt(ast::Stmt),
- NtPat(P<ast::Pat>),
- NtExpr(P<ast::Expr>),
- NtTy(P<ast::Ty>),
- NtIdent(ast::Ident, /* is_raw */ bool),
- NtLifetime(ast::Ident),
- NtLiteral(P<ast::Expr>),
- /// Stuff inside brackets for attributes
- NtMeta(ast::AttrItem),
- NtPath(ast::Path),
- NtVis(ast::Visibility),
- NtTT(TokenTree),
- // Used only for passing items to proc macro attributes (they are not
- // strictly necessary for that, `Annotatable` can be converted into
- // tokens directly, but doing that naively regresses pretty-printing).
- NtTraitItem(ast::TraitItem),
- NtImplItem(ast::ImplItem),
- NtForeignItem(ast::ForeignItem),
-}
-
-impl PartialEq for Nonterminal {
- fn eq(&self, rhs: &Self) -> bool {
- match (self, rhs) {
- (NtIdent(ident_lhs, is_raw_lhs), NtIdent(ident_rhs, is_raw_rhs)) =>
- ident_lhs == ident_rhs && is_raw_lhs == is_raw_rhs,
- (NtLifetime(ident_lhs), NtLifetime(ident_rhs)) => ident_lhs == ident_rhs,
- (NtTT(tt_lhs), NtTT(tt_rhs)) => tt_lhs == tt_rhs,
- // FIXME: Assume that all "complex" nonterminal are not equal, we can't compare them
- // correctly based on data from AST. This will prevent them from matching each other
- // in macros. The comparison will become possible only when each nonterminal has an
- // attached token stream from which it was parsed.
- _ => false,
- }
- }
-}
-
-impl fmt::Debug for Nonterminal {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- match *self {
- NtItem(..) => f.pad("NtItem(..)"),
- NtBlock(..) => f.pad("NtBlock(..)"),
- NtStmt(..) => f.pad("NtStmt(..)"),
- NtPat(..) => f.pad("NtPat(..)"),
- NtExpr(..) => f.pad("NtExpr(..)"),
- NtTy(..) => f.pad("NtTy(..)"),
- NtIdent(..) => f.pad("NtIdent(..)"),
- NtLiteral(..) => f.pad("NtLiteral(..)"),
- NtMeta(..) => f.pad("NtMeta(..)"),
- NtPath(..) => f.pad("NtPath(..)"),
- NtTT(..) => f.pad("NtTT(..)"),
- NtImplItem(..) => f.pad("NtImplItem(..)"),
- NtTraitItem(..) => f.pad("NtTraitItem(..)"),
- NtForeignItem(..) => f.pad("NtForeignItem(..)"),
- NtVis(..) => f.pad("NtVis(..)"),
- NtLifetime(..) => f.pad("NtLifetime(..)"),
- }
- }
-}
+++ /dev/null
-//! Utilities for rendering escape sequence errors as diagnostics.
-
-use std::ops::Range;
-use std::iter::once;
-
-use rustc_lexer::unescape::{EscapeError, Mode};
-use syntax_pos::{Span, BytePos};
-
-use crate::errors::{Handler, Applicability};
-
-pub(crate) fn emit_unescape_error(
- handler: &Handler,
- // interior part of the literal, without quotes
- lit: &str,
- // full span of the literal, including quotes
- span_with_quotes: Span,
- mode: Mode,
- // range of the error inside `lit`
- range: Range<usize>,
- error: EscapeError,
-) {
- log::debug!("emit_unescape_error: {:?}, {:?}, {:?}, {:?}, {:?}",
- lit, span_with_quotes, mode, range, error);
- let span = {
- let Range { start, end } = range;
- let (start, end) = (start as u32, end as u32);
- let lo = span_with_quotes.lo() + BytePos(start + 1);
- let hi = lo + BytePos(end - start);
- span_with_quotes
- .with_lo(lo)
- .with_hi(hi)
- };
- let last_char = || {
- let c = lit[range.clone()].chars().rev().next().unwrap();
- let span = span.with_lo(span.hi() - BytePos(c.len_utf8() as u32));
- (c, span)
- };
- match error {
- EscapeError::LoneSurrogateUnicodeEscape => {
- handler.struct_span_err(span, "invalid unicode character escape")
- .help("unicode escape must not be a surrogate")
- .emit();
- }
- EscapeError::OutOfRangeUnicodeEscape => {
- handler.struct_span_err(span, "invalid unicode character escape")
- .help("unicode escape must be at most 10FFFF")
- .emit();
- }
- EscapeError::MoreThanOneChar => {
- let msg = if mode.is_bytes() {
- "if you meant to write a byte string literal, use double quotes"
- } else {
- "if you meant to write a `str` literal, use double quotes"
- };
-
- handler
- .struct_span_err(
- span_with_quotes,
- "character literal may only contain one codepoint",
- )
- .span_suggestion(
- span_with_quotes,
- msg,
- format!("\"{}\"", lit),
- Applicability::MachineApplicable,
- ).emit()
- }
- EscapeError::EscapeOnlyChar => {
- let (c, _span) = last_char();
-
- let mut msg = if mode.is_bytes() {
- "byte constant must be escaped: "
- } else {
- "character constant must be escaped: "
- }.to_string();
- push_escaped_char(&mut msg, c);
-
- handler.span_err(span, msg.as_str())
- }
- EscapeError::BareCarriageReturn => {
- let msg = if mode.in_double_quotes() {
- "bare CR not allowed in string, use \\r instead"
- } else {
- "character constant must be escaped: \\r"
- };
- handler.span_err(span, msg);
- }
- EscapeError::BareCarriageReturnInRawString => {
- assert!(mode.in_double_quotes());
- let msg = "bare CR not allowed in raw string";
- handler.span_err(span, msg);
- }
- EscapeError::InvalidEscape => {
- let (c, span) = last_char();
-
- let label = if mode.is_bytes() {
- "unknown byte escape"
- } else {
- "unknown character escape"
- };
- let mut msg = label.to_string();
- msg.push_str(": ");
- push_escaped_char(&mut msg, c);
-
- let mut diag = handler.struct_span_err(span, msg.as_str());
- diag.span_label(span, label);
- if c == '{' || c == '}' && !mode.is_bytes() {
- diag.help("if used in a formatting string, \
- curly braces are escaped with `{{` and `}}`");
- } else if c == '\r' {
- diag.help("this is an isolated carriage return; \
- consider checking your editor and version control settings");
- }
- diag.emit();
- }
- EscapeError::TooShortHexEscape => {
- handler.span_err(span, "numeric character escape is too short")
- }
- EscapeError::InvalidCharInHexEscape | EscapeError::InvalidCharInUnicodeEscape => {
- let (c, span) = last_char();
-
- let mut msg = if error == EscapeError::InvalidCharInHexEscape {
- "invalid character in numeric character escape: "
- } else {
- "invalid character in unicode escape: "
- }.to_string();
- push_escaped_char(&mut msg, c);
-
- handler.span_err(span, msg.as_str())
- }
- EscapeError::NonAsciiCharInByte => {
- assert!(mode.is_bytes());
- let (_c, span) = last_char();
- handler.span_err(span, "byte constant must be ASCII. \
- Use a \\xHH escape for a non-ASCII byte")
- }
- EscapeError::NonAsciiCharInByteString => {
- assert!(mode.is_bytes());
- let (_c, span) = last_char();
- handler.span_err(span, "raw byte string must be ASCII")
- }
- EscapeError::OutOfRangeHexEscape => {
- handler.span_err(span, "this form of character escape may only be used \
- with characters in the range [\\x00-\\x7f]")
- }
- EscapeError::LeadingUnderscoreUnicodeEscape => {
- let (_c, span) = last_char();
- handler.span_err(span, "invalid start of unicode escape")
- }
- EscapeError::OverlongUnicodeEscape => {
- handler.span_err(span, "overlong unicode escape (must have at most 6 hex digits)")
- }
- EscapeError::UnclosedUnicodeEscape => {
- handler.span_err(span, "unterminated unicode escape (needed a `}`)")
- }
- EscapeError::NoBraceInUnicodeEscape => {
- let msg = "incorrect unicode escape sequence";
- let mut diag = handler.struct_span_err(span, msg);
-
- let mut suggestion = "\\u{".to_owned();
- let mut suggestion_len = 0;
- let (c, char_span) = last_char();
- let chars = once(c).chain(lit[range.end..].chars());
- for c in chars.take(6).take_while(|c| c.is_digit(16)) {
- suggestion.push(c);
- suggestion_len += c.len_utf8();
- }
-
- if suggestion_len > 0 {
- suggestion.push('}');
- let lo = char_span.lo();
- let hi = lo + BytePos(suggestion_len as u32);
- diag.span_suggestion(
- span.with_lo(lo).with_hi(hi),
- "format of unicode escape sequences uses braces",
- suggestion,
- Applicability::MaybeIncorrect,
- );
- } else {
- diag.span_label(span, msg);
- diag.help(
- "format of unicode escape sequences is `\\u{...}`",
- );
- }
-
- diag.emit();
- }
- EscapeError::UnicodeEscapeInByte => {
- handler.span_err(span, "unicode escape sequences cannot be used \
- as a byte or in a byte string")
- }
- EscapeError::EmptyUnicodeEscape => {
- handler.span_err(span, "empty unicode escape (must have at least 1 hex digit)")
- }
- EscapeError::ZeroChars => {
- handler.span_err(span, "empty character literal")
- }
- EscapeError::LoneSlash => {
- handler.span_err(span, "invalid trailing slash in literal")
- }
- }
-}
-
-/// Pushes a character to a message string for error reporting
-pub(crate) fn push_escaped_char(msg: &mut String, c: char) {
- match c {
- '\u{20}'..='\u{7e}' => {
- // Don't escape \, ' or " for user-facing messages
- msg.push(c);
- }
- _ => {
- msg.extend(c.escape_default());
- }
- }
-}
use crate::ast::{SelfKind, GenericBound, TraitBoundModifier};
use crate::ast::{Attribute, MacDelimiter, GenericArg};
use crate::util::parser::{self, AssocOp, Fixity};
+use crate::util::comments;
use crate::attr;
use crate::source_map::{self, SourceMap, Spanned};
-use crate::parse::token::{self, BinOpToken, DelimToken, Nonterminal, Token, TokenKind};
-use crate::parse::lexer::comments;
-use crate::parse;
+use crate::token::{self, BinOpToken, DelimToken, Nonterminal, Token, TokenKind};
use crate::print::pp::{self, Breaks};
use crate::print::pp::Breaks::{Consistent, Inconsistent};
use crate::ptr::P;
+use crate::util::classify;
use crate::sess::ParseSess;
use crate::symbol::{kw, sym};
use crate::tokenstream::{self, TokenStream, TokenTree};
-use rustc_target::spec::abi::{self, Abi};
use syntax_pos::{self, BytePos};
use syntax_pos::{FileName, Span};
self.hardbreak_if_not_bol();
}
self.maybe_print_comment(attr.span.lo());
- if attr.is_sugared_doc {
- self.word(attr.value_str().unwrap().to_string());
- self.hardbreak()
- } else {
- match attr.style {
- ast::AttrStyle::Inner => self.word("#!["),
- ast::AttrStyle::Outer => self.word("#["),
+ match attr.kind {
+ ast::AttrKind::Normal(ref item) => {
+ match attr.style {
+ ast::AttrStyle::Inner => self.word("#!["),
+ ast::AttrStyle::Outer => self.word("#["),
+ }
+ self.print_attr_item(&item, attr.span);
+ self.word("]");
+ }
+ ast::AttrKind::DocComment(comment) => {
+ self.word(comment.to_string());
+ self.hardbreak()
}
- self.print_attr_item(&attr.item, attr.span);
- self.word("]");
}
}
}
ast::ItemKind::ForeignMod(ref nmod) => {
self.head("extern");
- self.word_nbsp(nmod.abi.to_string());
+ self.print_abi(nmod.abi);
self.bopen();
self.print_foreign_mod(nmod, &item.attrs);
self.bclose(item.span);
ast::StmtKind::Expr(ref expr) => {
self.space_if_not_bol();
self.print_expr_outer_attr_style(expr, false);
- if parse::classify::expr_requires_semi_to_be_stmt(expr) {
+ if classify::expr_requires_semi_to_be_stmt(expr) {
self.s.word(";");
}
}
}
crate fn print_ty_fn(&mut self,
- abi: abi::Abi,
+ abi: ast::Abi,
unsafety: ast::Unsafety,
decl: &ast::FnDecl,
name: Option<ast::Ident>,
self.print_asyncness(header.asyncness.node);
self.print_unsafety(header.unsafety);
- if header.abi != Abi::Rust {
+ if header.abi.symbol != sym::Rust {
self.word_nbsp("extern");
- self.word_nbsp(header.abi.to_string());
+ self.print_abi(header.abi);
}
self.s.word("fn")
}
+ fn print_abi(&mut self, abi: ast::Abi) {
+ self.word_nbsp(format!("\"{}\"", abi.symbol));
+ }
+
crate fn print_unsafety(&mut self, s: ast::Unsafety) {
match s {
ast::Unsafety::Normal => {},
assert_eq!(
fun_to_string(
&decl,
- ast::FnHeader {
- unsafety: ast::Unsafety::Normal,
- constness: source_map::dummy_spanned(ast::Constness::NotConst),
- asyncness: source_map::dummy_spanned(ast::IsAsync::NotAsync),
- abi: Abi::Rust,
- },
+ ast::FnHeader::default(),
abba_ident,
&generics
),
/// Collected spans during parsing for places where a certain feature was
/// used and should be feature gated accordingly in `check_crate`.
#[derive(Default)]
-crate struct GatedSpans {
- /// Spans collected for gating `let_chains`, e.g. `if a && let b = c {}`.
- crate let_chains: Lock<Vec<Span>>,
- /// Spans collected for gating `async_closure`, e.g. `async || ..`.
- crate async_closure: Lock<Vec<Span>>,
- /// Spans collected for gating `yield e?` expressions (`generators` gate).
- crate yields: Lock<Vec<Span>>,
- /// Spans collected for gating `or_patterns`, e.g. `Some(Foo | Bar)`.
- crate or_patterns: Lock<Vec<Span>>,
- /// Spans collected for gating `const_extern_fn`, e.g. `const extern fn foo`.
- crate const_extern_fn: Lock<Vec<Span>>,
- /// Spans collected for gating `trait_alias`, e.g. `trait Foo = Ord + Eq;`.
- pub trait_alias: Lock<Vec<Span>>,
- /// Spans collected for gating `associated_type_bounds`, e.g. `Iterator<Item: Ord>`.
- pub associated_type_bounds: Lock<Vec<Span>>,
- /// Spans collected for gating `crate_visibility_modifier`, e.g. `crate fn`.
- pub crate_visibility_modifier: Lock<Vec<Span>>,
- /// Spans collected for gating `const_generics`, e.g. `const N: usize`.
- pub const_generics: Lock<Vec<Span>>,
- /// Spans collected for gating `decl_macro`, e.g. `macro m() {}`.
- pub decl_macro: Lock<Vec<Span>>,
- /// Spans collected for gating `box_patterns`, e.g. `box 0`.
- pub box_patterns: Lock<Vec<Span>>,
- /// Spans collected for gating `exclusive_range_pattern`, e.g. `0..2`.
- pub exclusive_range_pattern: Lock<Vec<Span>>,
- /// Spans collected for gating `try_blocks`, e.g. `try { a? + b? }`.
- pub try_blocks: Lock<Vec<Span>>,
- /// Spans collected for gating `label_break_value`, e.g. `'label: { ... }`.
- pub label_break_value: Lock<Vec<Span>>,
- /// Spans collected for gating `box_syntax`, e.g. `box $expr`.
- pub box_syntax: Lock<Vec<Span>>,
- /// Spans collected for gating `type_ascription`, e.g. `42: usize`.
- pub type_ascription: Lock<Vec<Span>>,
+pub struct GatedSpans {
+ pub spans: Lock<FxHashMap<Symbol, Vec<Span>>>,
+}
+
+impl GatedSpans {
+ /// Feature gate the given `span` under the given `feature`,
+ /// which is the same `Symbol` used in `active.rs`.
+ pub fn gate(&self, feature: Symbol, span: Span) {
+ self.spans
+ .borrow_mut()
+ .entry(feature)
+ .or_default()
+ .push(span);
+ }
+
+ /// Ungate the last span under the given `feature`.
+ /// Panics if the given `span` wasn't the last one.
+ ///
+ /// Using this is discouraged unless you have a really good reason to.
+ pub fn ungate_last(&self, feature: Symbol, span: Span) {
+ let removed_span = self.spans
+ .borrow_mut()
+ .entry(feature)
+ .or_default()
+ .pop()
+ .unwrap();
+ debug_assert_eq!(span, removed_span);
+ }
+
+ /// Is the provided `feature` gate ungated currently?
+ ///
+ /// Using this is discouraged unless you have a really good reason to.
+ pub fn is_ungated(&self, feature: Symbol) -> bool {
+ self.spans
+ .borrow()
+ .get(&feature)
+ .map_or(true, |spans| spans.is_empty())
+ }
+
+ /// Prepend the given set of `spans` onto the set in `self`.
+ pub fn merge(&self, mut spans: FxHashMap<Symbol, Vec<Span>>) {
+ let mut inner = self.spans.borrow_mut();
+ for (gate, mut gate_spans) in inner.drain() {
+ spans.entry(gate).or_default().append(&mut gate_spans);
+ }
+ *inner = spans;
+ }
}
/// Info about a parsing session.
/// analysis.
pub ambiguous_block_expr_parse: Lock<FxHashMap<Span, Span>>,
pub injected_crate_name: Once<Symbol>,
- crate gated_spans: GatedSpans,
+ pub gated_spans: GatedSpans,
/// The parser has reached `Eof` due to an unclosed brace. Used to silence unnecessary errors.
pub reached_eof: Lock<bool>,
}
use crate::ast;
-use crate::parse::{PResult, source_file_to_stream};
+use crate::parse::source_file_to_stream;
use crate::parse::new_parser_from_source_str;
use crate::parse::parser::Parser;
use crate::sess::ParseSess;
use crate::with_default_globals;
use errors::emitter::EmitterWriter;
-use errors::Handler;
+use errors::{PResult, Handler};
use rustc_data_structures::sync::Lrc;
use syntax_pos::{BytePos, Span, MultiSpan};
--- /dev/null
+pub use BinOpToken::*;
+pub use Nonterminal::*;
+pub use DelimToken::*;
+pub use LitKind::*;
+pub use TokenKind::*;
+
+use crate::ast;
+use crate::ptr::P;
+use crate::symbol::kw;
+use crate::tokenstream::TokenTree;
+
+use syntax_pos::symbol::Symbol;
+use syntax_pos::{self, Span, DUMMY_SP};
+
+use std::fmt;
+use std::mem;
+#[cfg(target_arch = "x86_64")]
+use rustc_data_structures::static_assert_size;
+use rustc_data_structures::sync::Lrc;
+
+#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]
+pub enum BinOpToken {
+ Plus,
+ Minus,
+ Star,
+ Slash,
+ Percent,
+ Caret,
+ And,
+ Or,
+ Shl,
+ Shr,
+}
+
+/// A delimiter token.
+#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]
+pub enum DelimToken {
+ /// A round parenthesis (i.e., `(` or `)`).
+ Paren,
+ /// A square bracket (i.e., `[` or `]`).
+ Bracket,
+ /// A curly brace (i.e., `{` or `}`).
+ Brace,
+ /// An empty delimiter.
+ NoDelim,
+}
+
+impl DelimToken {
+ pub fn len(self) -> usize {
+ if self == NoDelim { 0 } else { 1 }
+ }
+
+ pub fn is_empty(self) -> bool {
+ self == NoDelim
+ }
+}
+
+#[derive(Clone, Copy, PartialEq, RustcEncodable, RustcDecodable, Debug)]
+pub enum LitKind {
+ Bool, // AST only, must never appear in a `Token`
+ Byte,
+ Char,
+ Integer,
+ Float,
+ Str,
+ StrRaw(u16), // raw string delimited by `n` hash symbols
+ ByteStr,
+ ByteStrRaw(u16), // raw byte string delimited by `n` hash symbols
+ Err,
+}
+
+/// A literal token.
+#[derive(Clone, Copy, PartialEq, RustcEncodable, RustcDecodable, Debug)]
+pub struct Lit {
+ pub kind: LitKind,
+ pub symbol: Symbol,
+ pub suffix: Option<Symbol>,
+}
+
+impl fmt::Display for Lit {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ let Lit { kind, symbol, suffix } = *self;
+ match kind {
+ Byte => write!(f, "b'{}'", symbol)?,
+ Char => write!(f, "'{}'", symbol)?,
+ Str => write!(f, "\"{}\"", symbol)?,
+ StrRaw(n) => write!(f, "r{delim}\"{string}\"{delim}",
+ delim="#".repeat(n as usize),
+ string=symbol)?,
+ ByteStr => write!(f, "b\"{}\"", symbol)?,
+ ByteStrRaw(n) => write!(f, "br{delim}\"{string}\"{delim}",
+ delim="#".repeat(n as usize),
+ string=symbol)?,
+ Integer |
+ Float |
+ Bool |
+ Err => write!(f, "{}", symbol)?,
+ }
+
+ if let Some(suffix) = suffix {
+ write!(f, "{}", suffix)?;
+ }
+
+ Ok(())
+ }
+}
+
+impl LitKind {
+ /// An English article for the literal token kind.
+ crate fn article(self) -> &'static str {
+ match self {
+ Integer | Err => "an",
+ _ => "a",
+ }
+ }
+
+ crate fn descr(self) -> &'static str {
+ match self {
+ Bool => panic!("literal token contains `Lit::Bool`"),
+ Byte => "byte",
+ Char => "char",
+ Integer => "integer",
+ Float => "float",
+ Str | StrRaw(..) => "string",
+ ByteStr | ByteStrRaw(..) => "byte string",
+ Err => "error",
+ }
+ }
+
+ crate fn may_have_suffix(self) -> bool {
+ match self {
+ Integer | Float | Err => true,
+ _ => false,
+ }
+ }
+}
+
+impl Lit {
+ pub fn new(kind: LitKind, symbol: Symbol, suffix: Option<Symbol>) -> Lit {
+ Lit { kind, symbol, suffix }
+ }
+}
+
+pub(crate) fn ident_can_begin_expr(name: ast::Name, span: Span, is_raw: bool) -> bool {
+ let ident_token = Token::new(Ident(name, is_raw), span);
+ token_can_begin_expr(&ident_token)
+}
+
+pub(crate) fn token_can_begin_expr(ident_token: &Token) -> bool {
+ !ident_token.is_reserved_ident() ||
+ ident_token.is_path_segment_keyword() ||
+ match ident_token.kind {
+ TokenKind::Ident(ident, _) => [
+ kw::Async,
+ kw::Do,
+ kw::Box,
+ kw::Break,
+ kw::Continue,
+ kw::False,
+ kw::For,
+ kw::If,
+ kw::Let,
+ kw::Loop,
+ kw::Match,
+ kw::Move,
+ kw::Return,
+ kw::True,
+ kw::Unsafe,
+ kw::While,
+ kw::Yield,
+ kw::Static,
+ ].contains(&ident),
+ _ => false,
+ }
+}
+
+fn ident_can_begin_type(name: ast::Name, span: Span, is_raw: bool) -> bool {
+ let ident_token = Token::new(Ident(name, is_raw), span);
+
+ !ident_token.is_reserved_ident() ||
+ ident_token.is_path_segment_keyword() ||
+ [
+ kw::Underscore,
+ kw::For,
+ kw::Impl,
+ kw::Fn,
+ kw::Unsafe,
+ kw::Extern,
+ kw::Typeof,
+ kw::Dyn,
+ ].contains(&name)
+}
+
+#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Debug)]
+pub enum TokenKind {
+ /* Expression-operator symbols. */
+ Eq,
+ Lt,
+ Le,
+ EqEq,
+ Ne,
+ Ge,
+ Gt,
+ AndAnd,
+ OrOr,
+ Not,
+ Tilde,
+ BinOp(BinOpToken),
+ BinOpEq(BinOpToken),
+
+ /* Structural symbols */
+ At,
+ Dot,
+ DotDot,
+ DotDotDot,
+ DotDotEq,
+ Comma,
+ Semi,
+ Colon,
+ ModSep,
+ RArrow,
+ LArrow,
+ FatArrow,
+ Pound,
+ Dollar,
+ Question,
+ /// Used by proc macros for representing lifetimes; not generated by the lexer right now.
+ SingleQuote,
+ /// An opening delimiter (e.g., `{`).
+ OpenDelim(DelimToken),
+ /// A closing delimiter (e.g., `}`).
+ CloseDelim(DelimToken),
+
+ /* Literals */
+ Literal(Lit),
+
+ /* Name components */
+ Ident(ast::Name, /* is_raw */ bool),
+ Lifetime(ast::Name),
+
+ Interpolated(Lrc<Nonterminal>),
+
+ // Can be expanded into several tokens.
+ /// A doc comment.
+ DocComment(ast::Name),
+
+ // Junk. These carry no data because we don't really care about the data
+ // they *would* carry, and don't really want to allocate a new ident for
+ // them. Instead, users could extract that from the associated span.
+
+ /// Whitespace.
+ Whitespace,
+ /// A comment.
+ Comment,
+ Shebang(ast::Name),
+ /// A completely invalid token which should be skipped.
+ Unknown(ast::Name),
+
+ Eof,
+}
+
+// `TokenKind` is used a lot. Make sure it doesn't unintentionally get bigger.
+#[cfg(target_arch = "x86_64")]
+static_assert_size!(TokenKind, 16);
+
+#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Debug)]
+pub struct Token {
+ pub kind: TokenKind,
+ pub span: Span,
+}
+
+impl TokenKind {
+ pub fn lit(kind: LitKind, symbol: Symbol, suffix: Option<Symbol>) -> TokenKind {
+ Literal(Lit::new(kind, symbol, suffix))
+ }
+
+ /// Returns tokens that are likely to be typed accidentally instead of the current token.
+ /// Enables better error recovery when the wrong token is found.
+ crate fn similar_tokens(&self) -> Option<Vec<TokenKind>> {
+ match *self {
+ Comma => Some(vec![Dot, Lt, Semi]),
+ Semi => Some(vec![Colon, Comma]),
+ _ => None
+ }
+ }
+}
+
+impl Token {
+ pub fn new(kind: TokenKind, span: Span) -> Self {
+ Token { kind, span }
+ }
+
+ /// Some token that will be thrown away later.
+ crate fn dummy() -> Self {
+ Token::new(TokenKind::Whitespace, DUMMY_SP)
+ }
+
+ /// Recovers a `Token` from an `ast::Ident`. This creates a raw identifier if necessary.
+ pub fn from_ast_ident(ident: ast::Ident) -> Self {
+ Token::new(Ident(ident.name, ident.is_raw_guess()), ident.span)
+ }
+
+ /// Returns this token by value and leaves a dummy token in its place.
+ pub fn take(&mut self) -> Self {
+ mem::replace(self, Token::dummy())
+ }
+
+ crate fn is_op(&self) -> bool {
+ match self.kind {
+ OpenDelim(..) | CloseDelim(..) | Literal(..) | DocComment(..) |
+ Ident(..) | Lifetime(..) | Interpolated(..) |
+ Whitespace | Comment | Shebang(..) | Eof => false,
+ _ => true,
+ }
+ }
+
+ crate fn is_like_plus(&self) -> bool {
+ match self.kind {
+ BinOp(Plus) | BinOpEq(Plus) => true,
+ _ => false,
+ }
+ }
+
+ /// Returns `true` if the token can appear at the start of an expression.
+ pub fn can_begin_expr(&self) -> bool {
+ match self.kind {
+ Ident(name, is_raw) =>
+ ident_can_begin_expr(name, self.span, is_raw), // value name or keyword
+ OpenDelim(..) | // tuple, array or block
+ Literal(..) | // literal
+ Not | // operator not
+ BinOp(Minus) | // unary minus
+ BinOp(Star) | // dereference
+ BinOp(Or) | OrOr | // closure
+ BinOp(And) | // reference
+ AndAnd | // double reference
+ // DotDotDot is no longer supported, but we need some way to display the error
+ DotDot | DotDotDot | DotDotEq | // range notation
+ Lt | BinOp(Shl) | // associated path
+ ModSep | // global path
+ Lifetime(..) | // labeled loop
+ Pound => true, // expression attributes
+ Interpolated(ref nt) => match **nt {
+ NtLiteral(..) |
+ NtIdent(..) |
+ NtExpr(..) |
+ NtBlock(..) |
+ NtPath(..) |
+ NtLifetime(..) => true,
+ _ => false,
+ },
+ _ => false,
+ }
+ }
+
+ /// Returns `true` if the token can appear at the start of a type.
+ pub fn can_begin_type(&self) -> bool {
+ match self.kind {
+ Ident(name, is_raw) =>
+ ident_can_begin_type(name, self.span, is_raw), // type name or keyword
+ OpenDelim(Paren) | // tuple
+ OpenDelim(Bracket) | // array
+ Not | // never
+ BinOp(Star) | // raw pointer
+ BinOp(And) | // reference
+ AndAnd | // double reference
+ Question | // maybe bound in trait object
+ Lifetime(..) | // lifetime bound in trait object
+ Lt | BinOp(Shl) | // associated path
+ ModSep => true, // global path
+ Interpolated(ref nt) => match **nt {
+ NtIdent(..) | NtTy(..) | NtPath(..) | NtLifetime(..) => true,
+ _ => false,
+ },
+ _ => false,
+ }
+ }
+
+ /// Returns `true` if the token can appear at the start of a const param.
+ crate fn can_begin_const_arg(&self) -> bool {
+ match self.kind {
+ OpenDelim(Brace) => true,
+ Interpolated(ref nt) => match **nt {
+ NtExpr(..) | NtBlock(..) | NtLiteral(..) => true,
+ _ => false,
+ }
+ _ => self.can_begin_literal_or_bool(),
+ }
+ }
+
+ /// Returns `true` if the token can appear at the start of a generic bound.
+ crate fn can_begin_bound(&self) -> bool {
+ self.is_path_start() || self.is_lifetime() || self.is_keyword(kw::For) ||
+ self == &Question || self == &OpenDelim(Paren)
+ }
+
+ /// Returns `true` if the token is any literal.
+ pub fn is_lit(&self) -> bool {
+ match self.kind {
+ Literal(..) => true,
+ _ => false,
+ }
+ }
+
+ /// Returns `true` if the token is any literal, a minus (which can prefix a literal,
+ /// for example a `-42`), or one of the boolean idents.
+ pub fn can_begin_literal_or_bool(&self) -> bool {
+ match self.kind {
+ Literal(..) | BinOp(Minus) => true,
+ Ident(name, false) if name.is_bool_lit() => true,
+ Interpolated(ref nt) => match **nt {
+ NtLiteral(..) => true,
+ _ => false,
+ },
+ _ => false,
+ }
+ }
+
+ /// Returns an identifier if this token is an identifier.
+ pub fn ident(&self) -> Option<(ast::Ident, /* is_raw */ bool)> {
+ match self.kind {
+ Ident(name, is_raw) => Some((ast::Ident::new(name, self.span), is_raw)),
+ Interpolated(ref nt) => match **nt {
+ NtIdent(ident, is_raw) => Some((ident, is_raw)),
+ _ => None,
+ },
+ _ => None,
+ }
+ }
+
+ /// Returns a lifetime identifier if this token is a lifetime.
+ pub fn lifetime(&self) -> Option<ast::Ident> {
+ match self.kind {
+ Lifetime(name) => Some(ast::Ident::new(name, self.span)),
+ Interpolated(ref nt) => match **nt {
+ NtLifetime(ident) => Some(ident),
+ _ => None,
+ },
+ _ => None,
+ }
+ }
+
+ /// Returns `true` if the token is an identifier.
+ pub fn is_ident(&self) -> bool {
+ self.ident().is_some()
+ }
+
+ /// Returns `true` if the token is a lifetime.
+ crate fn is_lifetime(&self) -> bool {
+ self.lifetime().is_some()
+ }
+
+ /// Returns `true` if the token is an identifier whose name is the given
+ /// symbol.
+ crate fn is_ident_named(&self, name: Symbol) -> bool {
+ self.ident().map_or(false, |(ident, _)| ident.name == name)
+ }
+
+ /// Returns `true` if the token is an interpolated path.
+ fn is_path(&self) -> bool {
+ if let Interpolated(ref nt) = self.kind {
+ if let NtPath(..) = **nt {
+ return true;
+ }
+ }
+ false
+ }
+
+ /// Would `maybe_whole_expr` in `parser.rs` return `Ok(..)`?
+ /// That is, is this a pre-parsed expression dropped into the token stream
+ /// (which happens while parsing the result of macro expansion)?
+ crate fn is_whole_expr(&self) -> bool {
+ if let Interpolated(ref nt) = self.kind {
+ if let NtExpr(_) | NtLiteral(_) | NtPath(_) | NtIdent(..) | NtBlock(_) = **nt {
+ return true;
+ }
+ }
+
+ false
+ }
+
+ /// Returns `true` if the token is either the `mut` or `const` keyword.
+ crate fn is_mutability(&self) -> bool {
+ self.is_keyword(kw::Mut) ||
+ self.is_keyword(kw::Const)
+ }
+
+ crate fn is_qpath_start(&self) -> bool {
+ self == &Lt || self == &BinOp(Shl)
+ }
+
+ crate fn is_path_start(&self) -> bool {
+ self == &ModSep || self.is_qpath_start() || self.is_path() ||
+ self.is_path_segment_keyword() || self.is_ident() && !self.is_reserved_ident()
+ }
+
+ /// Returns `true` if the token is a given keyword, `kw`.
+ pub fn is_keyword(&self, kw: Symbol) -> bool {
+ self.is_non_raw_ident_where(|id| id.name == kw)
+ }
+
+ crate fn is_path_segment_keyword(&self) -> bool {
+ self.is_non_raw_ident_where(ast::Ident::is_path_segment_keyword)
+ }
+
+ // Returns true for reserved identifiers used internally for elided lifetimes,
+ // unnamed method parameters, crate root module, error recovery etc.
+ crate fn is_special_ident(&self) -> bool {
+ self.is_non_raw_ident_where(ast::Ident::is_special)
+ }
+
+ /// Returns `true` if the token is a keyword used in the language.
+ crate fn is_used_keyword(&self) -> bool {
+ self.is_non_raw_ident_where(ast::Ident::is_used_keyword)
+ }
+
+ /// Returns `true` if the token is a keyword reserved for possible future use.
+ crate fn is_unused_keyword(&self) -> bool {
+ self.is_non_raw_ident_where(ast::Ident::is_unused_keyword)
+ }
+
+ /// Returns `true` if the token is either a special identifier or a keyword.
+ pub fn is_reserved_ident(&self) -> bool {
+ self.is_non_raw_ident_where(ast::Ident::is_reserved)
+ }
+
+ /// Returns `true` if the token is the identifier `true` or `false`.
+ crate fn is_bool_lit(&self) -> bool {
+ self.is_non_raw_ident_where(|id| id.name.is_bool_lit())
+ }
+
+ /// Returns `true` if the token is a non-raw identifier for which `pred` holds.
+ fn is_non_raw_ident_where(&self, pred: impl FnOnce(ast::Ident) -> bool) -> bool {
+ match self.ident() {
+ Some((id, false)) => pred(id),
+ _ => false,
+ }
+ }
+
+ crate fn glue(&self, joint: &Token) -> Option<Token> {
+ let kind = match self.kind {
+ Eq => match joint.kind {
+ Eq => EqEq,
+ Gt => FatArrow,
+ _ => return None,
+ },
+ Lt => match joint.kind {
+ Eq => Le,
+ Lt => BinOp(Shl),
+ Le => BinOpEq(Shl),
+ BinOp(Minus) => LArrow,
+ _ => return None,
+ },
+ Gt => match joint.kind {
+ Eq => Ge,
+ Gt => BinOp(Shr),
+ Ge => BinOpEq(Shr),
+ _ => return None,
+ },
+ Not => match joint.kind {
+ Eq => Ne,
+ _ => return None,
+ },
+ BinOp(op) => match joint.kind {
+ Eq => BinOpEq(op),
+ BinOp(And) if op == And => AndAnd,
+ BinOp(Or) if op == Or => OrOr,
+ Gt if op == Minus => RArrow,
+ _ => return None,
+ },
+ Dot => match joint.kind {
+ Dot => DotDot,
+ DotDot => DotDotDot,
+ _ => return None,
+ },
+ DotDot => match joint.kind {
+ Dot => DotDotDot,
+ Eq => DotDotEq,
+ _ => return None,
+ },
+ Colon => match joint.kind {
+ Colon => ModSep,
+ _ => return None,
+ },
+ SingleQuote => match joint.kind {
+ Ident(name, false) => Lifetime(Symbol::intern(&format!("'{}", name))),
+ _ => return None,
+ },
+
+ Le | EqEq | Ne | Ge | AndAnd | OrOr | Tilde | BinOpEq(..) | At | DotDotDot |
+ DotDotEq | Comma | Semi | ModSep | RArrow | LArrow | FatArrow | Pound | Dollar |
+ Question | OpenDelim(..) | CloseDelim(..) |
+ Literal(..) | Ident(..) | Lifetime(..) | Interpolated(..) | DocComment(..) |
+ Whitespace | Comment | Shebang(..) | Unknown(..) | Eof => return None,
+ };
+
+ Some(Token::new(kind, self.span.to(joint.span)))
+ }
+
+ // See comments in `Nonterminal::to_tokenstream` for why we care about
+ // *probably* equal here rather than actual equality
+ crate fn probably_equal_for_proc_macro(&self, other: &Token) -> bool {
+ if mem::discriminant(&self.kind) != mem::discriminant(&other.kind) {
+ return false
+ }
+ match (&self.kind, &other.kind) {
+ (&Eq, &Eq) |
+ (&Lt, &Lt) |
+ (&Le, &Le) |
+ (&EqEq, &EqEq) |
+ (&Ne, &Ne) |
+ (&Ge, &Ge) |
+ (&Gt, &Gt) |
+ (&AndAnd, &AndAnd) |
+ (&OrOr, &OrOr) |
+ (&Not, &Not) |
+ (&Tilde, &Tilde) |
+ (&At, &At) |
+ (&Dot, &Dot) |
+ (&DotDot, &DotDot) |
+ (&DotDotDot, &DotDotDot) |
+ (&DotDotEq, &DotDotEq) |
+ (&Comma, &Comma) |
+ (&Semi, &Semi) |
+ (&Colon, &Colon) |
+ (&ModSep, &ModSep) |
+ (&RArrow, &RArrow) |
+ (&LArrow, &LArrow) |
+ (&FatArrow, &FatArrow) |
+ (&Pound, &Pound) |
+ (&Dollar, &Dollar) |
+ (&Question, &Question) |
+ (&Whitespace, &Whitespace) |
+ (&Comment, &Comment) |
+ (&Eof, &Eof) => true,
+
+ (&BinOp(a), &BinOp(b)) |
+ (&BinOpEq(a), &BinOpEq(b)) => a == b,
+
+ (&OpenDelim(a), &OpenDelim(b)) |
+ (&CloseDelim(a), &CloseDelim(b)) => a == b,
+
+ (&DocComment(a), &DocComment(b)) |
+ (&Shebang(a), &Shebang(b)) => a == b,
+
+ (&Literal(a), &Literal(b)) => a == b,
+
+ (&Lifetime(a), &Lifetime(b)) => a == b,
+ (&Ident(a, b), &Ident(c, d)) => b == d && (a == c ||
+ a == kw::DollarCrate ||
+ c == kw::DollarCrate),
+
+ (&Interpolated(_), &Interpolated(_)) => false,
+
+ _ => panic!("forgot to add a token?"),
+ }
+ }
+}
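The `glue` method above merges two adjacent single-character tokens into one multi-character token when the pair forms a valid compound (e.g. `<` followed by `=` becomes `<=`). A minimal standalone sketch of the same idea, using a simplified hypothetical token enum rather than the real `TokenKind`:

```rust
// Hypothetical, simplified token kinds; the real rustc `TokenKind` has
// many more variants and the real tokens carry spans.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Tok {
    Lt,
    Gt,
    Eq,
    Le,
    Ge,
    EqEq,
    FatArrow,
}

// Mirrors the shape of `Token::glue`: adjacent tokens may combine into
// one multi-character token; any other pairing returns `None`.
fn glue(first: Tok, joint: Tok) -> Option<Tok> {
    Some(match (first, joint) {
        (Tok::Lt, Tok::Eq) => Tok::Le,
        (Tok::Gt, Tok::Eq) => Tok::Ge,
        (Tok::Eq, Tok::Eq) => Tok::EqEq,
        (Tok::Eq, Tok::Gt) => Tok::FatArrow,
        _ => return None,
    })
}

fn main() {
    assert_eq!(glue(Tok::Lt, Tok::Eq), Some(Tok::Le));
    assert_eq!(glue(Tok::Eq, Tok::Gt), Some(Tok::FatArrow));
    assert_eq!(glue(Tok::Lt, Tok::Gt), None);
    println!("ok");
}
```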
+
+impl PartialEq<TokenKind> for Token {
+ fn eq(&self, rhs: &TokenKind) -> bool {
+ self.kind == *rhs
+ }
+}
+
+#[derive(Clone, RustcEncodable, RustcDecodable)]
+/// For interpolation during macro expansion.
+pub enum Nonterminal {
+ NtItem(P<ast::Item>),
+ NtBlock(P<ast::Block>),
+ NtStmt(ast::Stmt),
+ NtPat(P<ast::Pat>),
+ NtExpr(P<ast::Expr>),
+ NtTy(P<ast::Ty>),
+ NtIdent(ast::Ident, /* is_raw */ bool),
+ NtLifetime(ast::Ident),
+ NtLiteral(P<ast::Expr>),
+ /// Stuff inside brackets for attributes
+ NtMeta(ast::AttrItem),
+ NtPath(ast::Path),
+ NtVis(ast::Visibility),
+ NtTT(TokenTree),
+ // Used only for passing items to proc macro attributes (they are not
+ // strictly necessary for that, `Annotatable` can be converted into
+ // tokens directly, but doing that naively regresses pretty-printing).
+ NtTraitItem(ast::TraitItem),
+ NtImplItem(ast::ImplItem),
+ NtForeignItem(ast::ForeignItem),
+}
+
+impl PartialEq for Nonterminal {
+ fn eq(&self, rhs: &Self) -> bool {
+ match (self, rhs) {
+ (NtIdent(ident_lhs, is_raw_lhs), NtIdent(ident_rhs, is_raw_rhs)) =>
+ ident_lhs == ident_rhs && is_raw_lhs == is_raw_rhs,
+ (NtLifetime(ident_lhs), NtLifetime(ident_rhs)) => ident_lhs == ident_rhs,
+ (NtTT(tt_lhs), NtTT(tt_rhs)) => tt_lhs == tt_rhs,
+ // FIXME: Assume that all "complex" nonterminals are not equal; we can't compare them
+ // correctly based on data from the AST. This will prevent them from matching each other
+ // in macros. The comparison will become possible only when each nonterminal has an
+ // attached token stream from which it was parsed.
+ _ => false,
+ }
+ }
+}
+
+impl fmt::Debug for Nonterminal {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match *self {
+ NtItem(..) => f.pad("NtItem(..)"),
+ NtBlock(..) => f.pad("NtBlock(..)"),
+ NtStmt(..) => f.pad("NtStmt(..)"),
+ NtPat(..) => f.pad("NtPat(..)"),
+ NtExpr(..) => f.pad("NtExpr(..)"),
+ NtTy(..) => f.pad("NtTy(..)"),
+ NtIdent(..) => f.pad("NtIdent(..)"),
+ NtLiteral(..) => f.pad("NtLiteral(..)"),
+ NtMeta(..) => f.pad("NtMeta(..)"),
+ NtPath(..) => f.pad("NtPath(..)"),
+ NtTT(..) => f.pad("NtTT(..)"),
+ NtImplItem(..) => f.pad("NtImplItem(..)"),
+ NtTraitItem(..) => f.pad("NtTraitItem(..)"),
+ NtForeignItem(..) => f.pad("NtForeignItem(..)"),
+ NtVis(..) => f.pad("NtVis(..)"),
+ NtLifetime(..) => f.pad("NtLifetime(..)"),
+ }
+ }
+}
//! and a borrowed `TokenStream` is sufficient to build an owned `TokenStream` without taking
//! ownership of the original.
-use crate::parse::token::{self, DelimToken, Token, TokenKind};
+use crate::token::{self, DelimToken, Token, TokenKind};
use syntax_pos::{Span, DUMMY_SP};
#[cfg(target_arch = "x86_64")]
--- /dev/null
+//! Routines the parser uses to classify AST nodes
+
+// Predicates on exprs and stmts that the pretty-printer and parser use
+
+use crate::ast;
+
+/// Does this expression require a semicolon to be treated
+/// as a statement? The negation of this ("can this expression
+/// be used as a statement without a semicolon?") is used
+/// as an early bail-out in the parser so that, for instance,
+///     `if true {...} else {...}
+///      |x| 5`
+/// isn't parsed as `(if true {...} else {...} | x) | 5`.
+pub fn expr_requires_semi_to_be_stmt(e: &ast::Expr) -> bool {
+ match e.kind {
+ ast::ExprKind::If(..) |
+ ast::ExprKind::Match(..) |
+ ast::ExprKind::Block(..) |
+ ast::ExprKind::While(..) |
+ ast::ExprKind::Loop(..) |
+ ast::ExprKind::ForLoop(..) |
+ ast::ExprKind::TryBlock(..) => false,
+ _ => true,
+ }
+}
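A standalone sketch of the classification idea above, using a simplified hypothetical expression enum standing in for `ast::ExprKind`:

```rust
// Hypothetical, simplified expression kinds; the real `ast::ExprKind`
// carries payloads and has many more variants.
#[derive(Debug)]
enum SimpleExpr {
    If,
    Match,
    Block,
    Loop,
    Call,
    Binary,
}

// Mirrors `expr_requires_semi_to_be_stmt`: block-like expressions can
// stand alone as statements; everything else needs a trailing semicolon.
fn requires_semi_to_be_stmt(e: &SimpleExpr) -> bool {
    match e {
        SimpleExpr::If | SimpleExpr::Match | SimpleExpr::Block | SimpleExpr::Loop => false,
        _ => true,
    }
}

fn main() {
    assert!(!requires_semi_to_be_stmt(&SimpleExpr::If));
    assert!(!requires_semi_to_be_stmt(&SimpleExpr::Loop));
    assert!(requires_semi_to_be_stmt(&SimpleExpr::Call));
    println!("ok");
}
```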
--- /dev/null
+pub use CommentStyle::*;
+
+use crate::ast;
+use crate::source_map::SourceMap;
+use crate::sess::ParseSess;
+
+use syntax_pos::{BytePos, CharPos, Pos, FileName};
+
+use std::usize;
+
+use log::debug;
+
+#[cfg(test)]
+mod tests;
+
+#[derive(Clone, Copy, PartialEq, Debug)]
+pub enum CommentStyle {
+ /// No code on either side of each line of the comment
+ Isolated,
+ /// Code exists to the left of the comment
+ Trailing,
+ /// Code exists both before and after the comment, e.g. around `/* foo */`
+ Mixed,
+ /// Just a manual blank line "\n\n", for layout
+ BlankLine,
+}
+
+#[derive(Clone)]
+pub struct Comment {
+ pub style: CommentStyle,
+ pub lines: Vec<String>,
+ pub pos: BytePos,
+}
+
+crate fn is_line_doc_comment(s: &str) -> bool {
+ let res = (s.starts_with("///") && *s.as_bytes().get(3).unwrap_or(&b' ') != b'/') ||
+ s.starts_with("//!");
+ debug!("is {:?} a doc comment? {}", s, res);
+ res
+}
+
+crate fn is_block_doc_comment(s: &str) -> bool {
+ // Prevent `/**/` from being parsed as a doc comment
+ let res = ((s.starts_with("/**") && *s.as_bytes().get(3).unwrap_or(&b' ') != b'*') ||
+ s.starts_with("/*!")) && s.len() >= 5;
+ debug!("is {:?} a doc comment? {}", s, res);
+ res
+}
+
+crate fn is_doc_comment(s: &str) -> bool {
+ (s.starts_with("///") && is_line_doc_comment(s)) || s.starts_with("//!") ||
+ (s.starts_with("/**") && is_block_doc_comment(s)) || s.starts_with("/*!")
+}
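The line-doc-comment rule above can be restated standalone: `///` begins a doc comment unless the next byte is another `/` (so `////...` is a plain comment), while `//!` always begins an inner doc comment. A sketch under that reading (hypothetical helper name, not the real predicate):

```rust
// Same rule as `is_line_doc_comment`: `///` followed by anything except
// another `/` is a doc comment, and `//!` is an inner doc comment.
fn looks_like_line_doc_comment(s: &str) -> bool {
    (s.starts_with("///") && s.as_bytes().get(3) != Some(&b'/')) || s.starts_with("//!")
}

fn main() {
    assert!(looks_like_line_doc_comment("/// documented"));
    assert!(looks_like_line_doc_comment("//! inner docs"));
    assert!(!looks_like_line_doc_comment("//// just a separator line"));
    assert!(!looks_like_line_doc_comment("// plain comment"));
    println!("ok");
}
```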
+
+pub fn doc_comment_style(comment: &str) -> ast::AttrStyle {
+ assert!(is_doc_comment(comment));
+ if comment.starts_with("//!") || comment.starts_with("/*!") {
+ ast::AttrStyle::Inner
+ } else {
+ ast::AttrStyle::Outer
+ }
+}
+
+pub fn strip_doc_comment_decoration(comment: &str) -> String {
+ /// Removes whitespace-only lines from the start/end of `lines`.
+ fn vertical_trim(lines: Vec<String>) -> Vec<String> {
+ let mut i = 0;
+ let mut j = lines.len();
+ // first line of all-stars should be omitted
+ if !lines.is_empty() && lines[0].chars().all(|c| c == '*') {
+ i += 1;
+ }
+
+ while i < j && lines[i].trim().is_empty() {
+ i += 1;
+ }
+ // like the first, a last line of all stars should be omitted
+ if j > i &&
+ lines[j - 1]
+ .chars()
+ .skip(1)
+ .all(|c| c == '*') {
+ j -= 1;
+ }
+
+ while j > i && lines[j - 1].trim().is_empty() {
+ j -= 1;
+ }
+
+ lines[i..j].to_vec()
+ }
+
+ /// Removes a `[ \t]*\*` prefix from each line, if possible.
+ fn horizontal_trim(lines: Vec<String>) -> Vec<String> {
+ let mut i = usize::MAX;
+ let mut can_trim = true;
+ let mut first = true;
+
+ for line in &lines {
+ for (j, c) in line.chars().enumerate() {
+ if j > i || !"* \t".contains(c) {
+ can_trim = false;
+ break;
+ }
+ if c == '*' {
+ if first {
+ i = j;
+ first = false;
+ } else if i != j {
+ can_trim = false;
+ }
+ break;
+ }
+ }
+ if i >= line.len() {
+ can_trim = false;
+ }
+ if !can_trim {
+ break;
+ }
+ }
+
+ if can_trim {
+ lines.iter()
+ .map(|line| (&line[i + 1..line.len()]).to_string())
+ .collect()
+ } else {
+ lines
+ }
+ }
+
+ // one-line comments lose their prefix
+ const ONELINERS: &[&str] = &["///!", "///", "//!", "//"];
+
+ for prefix in ONELINERS {
+ if comment.starts_with(*prefix) {
+ return (&comment[prefix.len()..]).to_string();
+ }
+ }
+
+ if comment.starts_with("/*") {
+ let lines = comment[3..comment.len() - 2]
+ .lines()
+ .map(|s| s.to_string())
+ .collect::<Vec<String>>();
+
+ let lines = vertical_trim(lines);
+ let lines = horizontal_trim(lines);
+
+ return lines.join("\n");
+ }
+
+ panic!("not a doc-comment: {}", comment);
+}
+
+/// Returns `None` if the first `col` chars of `s` contain a non-whitespace char.
+/// Otherwise returns `Some(k)` where `k` is the first char offset after that leading
+/// whitespace. Note that `k` may be outside the bounds of `s`.
+fn all_whitespace(s: &str, col: CharPos) -> Option<usize> {
+ let mut idx = 0;
+ for (i, ch) in s.char_indices().take(col.to_usize()) {
+ if !ch.is_whitespace() {
+ return None;
+ }
+ idx = i + ch.len_utf8();
+ }
+ Some(idx)
+}
+
+fn trim_whitespace_prefix(s: &str, col: CharPos) -> &str {
+ let len = s.len();
+ match all_whitespace(&s, col) {
+ Some(col) => if col < len { &s[col..] } else { "" },
+ None => s,
+ }
+}
+
+fn split_block_comment_into_lines(
+ text: &str,
+ col: CharPos,
+) -> Vec<String> {
+ let mut res: Vec<String> = vec![];
+ let mut lines = text.lines();
+ // just push the first line
+ res.extend(lines.next().map(|it| it.to_string()));
+ // for other lines, strip common whitespace prefix
+ for line in lines {
+ res.push(trim_whitespace_prefix(line, col).to_string())
+ }
+ res
+}
+
+// it appears this function is called only from pprust... that's
+// probably not a good thing.
+crate fn gather_comments(sess: &ParseSess, path: FileName, src: String) -> Vec<Comment> {
+ let cm = SourceMap::new(sess.source_map().path_mapping().clone());
+ let source_file = cm.new_source_file(path, src);
+ let text = (*source_file.src.as_ref().unwrap()).clone();
+
+ let text: &str = text.as_str();
+ let start_bpos = source_file.start_pos;
+ let mut pos = 0;
+ let mut comments: Vec<Comment> = Vec::new();
+ let mut code_to_the_left = false;
+
+ if let Some(shebang_len) = rustc_lexer::strip_shebang(text) {
+ comments.push(Comment {
+ style: Isolated,
+ lines: vec![text[..shebang_len].to_string()],
+ pos: start_bpos,
+ });
+ pos += shebang_len;
+ }
+
+ for token in rustc_lexer::tokenize(&text[pos..]) {
+ let token_text = &text[pos..pos + token.len];
+ match token.kind {
+ rustc_lexer::TokenKind::Whitespace => {
+ if let Some(mut idx) = token_text.find('\n') {
+ code_to_the_left = false;
+ while let Some(next_newline) = &token_text[idx + 1..].find('\n') {
+ idx = idx + 1 + next_newline;
+ comments.push(Comment {
+ style: BlankLine,
+ lines: vec![],
+ pos: start_bpos + BytePos((pos + idx) as u32),
+ });
+ }
+ }
+ }
+ rustc_lexer::TokenKind::BlockComment { terminated: _ } => {
+ if !is_block_doc_comment(token_text) {
+ let code_to_the_right = match text[pos + token.len..].chars().next() {
+ Some('\r') | Some('\n') => false,
+ _ => true,
+ };
+ let style = match (code_to_the_left, code_to_the_right) {
+ (true, true) | (false, true) => Mixed,
+ (false, false) => Isolated,
+ (true, false) => Trailing,
+ };
+
+ // Count the number of chars since the start of the line by rescanning.
+ let pos_in_file = start_bpos + BytePos(pos as u32);
+ let line_begin_in_file = source_file.line_begin_pos(pos_in_file);
+ let line_begin_pos = (line_begin_in_file - start_bpos).to_usize();
+ let col = CharPos(text[line_begin_pos..pos].chars().count());
+
+ let lines = split_block_comment_into_lines(token_text, col);
+ comments.push(Comment { style, lines, pos: pos_in_file })
+ }
+ }
+ rustc_lexer::TokenKind::LineComment => {
+ if !is_doc_comment(token_text) {
+ comments.push(Comment {
+ style: if code_to_the_left { Trailing } else { Isolated },
+ lines: vec![token_text.to_string()],
+ pos: start_bpos + BytePos(pos as u32),
+ })
+ }
+ }
+ _ => {
+ code_to_the_left = true;
+ }
+ }
+ pos += token.len;
+ }
+
+ comments
+}
--- /dev/null
+use super::*;
+
+#[test]
+fn test_block_doc_comment_1() {
+ let comment = "/**\n * Test \n ** Test\n * Test\n*/";
+ let stripped = strip_doc_comment_decoration(comment);
+ assert_eq!(stripped, " Test \n* Test\n Test");
+}
+
+#[test]
+fn test_block_doc_comment_2() {
+ let comment = "/**\n * Test\n * Test\n*/";
+ let stripped = strip_doc_comment_decoration(comment);
+ assert_eq!(stripped, " Test\n Test");
+}
+
+#[test]
+fn test_block_doc_comment_3() {
+ let comment = "/**\n let a: *i32;\n *a = 5;\n*/";
+ let stripped = strip_doc_comment_decoration(comment);
+ assert_eq!(stripped, " let a: *i32;\n *a = 5;");
+}
+
+#[test]
+fn test_block_doc_comment_4() {
+ let comment = "/*******************\n test\n *********************/";
+ let stripped = strip_doc_comment_decoration(comment);
+ assert_eq!(stripped, " test");
+}
+
+#[test]
+fn test_line_doc_comment() {
+ let stripped = strip_doc_comment_decoration("/// test");
+ assert_eq!(stripped, " test");
+ let stripped = strip_doc_comment_decoration("///! test");
+ assert_eq!(stripped, " test");
+ let stripped = strip_doc_comment_decoration("// test");
+ assert_eq!(stripped, " test");
+ let stripped = strip_doc_comment_decoration("// test");
+ assert_eq!(stripped, " test");
+ let stripped = strip_doc_comment_decoration("///test");
+ assert_eq!(stripped, "test");
+ let stripped = strip_doc_comment_decoration("///!test");
+ assert_eq!(stripped, "test");
+ let stripped = strip_doc_comment_decoration("//test");
+ assert_eq!(stripped, "test");
+}
--- /dev/null
+//! Code related to parsing literals.
+
+use crate::ast::{self, Lit, LitKind};
+use crate::symbol::{kw, sym, Symbol};
+use crate::token::{self, Token};
+use crate::tokenstream::TokenTree;
+
+use log::debug;
+use rustc_data_structures::sync::Lrc;
+use syntax_pos::Span;
+use rustc_lexer::unescape::{unescape_char, unescape_byte};
+use rustc_lexer::unescape::{unescape_str, unescape_byte_str};
+use rustc_lexer::unescape::{unescape_raw_str, unescape_raw_byte_str};
+
+use std::ascii;
+
+crate enum LitError {
+ NotLiteral,
+ LexerError,
+ InvalidSuffix,
+ InvalidIntSuffix,
+ InvalidFloatSuffix,
+ NonDecimalFloat(u32),
+ IntTooLarge,
+}
+
+impl LitKind {
+ /// Converts a literal token into a semantic literal.
+ fn from_lit_token(lit: token::Lit) -> Result<LitKind, LitError> {
+ let token::Lit { kind, symbol, suffix } = lit;
+ if suffix.is_some() && !kind.may_have_suffix() {
+ return Err(LitError::InvalidSuffix);
+ }
+
+ Ok(match kind {
+ token::Bool => {
+ assert!(symbol.is_bool_lit());
+ LitKind::Bool(symbol == kw::True)
+ }
+ token::Byte => return unescape_byte(&symbol.as_str())
+ .map(LitKind::Byte).map_err(|_| LitError::LexerError),
+ token::Char => return unescape_char(&symbol.as_str())
+ .map(LitKind::Char).map_err(|_| LitError::LexerError),
+
+ // There are some valid suffixes for integer and float literals,
+ // so all the handling is done internally.
+ token::Integer => return integer_lit(symbol, suffix),
+ token::Float => return float_lit(symbol, suffix),
+
+ token::Str => {
+ // If there are no characters requiring special treatment we can
+ // reuse the symbol from the token. Otherwise, we must generate a
+ // new symbol because the string in the `LitKind` is different from the
+ // string in the token.
+ let s = symbol.as_str();
+ let symbol = if s.contains(&['\\', '\r'][..]) {
+ let mut buf = String::with_capacity(s.len());
+ let mut error = Ok(());
+ unescape_str(&s, &mut |_, unescaped_char| {
+ match unescaped_char {
+ Ok(c) => buf.push(c),
+ Err(_) => error = Err(LitError::LexerError),
+ }
+ });
+ error?;
+ Symbol::intern(&buf)
+ } else {
+ symbol
+ };
+ LitKind::Str(symbol, ast::StrStyle::Cooked)
+ }
+ token::StrRaw(n) => {
+ // Ditto.
+ let s = symbol.as_str();
+ let symbol = if s.contains('\r') {
+ let mut buf = String::with_capacity(s.len());
+ let mut error = Ok(());
+ unescape_raw_str(&s, &mut |_, unescaped_char| {
+ match unescaped_char {
+ Ok(c) => buf.push(c),
+ Err(_) => error = Err(LitError::LexerError),
+ }
+ });
+ error?;
+ buf.shrink_to_fit();
+ Symbol::intern(&buf)
+ } else {
+ symbol
+ };
+ LitKind::Str(symbol, ast::StrStyle::Raw(n))
+ }
+ token::ByteStr => {
+ let s = symbol.as_str();
+ let mut buf = Vec::with_capacity(s.len());
+ let mut error = Ok(());
+ unescape_byte_str(&s, &mut |_, unescaped_byte| {
+ match unescaped_byte {
+ Ok(c) => buf.push(c),
+ Err(_) => error = Err(LitError::LexerError),
+ }
+ });
+ error?;
+ buf.shrink_to_fit();
+ LitKind::ByteStr(Lrc::new(buf))
+ }
+ token::ByteStrRaw(_) => {
+ let s = symbol.as_str();
+ let bytes = if s.contains('\r') {
+ let mut buf = Vec::with_capacity(s.len());
+ let mut error = Ok(());
+ unescape_raw_byte_str(&s, &mut |_, unescaped_byte| {
+ match unescaped_byte {
+ Ok(c) => buf.push(c),
+ Err(_) => error = Err(LitError::LexerError),
+ }
+ });
+ error?;
+ buf.shrink_to_fit();
+ buf
+ } else {
+ symbol.to_string().into_bytes()
+ };
+
+ LitKind::ByteStr(Lrc::new(bytes))
+ },
+ token::Err => LitKind::Err(symbol),
+ })
+ }
+
+ /// Attempts to recover a token from a semantic literal.
+ /// This function is used when the original token doesn't exist (e.g. the literal is created
+ /// by an AST-based macro) or is unavailable (e.g. from HIR pretty-printing).
+ pub fn to_lit_token(&self) -> token::Lit {
+ let (kind, symbol, suffix) = match *self {
+ LitKind::Str(symbol, ast::StrStyle::Cooked) => {
+ // Don't re-intern unless the escaped string is different.
+ let s = symbol.as_str();
+ let escaped = s.escape_default().to_string();
+ let symbol = if s == escaped { symbol } else { Symbol::intern(&escaped) };
+ (token::Str, symbol, None)
+ }
+ LitKind::Str(symbol, ast::StrStyle::Raw(n)) => {
+ (token::StrRaw(n), symbol, None)
+ }
+ LitKind::ByteStr(ref bytes) => {
+ let string = bytes.iter().cloned().flat_map(ascii::escape_default)
+ .map(Into::<char>::into).collect::<String>();
+ (token::ByteStr, Symbol::intern(&string), None)
+ }
+ LitKind::Byte(byte) => {
+ let string: String = ascii::escape_default(byte).map(Into::<char>::into).collect();
+ (token::Byte, Symbol::intern(&string), None)
+ }
+ LitKind::Char(ch) => {
+ let string: String = ch.escape_default().map(Into::<char>::into).collect();
+ (token::Char, Symbol::intern(&string), None)
+ }
+ LitKind::Int(n, ty) => {
+ let suffix = match ty {
+ ast::LitIntType::Unsigned(ty) => Some(ty.name()),
+ ast::LitIntType::Signed(ty) => Some(ty.name()),
+ ast::LitIntType::Unsuffixed => None,
+ };
+ (token::Integer, sym::integer(n), suffix)
+ }
+ LitKind::Float(symbol, ty) => {
+ let suffix = match ty {
+ ast::LitFloatType::Suffixed(ty) => Some(ty.name()),
+ ast::LitFloatType::Unsuffixed => None,
+ };
+ (token::Float, symbol, suffix)
+ }
+ LitKind::Bool(value) => {
+ let symbol = if value { kw::True } else { kw::False };
+ (token::Bool, symbol, None)
+ }
+ LitKind::Err(symbol) => {
+ (token::Err, symbol, None)
+ }
+ };
+
+ token::Lit::new(kind, symbol, suffix)
+ }
+}
+
+impl Lit {
+ /// Converts a literal token into an AST literal.
+ crate fn from_lit_token(token: token::Lit, span: Span) -> Result<Lit, LitError> {
+ Ok(Lit { token, kind: LitKind::from_lit_token(token)?, span })
+ }
+
+ /// Converts an arbitrary token into an AST literal.
+ crate fn from_token(token: &Token) -> Result<Lit, LitError> {
+ let lit = match token.kind {
+ token::Ident(name, false) if name.is_bool_lit() =>
+ token::Lit::new(token::Bool, name, None),
+ token::Literal(lit) =>
+ lit,
+ token::Interpolated(ref nt) => {
+ if let token::NtExpr(expr) | token::NtLiteral(expr) = &**nt {
+ if let ast::ExprKind::Lit(lit) = &expr.kind {
+ return Ok(lit.clone());
+ }
+ }
+ return Err(LitError::NotLiteral);
+ }
+ _ => return Err(LitError::NotLiteral)
+ };
+
+ Lit::from_lit_token(lit, token.span)
+ }
+
+ /// Attempts to recover an AST literal from a semantic literal.
+ /// This function is used when the original token doesn't exist (e.g. the literal is created
+ /// by an AST-based macro) or is unavailable (e.g. from HIR pretty-printing).
+ pub fn from_lit_kind(kind: LitKind, span: Span) -> Lit {
+ Lit { token: kind.to_lit_token(), kind, span }
+ }
+
+ /// Losslessly converts an AST literal into a token tree.
+ crate fn token_tree(&self) -> TokenTree {
+ let token = match self.token.kind {
+ token::Bool => token::Ident(self.token.symbol, false),
+ _ => token::Literal(self.token),
+ };
+ TokenTree::token(token, self.span)
+ }
+}
+
+fn strip_underscores(symbol: Symbol) -> Symbol {
+ // Do not allocate a new string unless necessary.
+ let s = symbol.as_str();
+ if s.contains('_') {
+ let mut s = s.to_string();
+ s.retain(|c| c != '_');
+ return Symbol::intern(&s);
+ }
+ symbol
+}
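The underscore-stripping step above can be sketched standalone. The real function works on interned `Symbol`s and avoids re-interning when no underscore is present; this simplified version just returns an owned `String`:

```rust
// Sketch of `strip_underscores` over plain strings (assumption: symbol
// interning is irrelevant to the transformation itself).
fn strip_underscores(s: &str) -> String {
    // Drop every '_' digit separator; "1_000" becomes "1000".
    s.chars().filter(|&c| c != '_').collect()
}

fn main() {
    assert_eq!(strip_underscores("1_000_000"), "1000000");
    assert_eq!(strip_underscores("0xFF_FF"), "0xFFFF");
    assert_eq!(strip_underscores("42"), "42");
    println!("ok");
}
```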
+
+fn filtered_float_lit(symbol: Symbol, suffix: Option<Symbol>, base: u32)
+ -> Result<LitKind, LitError> {
+ debug!("filtered_float_lit: {:?}, {:?}, {:?}", symbol, suffix, base);
+ if base != 10 {
+ return Err(LitError::NonDecimalFloat(base));
+ }
+ Ok(match suffix {
+ Some(suf) => LitKind::Float(symbol, ast::LitFloatType::Suffixed(match suf {
+ sym::f32 => ast::FloatTy::F32,
+ sym::f64 => ast::FloatTy::F64,
+ _ => return Err(LitError::InvalidFloatSuffix),
+ })),
+ None => LitKind::Float(symbol, ast::LitFloatType::Unsuffixed)
+ })
+}
+
+fn float_lit(symbol: Symbol, suffix: Option<Symbol>) -> Result<LitKind, LitError> {
+ debug!("float_lit: {:?}, {:?}", symbol, suffix);
+ filtered_float_lit(strip_underscores(symbol), suffix, 10)
+}
+
+fn integer_lit(symbol: Symbol, suffix: Option<Symbol>) -> Result<LitKind, LitError> {
+ debug!("integer_lit: {:?}, {:?}", symbol, suffix);
+ let symbol = strip_underscores(symbol);
+ let s = symbol.as_str();
+
+ let base = match s.as_bytes() {
+ [b'0', b'x', ..] => 16,
+ [b'0', b'o', ..] => 8,
+ [b'0', b'b', ..] => 2,
+ _ => 10,
+ };
+
+ let ty = match suffix {
+ Some(suf) => match suf {
+ sym::isize => ast::LitIntType::Signed(ast::IntTy::Isize),
+ sym::i8 => ast::LitIntType::Signed(ast::IntTy::I8),
+ sym::i16 => ast::LitIntType::Signed(ast::IntTy::I16),
+ sym::i32 => ast::LitIntType::Signed(ast::IntTy::I32),
+ sym::i64 => ast::LitIntType::Signed(ast::IntTy::I64),
+ sym::i128 => ast::LitIntType::Signed(ast::IntTy::I128),
+ sym::usize => ast::LitIntType::Unsigned(ast::UintTy::Usize),
+ sym::u8 => ast::LitIntType::Unsigned(ast::UintTy::U8),
+ sym::u16 => ast::LitIntType::Unsigned(ast::UintTy::U16),
+ sym::u32 => ast::LitIntType::Unsigned(ast::UintTy::U32),
+ sym::u64 => ast::LitIntType::Unsigned(ast::UintTy::U64),
+ sym::u128 => ast::LitIntType::Unsigned(ast::UintTy::U128),
+ // `1f64` and `2f32` etc. are valid float literals, and
+ // `fxxx` looks more like an invalid float literal than an invalid integer literal.
+ _ if suf.as_str().starts_with('f') => return filtered_float_lit(symbol, suffix, base),
+ _ => return Err(LitError::InvalidIntSuffix),
+ }
+ _ => ast::LitIntType::Unsuffixed
+ };
+
+ let s = &s[if base != 10 { 2 } else { 0 } ..];
+ u128::from_str_radix(s, base).map(|i| LitKind::Int(i, ty)).map_err(|_| {
+ // Small bases are lexed as if they were base 10, e.g., the string
+ // might be `0b10201`. This will cause the conversion above to fail,
+ // but these kinds of errors are already reported by the lexer.
+ let from_lexer =
+ base < 10 && s.chars().any(|c| c.to_digit(10).map_or(false, |d| d >= base));
+ if from_lexer { LitError::LexerError } else { LitError::IntTooLarge }
+ })
+}
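The base detection and radix parsing in `integer_lit` can be sketched on its own, minus the suffix handling and the lexer-error classification (`parse_int_literal` is a hypothetical helper name for illustration):

```rust
// Detect a `0x`/`0o`/`0b` base prefix, then parse the remaining digits
// in that radix, as `integer_lit` does via `u128::from_str_radix`.
fn parse_int_literal(s: &str) -> Option<u128> {
    let (base, digits) = match s.as_bytes() {
        [b'0', b'x', ..] => (16, &s[2..]),
        [b'0', b'o', ..] => (8, &s[2..]),
        [b'0', b'b', ..] => (2, &s[2..]),
        _ => (10, s),
    };
    u128::from_str_radix(digits, base).ok()
}

fn main() {
    assert_eq!(parse_int_literal("0xff"), Some(255));
    assert_eq!(parse_int_literal("0o17"), Some(15));
    assert_eq!(parse_int_literal("0b101"), Some(5));
    assert_eq!(parse_int_literal("123"), Some(123));
    // Digits invalid for the base fail to parse, as in the real code.
    assert_eq!(parse_int_literal("0b102"), None);
    println!("ok");
}
```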
-use crate::parse::token::{self, Token, BinOpToken};
+use crate::token::{self, Token, BinOpToken};
use crate::symbol::kw;
use crate::ast::{self, BinOpKind};
//! those that are created by the expansion of a macro.
use crate::ast::*;
-use crate::parse::token::Token;
+use crate::token::Token;
use crate::tokenstream::{TokenTree, TokenStream};
use syntax_pos::Span;
}
pub fn walk_attribute<'a, V: Visitor<'a>>(visitor: &mut V, attr: &'a Attribute) {
- visitor.visit_tts(attr.tokens.clone());
+ match attr.kind {
+ AttrKind::Normal(ref item) => visitor.visit_tts(item.tokens.clone()),
+ AttrKind::DocComment(_) => {}
+ }
}
pub fn walk_tt<'a, V: Visitor<'a>>(visitor: &mut V, tt: TokenTree) {
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_index = { path = "../librustc_index" }
rustc_lexer = { path = "../librustc_lexer" }
-rustc_target = { path = "../librustc_target" }
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
syntax = { path = "../libsyntax" }
use syntax::edition::Edition;
use syntax::mut_visit::{self, MutVisitor};
use syntax::parse::{self, parser, DirectoryOwnership};
-use syntax::parse::token;
use syntax::ptr::P;
use syntax::sess::ParseSess;
use syntax::symbol::{kw, sym, Ident, Symbol};
use syntax::{ThinVec, MACRO_ARGUMENTS};
+use syntax::token;
use syntax::tokenstream::{self, TokenStream};
use syntax::visit::Visitor;
use syntax::config::StripUnconfigured;
use syntax::feature_gate::{self, Features, GateIssue, is_builtin_attr, emit_feature_err};
use syntax::mut_visit::*;
-use syntax::parse::{DirectoryOwnership, PResult};
-use syntax::parse::token;
+use syntax::parse::DirectoryOwnership;
use syntax::parse::parser::Parser;
use syntax::print::pprust;
use syntax::ptr::P;
use syntax::sess::ParseSess;
use syntax::symbol::{sym, Symbol};
+use syntax::token;
use syntax::tokenstream::{TokenStream, TokenTree};
use syntax::visit::{self, Visitor};
use syntax::util::map_in_place::MapInPlace;
-use errors::{Applicability, FatalError};
+use errors::{PResult, Applicability, FatalError};
use smallvec::{smallvec, SmallVec};
use syntax_pos::{Span, DUMMY_SP, FileName};
}
let mut item = self.fully_configure(item);
- item.visit_attrs(|attrs| attrs.retain(|a| a.path != sym::derive));
+ item.visit_attrs(|attrs| attrs.retain(|a| !a.has_name(sym::derive)));
let mut helper_attrs = Vec::new();
let mut has_copy = false;
for ext in exts {
| Annotatable::Variant(..)
=> panic!("unexpected annotatable"),
})), DUMMY_SP).into();
- let input = self.extract_proc_macro_attr_input(attr.item.tokens, span);
+ let item = attr.unwrap_normal_item();
+ let input = self.extract_proc_macro_attr_input(item.tokens, span);
let tok_result = expander.expand(self.cx, span, input, item_tok);
- self.parse_ast_fragment(tok_result, fragment_kind, &attr.item.path, span)
+ self.parse_ast_fragment(tok_result, fragment_kind, &item.path, span)
}
SyntaxExtensionKind::LegacyAttr(expander) => {
match attr.parse_meta(self.cx.parse_sess) {
-> Option<ast::Attribute> {
let attr = attrs.iter()
.position(|a| {
- if a.path == sym::derive {
+ if a.has_name(sym::derive) {
*after_derive = true;
}
!attr::is_known(a) && !is_builtin_attr(a)
.map(|i| attrs.remove(i));
if let Some(attr) = &attr {
if !self.cx.ecfg.custom_inner_attributes() &&
- attr.style == ast::AttrStyle::Inner && attr.path != sym::test {
+ attr.style == ast::AttrStyle::Inner && !attr.has_name(sym::test) {
emit_feature_err(&self.cx.parse_sess, sym::custom_inner_attributes,
attr.span, GateIssue::Language,
"non-builtin inner attributes are unstable");
feature_gate::check_attribute(attr, self.cx.parse_sess, features);
// macros are expanded before any lint passes so this warning has to be hardcoded
- if attr.path == sym::derive {
+ if attr.has_name(sym::derive) {
self.cx.struct_span_warn(attr.span, "`#[derive]` does nothing on macro invocations")
.note("this may become a hard error in a future release")
.emit();
let meta = attr::mk_list_item(Ident::with_dummy_span(sym::doc), items);
*at = attr::Attribute {
- item: AttrItem { path: meta.path, tokens: meta.kind.tokens(meta.span) },
+ kind: ast::AttrKind::Normal(
+ AttrItem { path: meta.path, tokens: meta.kind.tokens(meta.span) },
+ ),
span: at.span,
id: at.id,
style: at.style,
- is_sugared_doc: false,
};
} else {
noop_visit_attribute(at, self)
crate mod quoted;
use syntax::ast;
-use syntax::parse::token::{self, Token, TokenKind};
+use syntax::token::{self, Token, TokenKind};
use syntax::tokenstream::{DelimSpan};
use syntax_pos::Span;
use syntax::ast::NodeId;
use syntax::early_buffered_lints::BufferedEarlyLintId;
-use syntax::parse::token::{DelimToken, Token, TokenKind};
+use syntax::token::{DelimToken, Token, TokenKind};
use syntax::sess::ParseSess;
use syntax::symbol::{kw, sym};
use crate::mbe::{self, TokenTree};
use syntax::ast::{Ident, Name};
-use syntax::parse::{Directory, PResult};
+use syntax::parse::Directory;
use syntax::parse::parser::{Parser, PathStyle};
-use syntax::parse::token::{self, DocComment, Nonterminal, Token};
use syntax::print::pprust;
use syntax::sess::ParseSess;
use syntax::symbol::{kw, sym, Symbol};
+use syntax::token::{self, DocComment, Nonterminal, Token};
use syntax::tokenstream::{DelimSpan, TokenStream};
-use errors::FatalError;
+use errors::{PResult, FatalError};
use smallvec::{smallvec, SmallVec};
use syntax_pos::Span;
use syntax::edition::Edition;
use syntax::feature_gate::Features;
use syntax::parse::parser::Parser;
-use syntax::parse::token::TokenKind::*;
-use syntax::parse::token::{self, NtTT, Token};
use syntax::parse::Directory;
use syntax::print::pprust;
use syntax::sess::ParseSess;
use syntax::symbol::{kw, sym, Symbol};
+use syntax::token::{self, NtTT, Token, TokenKind::*};
use syntax::tokenstream::{DelimSpan, TokenStream};
use errors::{DiagnosticBuilder, FatalError};
use rustc_data_structures::fx::FxHashMap;
use std::borrow::Cow;
use std::collections::hash_map::Entry;
-use std::slice;
+use std::{mem, slice};
use errors::Applicability;
use rustc_data_structures::sync::Lrc;
// Which arm's failure should we report? (the one furthest along)
let mut best_failure: Option<(Token, &str)> = None;
-
for (i, lhs) in lhses.iter().enumerate() {
// try each arm's matchers
let lhs_tt = match *lhs {
_ => cx.span_bug(sp, "malformed macro lhs"),
};
+ // Take a snapshot of the state of pre-expansion gating at this point.
+ // This is used so that if a matcher is not `Success(..)`ful,
+ // then the spans which became gated when parsing the unsuccessful matcher
+ // are not recorded. On the first `Success(..)`ful matcher, the spans are merged.
+ let mut gated_spans_snaphot = mem::take(&mut *cx.parse_sess.gated_spans.spans.borrow_mut());
+
match parse_tt(cx, lhs_tt, arg.clone()) {
Success(named_matches) => {
+ // The matcher was `Success(..)`ful.
+ // Merge the gated spans from parsing the matcher with the pre-existing ones.
+ cx.parse_sess.gated_spans.merge(gated_spans_snaphot);
+
let rhs = match rhses[i] {
// ignore delimiters
mbe::TokenTree::Delimited(_, ref delimed) => delimed.tts.clone(),
},
Error(err_sp, ref msg) => cx.span_fatal(err_sp.substitute_dummy(sp), &msg[..]),
}
+
+ // The matcher was not `Success(..)`ful.
+ // Restore to the state before snapshotting and maybe try again.
+ mem::swap(&mut gated_spans_snaphot, &mut cx.parse_sess.gated_spans.spans.borrow_mut());
}
let (token, label) = best_failure.expect("ran no matchers");
use crate::mbe::{TokenTree, KleeneOp, KleeneToken, SequenceRepetition, Delimited};
use syntax::ast;
-use syntax::parse::token::{self, Token};
use syntax::print::pprust;
use syntax::sess::ParseSess;
use syntax::symbol::kw;
+use syntax::token::{self, Token};
use syntax::tokenstream;
use syntax_pos::Span;
use syntax::ast::{Ident, Mac};
use syntax::mut_visit::{self, MutVisitor};
-use syntax::parse::token::{self, NtTT, Token};
+use syntax::token::{self, NtTT, Token};
use syntax::tokenstream::{DelimSpan, TokenStream, TokenTree, TreeAndJoint};
use smallvec::{smallvec, SmallVec};
use syntax::ast::{self, ItemKind, Attribute, Mac};
use syntax::attr::{mark_used, mark_known};
use syntax::errors::{Applicability, FatalError};
-use syntax::parse::{self, token};
+use syntax::parse;
use syntax::symbol::sym;
+use syntax::token;
use syntax::tokenstream::{self, TokenStream};
use syntax::visit::Visitor;
crate fn collect_derives(cx: &mut ExtCtxt<'_>, attrs: &mut Vec<ast::Attribute>) -> Vec<ast::Path> {
let mut result = Vec::new();
attrs.retain(|attr| {
- if attr.path != sym::derive {
+ if !attr.has_name(sym::derive) {
return true;
}
if !attr.is_meta_item_list() {
}
let parse_derive_paths = |attr: &ast::Attribute| {
- if attr.tokens.is_empty() {
+ if attr.get_normal_item().tokens.is_empty() {
return Ok(Vec::new());
}
parse::parse_in_attr(cx.parse_sess, attr, |p| p.parse_derive_paths())
use crate::base::ExtCtxt;
use syntax::ast;
-use syntax::parse::{self, token};
-use syntax::parse::lexer::comments;
+use syntax::parse;
+use syntax::util::comments;
use syntax::print::pprust;
use syntax::sess::ParseSess;
+use syntax::token;
use syntax::tokenstream::{self, DelimSpan, IsJoint::*, TokenStream, TreeAndJoint};
use errors::Diagnostic;
{
fn from_internal(((tree, is_joint), sess, stack): (TreeAndJoint, &ParseSess, &mut Vec<Self>))
-> Self {
- use syntax::parse::token::*;
+ use syntax::token::*;
let joint = is_joint == Joint;
let Token { kind, span } = match tree {
impl ToInternal<TokenStream> for TokenTree<Group, Punct, Ident, Literal> {
fn to_internal(self) -> TokenStream {
- use syntax::parse::token::*;
+ use syntax::token::*;
let (ch, joint, span) = match self {
TokenTree::Punct(Punct { ch, joint, span }) => (ch, joint, span),
use syntax::ast;
use syntax_expand::base::{self, *};
-use syntax::parse::token::{self, Token};
+use syntax::token::{self, Token};
use syntax::ptr::P;
use syntax::symbol::{kw, sym, Symbol};
use syntax::ast::AsmDialect;
use syntax::ast::{self, *};
use syntax_expand::base::*;
-use syntax::parse::token::{self, TokenKind};
+use syntax::token::{self, TokenKind};
use syntax::parse::parser::Parser;
use syntax::print::pprust;
use syntax::ptr::P;
use syntax_expand::base::{self, *};
use syntax::attr;
use syntax::tokenstream::TokenStream;
-use syntax::parse::token;
+use syntax::token;
use syntax_pos::Span;
pub fn expand_cfg(
use syntax::ast::{self, AttrItem, AttrStyle};
use syntax::attr::mk_attr;
-use syntax::parse::{self, token};
+use syntax::parse;
+use syntax::token;
use syntax::sess::ParseSess;
use syntax_expand::panictry;
use syntax_pos::FileName;
match e.kind {
ast::ExprKind::Lit(ref lit) => match lit.kind {
ast::LitKind::Str(ref s, _)
- | ast::LitKind::Float(ref s, _)
- | ast::LitKind::FloatUnsuffixed(ref s) => {
+ | ast::LitKind::Float(ref s, _) => {
accumulator.push_str(&s.as_str());
}
ast::LitKind::Char(c) => {
use syntax::ast;
use syntax_expand::base::{self, *};
-use syntax::parse::token::{self, Token};
+use syntax::token::{self, Token};
use syntax::ptr::P;
use syntax_pos::Span;
use syntax_pos::symbol::Symbol;
use std::vec;
use rustc_data_structures::thin_vec::ThinVec;
-use rustc_target::spec::abi::Abi;
-use syntax::ast::{self, BinOpKind, EnumDef, Expr, Generics, Ident, PatKind};
+use syntax::ast::{self, Abi, BinOpKind, EnumDef, Expr, Generics, Ident, PatKind};
use syntax::ast::{VariantData, GenericParamKind, GenericArg};
use syntax::attr;
use syntax::source_map::respan;
self,
type_ident,
generics,
- Abi::Rust,
+ sym::Rust,
explicit_self,
tys,
body)
self,
type_ident,
generics,
- Abi::Rust,
+ sym::Rust,
explicit_self,
tys,
body)
trait_: &TraitDef<'_>,
type_ident: Ident,
generics: &Generics,
- abi: Abi,
+ abi: Symbol,
explicit_self: Option<ast::ExplicitSelf>,
arg_types: Vec<(Ident, P<ast::Ty>)>,
body: P<Expr>)
ast::Unsafety::Normal
};
+ let trait_lo_sp = trait_.span.shrink_to_lo();
+
+ let sig = ast::MethodSig {
+ header: ast::FnHeader {
+ unsafety,
+ abi: Abi::new(abi, trait_lo_sp),
+ ..ast::FnHeader::default()
+ },
+ decl: fn_decl,
+ };
+
// Create the method.
ast::ImplItem {
id: ast::DUMMY_NODE_ID,
attrs: self.attributes.clone(),
generics: fn_generics,
span: trait_.span,
- vis: respan(trait_.span.shrink_to_lo(), ast::VisibilityKind::Inherited),
+ vis: respan(trait_lo_sp, ast::VisibilityKind::Inherited),
defaultness: ast::Defaultness::Final,
ident: method_ident,
- kind: ast::ImplItemKind::Method(ast::MethodSig {
- header: ast::FnHeader {
- unsafety, abi,
- ..ast::FnHeader::default()
- },
- decl: fn_decl,
- },
- body_block),
+ kind: ast::ImplItemKind::Method(sig, body_block),
tokens: None,
}
}
use syntax::ast;
use syntax_expand::base::{self, *};
-use syntax::parse::token;
+use syntax::token;
use syntax::ptr::P;
use syntax::symbol::{Symbol, sym};
use syntax::tokenstream::TokenStream;
use syntax::ast;
use syntax::source_map::respan;
use syntax_expand::base::{self, *};
-use syntax::parse::token;
+use syntax::token;
use syntax::ptr::P;
use syntax_pos::Span;
use syntax::tokenstream::TokenStream;
use syntax::ast::*;
use syntax::attr;
use syntax::edition::Edition;
-use syntax_expand::base::{Resolver, NamedSyntaxExtension};
-use syntax::parse::token;
use syntax::ptr::P;
use syntax::source_map::respan;
use syntax::symbol::sym;
+use syntax::token;
use syntax::tokenstream::*;
+use syntax_expand::base::{Resolver, NamedSyntaxExtension};
use syntax_pos::{Span, DUMMY_SP};
use syntax_pos::hygiene::{ExpnData, ExpnKind, AstPass};
for attr in &item.attrs {
if is_proc_macro_attr(&attr) {
if let Some(prev_attr) = found_attr {
- let path_str = pprust::path_to_string(&attr.path);
- let msg = if attr.path.segments[0].ident.name ==
- prev_attr.path.segments[0].ident.name {
+ let prev_item = prev_attr.get_normal_item();
+ let item = attr.get_normal_item();
+ let path_str = pprust::path_to_string(&item.path);
+ let msg = if item.path.segments[0].ident.name ==
+ prev_item.path.segments[0].ident.name {
format!(
"only one `#[{}]` attribute is allowed on any given function",
path_str,
"`#[{}]` and `#[{}]` attributes cannot both be applied
to the same function",
path_str,
- pprust::path_to_string(&prev_attr.path),
+ pprust::path_to_string(&prev_item.path),
)
};
if !is_fn {
let msg = format!(
"the `#[{}]` attribute may only be used on bare functions",
- pprust::path_to_string(&attr.path),
+ pprust::path_to_string(&attr.get_normal_item().path),
);
self.handler.span_err(attr.span, &msg);
if !self.is_proc_macro_crate {
let msg = format!(
"the `#[{}]` attribute is only usable with crates of the `proc-macro` crate type",
- pprust::path_to_string(&attr.path),
+ pprust::path_to_string(&attr.get_normal_item().path),
);
self.handler.span_err(attr.span, &msg);
use syntax_expand::panictry;
use syntax_expand::base::{self, *};
use syntax::ast;
-use syntax::parse::{self, token, DirectoryOwnership};
+use syntax::parse::{self, DirectoryOwnership};
use syntax::print::pprust;
use syntax::ptr::P;
use syntax::symbol::Symbol;
+use syntax::token;
use syntax::tokenstream::TokenStream;
use syntax::early_buffered_lints::BufferedEarlyLintId;
associated_type_bounds,
associated_type_defaults,
associated_types,
+ assume_init,
async_await,
async_closure,
attr,
match_beginning_vert,
match_default_bindings,
may_dangle,
+ maybe_uninit,
+ MaybeUninit,
mem,
member_constraints,
message,
rust_2018_preview,
rust_begin_unwind,
rustc,
+ Rust,
RustcDecodable,
RustcEncodable,
rustc_allocator,
underscore_imports,
underscore_lifetimes,
uniform_paths,
+ uninit,
uninitialized,
universal_impl_trait,
unmarked_api,
fn test_error_on_exceed() {
let types = [TestType::UnitTest, TestType::IntegrationTest, TestType::DocTest];
- for test_type in types.into_iter() {
+ for test_type in types.iter() {
let result = time_test_failure_template(*test_type);
assert_eq!(result, TestResult::TrTimedFail);
(TestType::DocTest, doc.critical.as_millis(), true, true),
];
- for (test_type, time, expected_warn, expected_critical) in test_vector.into_iter() {
+ for (test_type, time, expected_warn, expected_critical) in test_vector.iter() {
let test_desc = typed_test_desc(*test_type);
let exec_time = test_exec_time(*time as u64);
// Checks if the correct annotation for the efiapi ABI is passed to llvm.
-// revisions:x86_64 i686 aarch64 arm riscv
+// revisions:x86_64 i686 arm
// min-llvm-version 9.0
//[x86_64] compile-flags: --target x86_64-unknown-uefi
//[i686] compile-flags: --target i686-unknown-linux-musl
-//[aarch64] compile-flags: --target aarch64-unknown-none
//[arm] compile-flags: --target armv7r-none-eabi
-//[riscv] compile-flags: --target riscv64gc-unknown-none-elf
// compile-flags: -C no-prepopulate-passes
#![crate_type = "lib"]
//x86_64: define win64cc void @has_efiapi
//i686: define void @has_efiapi
-//aarch64: define void @has_efiapi
//arm: define void @has_efiapi
-//riscv: define void @has_efiapi
#[no_mangle]
pub extern "efiapi" fn has_efiapi() {}
// revisions: cfail1 cfail2
// build-pass (FIXME(62277): could be check-pass?)
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
#![deny(unused_attributes)]
#[rustc_on_unimplemented = "invalid"]
$(RUSTC) bar.rs --crate-type=rlib
$(RUSTC) bar.rs --crate-type=rlib -C extra-filename=-a
$(RUSTC) bar-alt.rs --crate-type=rlib
- $(RUSTC) foo.rs --extern hello && exit 1 || exit 0
$(RUSTC) foo.rs --extern bar=no-exist && exit 1 || exit 0
$(RUSTC) foo.rs --extern bar=foo.rs && exit 1 || exit 0
$(RUSTC) foo.rs \
--extern bar=$(TMPDIR)/libbar.rlib \
--extern bar=$(TMPDIR)/libbar-a.rlib
$(RUSTC) foo.rs --extern bar=$(TMPDIR)/libbar.rlib
+ # Try to be sneaky and load a private crate with a non-private name.
+ $(RUSTC) rustc.rs -Zforce-unstable-if-unmarked --crate-type=rlib
+ $(RUSTC) gated_unstable.rs --extern alloc=$(TMPDIR)/librustc.rlib 2>&1 | $(CGREP) 'rustc_private'
--- /dev/null
+extern crate alloc;
+
+fn main() {}
--- /dev/null
+pub fn foo() {}
--- /dev/null
+-include ../tools.mk
+
+# Test mixing pathless --extern with paths.
+
+all:
+ $(RUSTC) bar-static.rs --crate-name=bar --crate-type=rlib
+ $(RUSTC) bar-dynamic.rs --crate-name=bar --crate-type=dylib -C prefer-dynamic
+ # rlib preferred over dylib
+ $(RUSTC) foo.rs --extern bar
+ $(call RUN,foo) | $(CGREP) 'static'
+ $(RUSTC) foo.rs --extern bar=$(TMPDIR)/libbar.rlib --extern bar
+ $(call RUN,foo) | $(CGREP) 'static'
+ # explicit --extern overrides pathless
+ $(RUSTC) foo.rs --extern bar=$(call DYLIB,bar) --extern bar
+ $(call RUN,foo) | $(CGREP) 'dynamic'
+ # prefer-dynamic does what it says
+ $(RUSTC) foo.rs --extern bar -C prefer-dynamic
+ $(call RUN,foo) | $(CGREP) 'dynamic'
--- /dev/null
+pub fn f() {
+ println!("dynamic");
+}
--- /dev/null
+pub fn f() {
+ println!("static");
+}
--- /dev/null
+fn main() {
+ bar::f();
+}
// Eventually we should see an argument that looks like `@` as we switch from
// passing literal arguments to passing everything in the file.
+use std::collections::HashSet;
use std::env;
use std::fs::{self, File};
-use std::io::{BufWriter, Write, Read};
-use std::path::PathBuf;
+use std::io::{BufWriter, Write};
+use std::path::{Path, PathBuf};
use std::process::Command;
+fn write_test_case(file: &Path, n: usize) -> HashSet<String> {
+ let mut libs = HashSet::new();
+ let mut f = BufWriter::new(File::create(&file).unwrap());
+ let mut prefix = String::new();
+ for _ in 0..n {
+ prefix.push_str("foo");
+ }
+ for i in 0..n {
+ writeln!(f, "#[link(name = \"S{}{}S\")]", prefix, i).unwrap();
+ libs.insert(format!("{}{}", prefix, i));
+ }
+ writeln!(f, "extern {{}}\nfn main() {{}}").unwrap();
+ f.into_inner().unwrap();
+
+ libs
+}
+
+fn read_linker_args(path: &Path) -> String {
+ let contents = fs::read(path).unwrap();
+ if cfg!(target_env = "msvc") {
+ let mut i = contents.chunks(2).map(|c| {
+ c[0] as u16 | ((c[1] as u16) << 8)
+ });
+ assert_eq!(i.next(), Some(0xfeff), "Expected UTF-16 BOM");
+ String::from_utf16(&i.collect::<Vec<u16>>()).unwrap()
+ } else {
+ String::from_utf8(contents).unwrap()
+ }
+}
+
fn main() {
let tmpdir = PathBuf::from(env::var_os("TMPDIR").unwrap());
let ok = tmpdir.join("ok");
for i in (1..).map(|i| i * 100) {
println!("attempt: {}", i);
let file = tmpdir.join("bar.rs");
- let mut f = BufWriter::new(File::create(&file).unwrap());
- let mut lib_name = String::new();
- for _ in 0..i {
- lib_name.push_str("foo");
- }
- for j in 0..i {
- writeln!(f, "#[link(name = \"{}{}\")]", lib_name, j).unwrap();
- }
- writeln!(f, "extern {{}}\nfn main() {{}}").unwrap();
- f.into_inner().unwrap();
+ let mut expected_libs = write_test_case(&file, i);
drop(fs::remove_file(&ok));
let output = Command::new(&rustc)
continue
}
- let mut contents = Vec::new();
- File::open(&ok).unwrap().read_to_end(&mut contents).unwrap();
-
- for j in 0..i {
- let exp = format!("{}{}", lib_name, j);
- let exp = if cfg!(target_env = "msvc") {
- let mut out = Vec::with_capacity(exp.len() * 2);
- for c in exp.encode_utf16() {
- // encode in little endian
- out.push(c as u8);
- out.push((c >> 8) as u8);
- }
- out
- } else {
- exp.into_bytes()
- };
- assert!(contents.windows(exp.len()).any(|w| w == &exp[..]));
+ let linker_args = read_linker_args(&ok);
+ for arg in linker_args.split('S') {
+ expected_libs.remove(arg);
}
+ assert!(
+ expected_libs.is_empty(),
"expected but missing libraries: {:#?}\nlinker arguments:\n{}",
+ expected_libs,
+ linker_args,
+ );
+
break
}
}
-include ../tools.mk
all: extern_absolute_paths.rs krate2
- $(RUSTC) extern_absolute_paths.rs -Zsave-analysis --edition=2018 \
- -Z unstable-options --extern krate2
+ $(RUSTC) extern_absolute_paths.rs -Zsave-analysis --edition=2018 --extern krate2
cat $(TMPDIR)/save-analysis/extern_absolute_paths.json | "$(PYTHON)" validate_json.py
krate2: krate2.rs
// aux-build:use_crate_2.rs
// build-aux-docs
// edition:2018
-// compile-flags:--extern use_crate --extern use_crate_2 -Z unstable-options
+// compile-flags:--extern use_crate --extern use_crate_2
// During the buildup to Rust 2018, rustdoc would eagerly inline `pub use some_crate;` as if it
// were a module, so we changed it to make `pub use`ing crate roots remain as a `pub use` statement
#![feature(rustc_private)]
extern crate syntax;
+extern crate rustc_errors;
+use rustc_errors::PResult;
use syntax::ast::*;
use syntax::attr::*;
use syntax::ast;
use syntax::sess::ParseSess;
use syntax::source_map::{FilePathMapping, FileName};
use syntax::parse;
-use syntax::parse::PResult;
use syntax::parse::new_parser_from_source_str;
use syntax::parse::parser::Parser;
-use syntax::parse::token;
+use syntax::token;
use syntax::ptr::P;
use syntax::parse::parser::attr::*;
use syntax::print::pprust;
extern crate rustc;
extern crate rustc_driver;
-use syntax::parse::token::{self, Token};
+use syntax::token::{self, Token};
use syntax::tokenstream::{TokenTree, TokenStream};
use syntax_expand::base::{ExtCtxt, MacResult, DummyResult, MacEager};
use syntax_pos::Span;
--- /dev/null
+// ignore-stage1
+// edition:2018
+// compile-flags:--extern rustc
+
+// Test that `--extern rustc` fails with `rustc_private`.
+
+pub use rustc;
+//~^ ERROR use of unstable library feature 'rustc_private'
+
+fn main() {}
--- /dev/null
+error[E0658]: use of unstable library feature 'rustc_private': this crate is being loaded from the sysroot, an unstable location; did you mean to load this crate from crates.io via `Cargo.toml` instead?
+ --> $DIR/pathless-extern-unstable.rs:7:9
+ |
+LL | pub use rustc;
+ | ^^^^^
+ |
+ = note: for more information, see https://github.com/rust-lang/rust/issues/27812
+ = help: add `#![feature(rustc_private)]` to the crate attributes to enable
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0658`.
// this struct contains 8 i64's, while only 6 can be passed in registers.
#[cfg(target_arch = "x86_64")]
+#[repr(C)]
#[derive(PartialEq, Eq, Debug)]
pub struct LargeStruct(i64, i64, i64, i64, i64, i64, i64, i64);
#![feature(repr_align)]
-#[repr(align(16))]
+#[repr(align(16), C)]
pub struct A(i64);
pub extern "C" fn foo(x: A) {}
// while those two fields were at the same offset (which is impossible).
// That is, memory ordering of `(X, ())`, but offsets of `((), X)`.
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// edition:2018
async fn foo<F>(_: &(), _: F) {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
#![feature(arbitrary_self_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
use std::sync::Arc;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
// This is a regression test to ensure that simple bindings (where replacement arguments aren't
// edition:2018
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
async fn print_dur() {}
error[E0728]: `await` is only allowed inside `async` functions and blocks
- --> $DIR/issue-62009-1.rs:8:5
+ --> $DIR/issue-62009-1.rs:7:5
|
LL | fn main() {
| ---- this is not `async`
| ^^^^^^^^^^^^^^^^^^^^^^^ only allowed inside `async` functions and blocks
error[E0728]: `await` is only allowed inside `async` functions and blocks
- --> $DIR/issue-62009-1.rs:10:5
+ --> $DIR/issue-62009-1.rs:9:5
|
LL | fn main() {
| ---- this is not `async`
| |___________^ only allowed inside `async` functions and blocks
error[E0728]: `await` is only allowed inside `async` functions and blocks
- --> $DIR/issue-62009-1.rs:14:5
+ --> $DIR/issue-62009-1.rs:13:5
|
LL | fn main() {
| ---- this is not `async`
LL | (|_| 2333).await;
| ^^^^^^^^^^^^^^^^ only allowed inside `async` functions and blocks
-error[E0277]: the trait bound `[closure@$DIR/issue-62009-1.rs:14:5: 14:15]: std::future::Future` is not satisfied
- --> $DIR/issue-62009-1.rs:14:5
+error[E0277]: the trait bound `[closure@$DIR/issue-62009-1.rs:13:5: 13:15]: std::future::Future` is not satisfied
+ --> $DIR/issue-62009-1.rs:13:5
|
LL | (|_| 2333).await;
- | ^^^^^^^^^^^^^^^^ the trait `std::future::Future` is not implemented for `[closure@$DIR/issue-62009-1.rs:14:5: 14:15]`
+ | ^^^^^^^^^^^^^^^^ the trait `std::future::Future` is not implemented for `[closure@$DIR/issue-62009-1.rs:13:5: 13:15]`
|
::: $SRC_DIR/libstd/future.rs:LL:COL
|
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// rust-lang/rust#55552: The strategy pnkfelix landed in PR #55274
// (for ensuring that NLL respects user-provided lifetime annotations)
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn test<F: Fn(&u64, &u64)>(f: F) {}
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::thread;
use std::sync::mpsc::channel;
error[E0277]: `std::sync::mpsc::Receiver<()>` cannot be shared between threads safely
- --> $DIR/closure-move-sync.rs:8:13
+ --> $DIR/closure-move-sync.rs:7:13
|
LL | let t = thread::spawn(|| {
| ^^^^^^^^^^^^^ `std::sync::mpsc::Receiver<()>` cannot be shared between threads safely
|
= help: the trait `std::marker::Sync` is not implemented for `std::sync::mpsc::Receiver<()>`
= note: required because of the requirements on the impl of `std::marker::Send` for `&std::sync::mpsc::Receiver<()>`
- = note: required because it appears within the type `[closure@$DIR/closure-move-sync.rs:8:27: 11:6 recv:&std::sync::mpsc::Receiver<()>]`
+ = note: required because it appears within the type `[closure@$DIR/closure-move-sync.rs:7:27: 10:6 recv:&std::sync::mpsc::Receiver<()>]`
error[E0277]: `std::sync::mpsc::Sender<()>` cannot be shared between threads safely
- --> $DIR/closure-move-sync.rs:20:5
+ --> $DIR/closure-move-sync.rs:19:5
|
LL | thread::spawn(|| tx.send(()).unwrap());
| ^^^^^^^^^^^^^ `std::sync::mpsc::Sender<()>` cannot be shared between threads safely
|
= help: the trait `std::marker::Sync` is not implemented for `std::sync::mpsc::Sender<()>`
= note: required because of the requirements on the impl of `std::marker::Send` for `&std::sync::mpsc::Sender<()>`
- = note: required because it appears within the type `[closure@$DIR/closure-move-sync.rs:20:19: 20:42 tx:&std::sync::mpsc::Sender<()>]`
+ = note: required because it appears within the type `[closure@$DIR/closure-move-sync.rs:19:19: 19:42 tx:&std::sync::mpsc::Sender<()>]`
error: aborting due to 2 previous errors
--> $DIR/ub-wide-ptr.rs:107:1
|
LL | const SLICE_LENGTH_UNINIT: &[u8] = unsafe { SliceTransmute { addr: 42 }.slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized data in wide pointer metadata
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered undefined pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
--> $DIR/ub-wide-ptr.rs:133:1
|
LL | const RAW_SLICE_LENGTH_UNINIT: *const [u8] = unsafe { SliceTransmute { addr: 42 }.raw_slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized data in wide pointer metadata
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered undefined pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
-// ignore-musl
-// ignore-x86
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// error-pattern: cycle detected
struct Foo {
error[E0391]: cycle detected when const-evaluating + checking `Foo::bytes::{{constant}}#0`
- --> $DIR/const-size_of-cycle.rs:6:17
+ --> $DIR/const-size_of-cycle.rs:5:17
|
LL | bytes: [u8; std::mem::size_of::<Foo>()]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: ...which requires const-evaluating + checking `Foo::bytes::{{constant}}#0`...
- --> $DIR/const-size_of-cycle.rs:6:17
+ --> $DIR/const-size_of-cycle.rs:5:17
|
LL | bytes: [u8; std::mem::size_of::<Foo>()]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
= note: ...which requires normalizing `ParamEnvAnd { param_env: ParamEnv { caller_bounds: [], reveal: All, def_id: None }, value: [u8; _] }`...
= note: ...which again requires const-evaluating + checking `Foo::bytes::{{constant}}#0`, completing the cycle
note: cycle used when processing `Foo`
- --> $DIR/const-size_of-cycle.rs:5:1
+ --> $DIR/const-size_of-cycle.rs:4:1
|
LL | struct Foo {
| ^^^^^^^^^^
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {
assert_eq!(&mut [0; 1][..], &mut []);
--- /dev/null
+// run-pass
+
+// From https://github.com/rust-lang/rust/issues/65727
+
+const _: &i32 = {
+ let x = &(5, false).0;
+ x
+};
+
+fn main() {
+ let _: &'static i32 = &(5, false).0;
+}
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::clone::Clone` is not satisfied
- --> $DIR/derives-span-Clone-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-Clone-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^ the trait `std::clone::Clone` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::clone::Clone` is not satisfied
- --> $DIR/derives-span-Clone-enum.rs:9:6
+ --> $DIR/derives-span-Clone-enum.rs:10:6
|
LL | Error
| ^^^^^ the trait `std::clone::Clone` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::clone::Clone` is not satisfied
- --> $DIR/derives-span-Clone-struct.rs:8:5
+ --> $DIR/derives-span-Clone-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ the trait `std::clone::Clone` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::clone::Clone` is not satisfied
- --> $DIR/derives-span-Clone-tuple-struct.rs:8:5
+ --> $DIR/derives-span-Clone-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ the trait `std::clone::Clone` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: `Error` doesn't implement `std::fmt::Debug`
- --> $DIR/derives-span-Debug-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-Debug-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^ `Error` cannot be formatted using `{:?}`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: `Error` doesn't implement `std::fmt::Debug`
- --> $DIR/derives-span-Debug-enum.rs:9:6
+ --> $DIR/derives-span-Debug-enum.rs:10:6
|
LL | Error
| ^^^^^ `Error` cannot be formatted using `{:?}`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: `Error` doesn't implement `std::fmt::Debug`
- --> $DIR/derives-span-Debug-struct.rs:8:5
+ --> $DIR/derives-span-Debug-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ `Error` cannot be formatted using `{:?}`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: `Error` doesn't implement `std::fmt::Debug`
- --> $DIR/derives-span-Debug-tuple-struct.rs:8:5
+ --> $DIR/derives-span-Debug-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ `Error` cannot be formatted using `{:?}`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::default::Default` is not satisfied
- --> $DIR/derives-span-Default-struct.rs:8:5
+ --> $DIR/derives-span-Default-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ the trait `std::default::Default` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::default::Default` is not satisfied
- --> $DIR/derives-span-Default-tuple-struct.rs:8:5
+ --> $DIR/derives-span-Default-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ the trait `std::default::Default` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Eq` is not satisfied
- --> $DIR/derives-span-Eq-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-Eq-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^ the trait `std::cmp::Eq` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Eq` is not satisfied
- --> $DIR/derives-span-Eq-enum.rs:9:6
+ --> $DIR/derives-span-Eq-enum.rs:10:6
|
LL | Error
| ^^^^^ the trait `std::cmp::Eq` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Eq` is not satisfied
- --> $DIR/derives-span-Eq-struct.rs:8:5
+ --> $DIR/derives-span-Eq-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ the trait `std::cmp::Eq` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Eq` is not satisfied
- --> $DIR/derives-span-Eq-tuple-struct.rs:8:5
+ --> $DIR/derives-span-Eq-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ the trait `std::cmp::Eq` is not implemented for `Error`
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::hash::Hash` is not satisfied
- --> $DIR/derives-span-Hash-enum-struct-variant.rs:11:6
+ --> $DIR/derives-span-Hash-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^ the trait `std::hash::Hash` is not implemented for `Error`
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::hash::Hash` is not satisfied
- --> $DIR/derives-span-Hash-enum.rs:11:6
+ --> $DIR/derives-span-Hash-enum.rs:10:6
|
LL | Error
| ^^^^^ the trait `std::hash::Hash` is not implemented for `Error`
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::hash::Hash` is not satisfied
- --> $DIR/derives-span-Hash-struct.rs:10:5
+ --> $DIR/derives-span-Hash-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ the trait `std::hash::Hash` is not implemented for `Error`
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0277]: the trait bound `Error: std::hash::Hash` is not satisfied
- --> $DIR/derives-span-Hash-tuple-struct.rs:10:5
+ --> $DIR/derives-span-Hash-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ the trait `std::hash::Hash` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(Eq,PartialOrd,PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Ord` is not satisfied
- --> $DIR/derives-span-Ord-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-Ord-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^ the trait `std::cmp::Ord` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(Eq,PartialOrd,PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Ord` is not satisfied
- --> $DIR/derives-span-Ord-enum.rs:9:6
+ --> $DIR/derives-span-Ord-enum.rs:10:6
|
LL | Error
| ^^^^^ the trait `std::cmp::Ord` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(Eq,PartialOrd,PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Ord` is not satisfied
- --> $DIR/derives-span-Ord-struct.rs:8:5
+ --> $DIR/derives-span-Ord-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ the trait `std::cmp::Ord` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(Eq,PartialOrd,PartialEq)]
error[E0277]: the trait bound `Error: std::cmp::Ord` is not satisfied
- --> $DIR/derives-span-Ord-tuple-struct.rs:8:5
+ --> $DIR/derives-span-Ord-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ the trait `std::cmp::Ord` is not implemented for `Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0369]: binary operation `==` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-PartialEq-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^
= note: an implementation of `std::cmp::PartialEq` might be missing for `Error`
error[E0369]: binary operation `!=` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-PartialEq-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0369]: binary operation `==` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-enum.rs:9:6
+ --> $DIR/derives-span-PartialEq-enum.rs:10:6
|
LL | Error
| ^^^^^
= note: an implementation of `std::cmp::PartialEq` might be missing for `Error`
error[E0369]: binary operation `!=` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-enum.rs:9:6
+ --> $DIR/derives-span-PartialEq-enum.rs:10:6
|
LL | Error
| ^^^^^
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0369]: binary operation `==` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-struct.rs:8:5
+ --> $DIR/derives-span-PartialEq-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^
= note: an implementation of `std::cmp::PartialEq` might be missing for `Error`
error[E0369]: binary operation `!=` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-struct.rs:8:5
+ --> $DIR/derives-span-PartialEq-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
error[E0369]: binary operation `==` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-tuple-struct.rs:8:5
+ --> $DIR/derives-span-PartialEq-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^
= note: an implementation of `std::cmp::PartialEq` might be missing for `Error`
error[E0369]: binary operation `!=` cannot be applied to type `Error`
- --> $DIR/derives-span-PartialEq-tuple-struct.rs:8:5
+ --> $DIR/derives-span-PartialEq-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: can't compare `Error` with `Error`
- --> $DIR/derives-span-PartialOrd-enum-struct-variant.rs:9:6
+ --> $DIR/derives-span-PartialOrd-enum-struct-variant.rs:10:6
|
LL | x: Error
| ^^^^^^^^ no implementation for `Error < Error` and `Error > Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: can't compare `Error` with `Error`
- --> $DIR/derives-span-PartialOrd-enum.rs:9:6
+ --> $DIR/derives-span-PartialOrd-enum.rs:10:6
|
LL | Error
| ^^^^^ no implementation for `Error < Error` and `Error > Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: can't compare `Error` with `Error`
- --> $DIR/derives-span-PartialOrd-struct.rs:8:5
+ --> $DIR/derives-span-PartialOrd-struct.rs:9:5
|
LL | x: Error
| ^^^^^^^^ no implementation for `Error < Error` and `Error > Error`
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
#[derive(PartialEq)]
error[E0277]: can't compare `Error` with `Error`
- --> $DIR/derives-span-PartialOrd-tuple-struct.rs:8:5
+ --> $DIR/derives-span-PartialOrd-tuple-struct.rs:9:5
|
LL | Error
| ^^^^^ no implementation for `Error < Error` and `Error > Error`
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// rust-lang/rust#56327: Some occurrences of `dyn` within a macro are
// not instances of identifiers, and thus should *not* be caught by the
}
pub fn main() {
- unsafe {
+ assert_eq!(discriminant_value(&CLike1::A), 0);
+ assert_eq!(discriminant_value(&CLike1::B), 1);
+ assert_eq!(discriminant_value(&CLike1::C), 2);
+ assert_eq!(discriminant_value(&CLike1::D), 3);
- assert_eq!(discriminant_value(&CLike1::A), 0);
- assert_eq!(discriminant_value(&CLike1::B), 1);
- assert_eq!(discriminant_value(&CLike1::C), 2);
- assert_eq!(discriminant_value(&CLike1::D), 3);
+ assert_eq!(discriminant_value(&CLike2::A), 5);
+ assert_eq!(discriminant_value(&CLike2::B), 2);
+ assert_eq!(discriminant_value(&CLike2::C), 19);
+ assert_eq!(discriminant_value(&CLike2::D), 20);
- assert_eq!(discriminant_value(&CLike2::A), 5);
- assert_eq!(discriminant_value(&CLike2::B), 2);
- assert_eq!(discriminant_value(&CLike2::C), 19);
- assert_eq!(discriminant_value(&CLike2::D), 20);
+ assert_eq!(discriminant_value(&CLike3::A), 5);
+ assert_eq!(discriminant_value(&CLike3::B), 6);
+ assert_eq!(discriminant_value(&CLike3::C), -1_i8 as u64);
+ assert_eq!(discriminant_value(&CLike3::D), 0);
- assert_eq!(discriminant_value(&CLike3::A), 5);
- assert_eq!(discriminant_value(&CLike3::B), 6);
- assert_eq!(discriminant_value(&CLike3::C), -1_i8 as u64);
- assert_eq!(discriminant_value(&CLike3::D), 0);
+ assert_eq!(discriminant_value(&ADT::First(0,0)), 0);
+ assert_eq!(discriminant_value(&ADT::Second(5)), 1);
- assert_eq!(discriminant_value(&ADT::First(0,0)), 0);
- assert_eq!(discriminant_value(&ADT::Second(5)), 1);
+ assert_eq!(discriminant_value(&NullablePointer::Nothing), 1);
+ assert_eq!(discriminant_value(&NullablePointer::Something(&CONST)), 0);
- assert_eq!(discriminant_value(&NullablePointer::Nothing), 1);
- assert_eq!(discriminant_value(&NullablePointer::Something(&CONST)), 0);
+ assert_eq!(discriminant_value(&10), 0);
+ assert_eq!(discriminant_value(&"test"), 0);
- assert_eq!(discriminant_value(&10), 0);
- assert_eq!(discriminant_value(&"test"), 0);
-
- assert_eq!(3, discriminant_value(&Mixed::Unit));
- assert_eq!(2, discriminant_value(&Mixed::Tuple(5)));
- assert_eq!(1, discriminant_value(&Mixed::Struct{a: 7, b: 11}));
- }
+ assert_eq!(3, discriminant_value(&Mixed::Unit));
+ assert_eq!(2, discriminant_value(&Mixed::Tuple(5)));
+ assert_eq!(1, discriminant_value(&Mixed::Struct{a: 7, b: 11}));
}
// `#![macro_escape]` is incompatible with crate-level `#![macro_use]`
// already present in issue-43106-gating-of-builtin-attrs.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![macro_escape]
//~^ WARN macro_escape is a deprecated synonym for macro_use
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gated-feature-in-macro-arg.rs:8:9
+ --> $DIR/feature-gated-feature-in-macro-arg.rs:8:16
|
-LL | / extern "rust-intrinsic" {
-LL | | fn atomic_fence();
-LL | | }
- | |_________^
+LL | extern "rust-intrinsic" {
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi-msp430-interrupt.rs:4:1
+ --> $DIR/feature-gate-abi-msp430-interrupt.rs:4:8
|
LL | extern "msp430-interrupt" fn foo() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-abi.rs:13:1
+ --> $DIR/feature-gate-abi.rs:13:8
|
LL | extern "rust-intrinsic" fn f1() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: platform intrinsics are experimental and possibly buggy
- --> $DIR/feature-gate-abi.rs:15:1
+ --> $DIR/feature-gate-abi.rs:15:8
|
LL | extern "platform-intrinsic" fn f2() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/27731
= help: add `#![feature(platform_intrinsics)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:17:1
+ --> $DIR/feature-gate-abi.rs:17:8
|
LL | extern "vectorcall" fn f3() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:18:1
+ --> $DIR/feature-gate-abi.rs:18:8
|
LL | extern "rust-call" fn f4() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:19:1
+ --> $DIR/feature-gate-abi.rs:19:8
|
LL | extern "msp430-interrupt" fn f5() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:20:1
+ --> $DIR/feature-gate-abi.rs:20:8
|
LL | extern "ptx-kernel" fn f6() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:21:1
+ --> $DIR/feature-gate-abi.rs:21:8
|
LL | extern "x86-interrupt" fn f7() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:22:1
+ --> $DIR/feature-gate-abi.rs:22:8
|
LL | extern "thiscall" fn f8() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:23:1
+ --> $DIR/feature-gate-abi.rs:23:8
|
LL | extern "amdgpu-kernel" fn f9() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:24:1
+ --> $DIR/feature-gate-abi.rs:24:8
|
LL | extern "efiapi" fn f10() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-abi.rs:28:5
+ --> $DIR/feature-gate-abi.rs:28:12
|
LL | extern "rust-intrinsic" fn m1();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: platform intrinsics are experimental and possibly buggy
- --> $DIR/feature-gate-abi.rs:30:5
+ --> $DIR/feature-gate-abi.rs:30:12
|
LL | extern "platform-intrinsic" fn m2();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/27731
= help: add `#![feature(platform_intrinsics)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:32:5
+ --> $DIR/feature-gate-abi.rs:32:12
|
LL | extern "vectorcall" fn m3();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:33:5
+ --> $DIR/feature-gate-abi.rs:33:12
|
LL | extern "rust-call" fn m4();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:34:5
+ --> $DIR/feature-gate-abi.rs:34:12
|
LL | extern "msp430-interrupt" fn m5();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:35:5
+ --> $DIR/feature-gate-abi.rs:35:12
|
LL | extern "ptx-kernel" fn m6();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:36:5
+ --> $DIR/feature-gate-abi.rs:36:12
|
LL | extern "x86-interrupt" fn m7();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:37:5
+ --> $DIR/feature-gate-abi.rs:37:12
|
LL | extern "thiscall" fn m8();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:38:5
+ --> $DIR/feature-gate-abi.rs:38:12
|
LL | extern "amdgpu-kernel" fn m9();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:39:5
+ --> $DIR/feature-gate-abi.rs:39:12
|
LL | extern "efiapi" fn m10();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:41:5
+ --> $DIR/feature-gate-abi.rs:41:12
|
LL | extern "vectorcall" fn dm3() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:42:5
+ --> $DIR/feature-gate-abi.rs:42:12
|
LL | extern "rust-call" fn dm4() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:43:5
+ --> $DIR/feature-gate-abi.rs:43:12
|
LL | extern "msp430-interrupt" fn dm5() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:44:5
+ --> $DIR/feature-gate-abi.rs:44:12
|
LL | extern "ptx-kernel" fn dm6() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:45:5
+ --> $DIR/feature-gate-abi.rs:45:12
|
LL | extern "x86-interrupt" fn dm7() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:46:5
+ --> $DIR/feature-gate-abi.rs:46:12
|
LL | extern "thiscall" fn dm8() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:47:5
+ --> $DIR/feature-gate-abi.rs:47:12
|
LL | extern "amdgpu-kernel" fn dm9() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:48:5
+ --> $DIR/feature-gate-abi.rs:48:12
|
LL | extern "efiapi" fn dm10() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-abi.rs:55:5
+ --> $DIR/feature-gate-abi.rs:55:12
|
LL | extern "rust-intrinsic" fn m1() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: platform intrinsics are experimental and possibly buggy
- --> $DIR/feature-gate-abi.rs:57:5
+ --> $DIR/feature-gate-abi.rs:57:12
|
LL | extern "platform-intrinsic" fn m2() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/27731
= help: add `#![feature(platform_intrinsics)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:59:5
+ --> $DIR/feature-gate-abi.rs:59:12
|
LL | extern "vectorcall" fn m3() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:60:5
+ --> $DIR/feature-gate-abi.rs:60:12
|
LL | extern "rust-call" fn m4() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:61:5
+ --> $DIR/feature-gate-abi.rs:61:12
|
LL | extern "msp430-interrupt" fn m5() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:62:5
+ --> $DIR/feature-gate-abi.rs:62:12
|
LL | extern "ptx-kernel" fn m6() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:63:5
+ --> $DIR/feature-gate-abi.rs:63:12
|
LL | extern "x86-interrupt" fn m7() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:64:5
+ --> $DIR/feature-gate-abi.rs:64:12
|
LL | extern "thiscall" fn m8() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:65:5
+ --> $DIR/feature-gate-abi.rs:65:12
|
LL | extern "amdgpu-kernel" fn m9() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:66:5
+ --> $DIR/feature-gate-abi.rs:66:12
|
LL | extern "efiapi" fn m10() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-abi.rs:71:5
+ --> $DIR/feature-gate-abi.rs:71:12
|
LL | extern "rust-intrinsic" fn im1() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: platform intrinsics are experimental and possibly buggy
- --> $DIR/feature-gate-abi.rs:73:5
+ --> $DIR/feature-gate-abi.rs:73:12
|
LL | extern "platform-intrinsic" fn im2() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/27731
= help: add `#![feature(platform_intrinsics)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:75:5
+ --> $DIR/feature-gate-abi.rs:75:12
|
LL | extern "vectorcall" fn im3() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:76:5
+ --> $DIR/feature-gate-abi.rs:76:12
|
LL | extern "rust-call" fn im4() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:77:5
+ --> $DIR/feature-gate-abi.rs:77:12
|
LL | extern "msp430-interrupt" fn im5() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:78:5
+ --> $DIR/feature-gate-abi.rs:78:12
|
LL | extern "ptx-kernel" fn im6() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:79:5
+ --> $DIR/feature-gate-abi.rs:79:12
|
LL | extern "x86-interrupt" fn im7() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:80:5
+ --> $DIR/feature-gate-abi.rs:80:12
|
LL | extern "thiscall" fn im8() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:81:5
+ --> $DIR/feature-gate-abi.rs:81:12
|
LL | extern "amdgpu-kernel" fn im9() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:82:5
+ --> $DIR/feature-gate-abi.rs:82:12
|
LL | extern "efiapi" fn im10() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-abi.rs:86:11
+ --> $DIR/feature-gate-abi.rs:86:18
|
LL | type A1 = extern "rust-intrinsic" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: platform intrinsics are experimental and possibly buggy
- --> $DIR/feature-gate-abi.rs:87:11
+ --> $DIR/feature-gate-abi.rs:87:18
|
LL | type A2 = extern "platform-intrinsic" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/27731
= help: add `#![feature(platform_intrinsics)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:88:11
+ --> $DIR/feature-gate-abi.rs:88:18
|
LL | type A3 = extern "vectorcall" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:89:11
+ --> $DIR/feature-gate-abi.rs:89:18
|
LL | type A4 = extern "rust-call" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:90:11
+ --> $DIR/feature-gate-abi.rs:90:18
|
LL | type A5 = extern "msp430-interrupt" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:91:11
+ --> $DIR/feature-gate-abi.rs:91:18
|
LL | type A6 = extern "ptx-kernel" fn ();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:92:11
+ --> $DIR/feature-gate-abi.rs:92:18
|
LL | type A7 = extern "x86-interrupt" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:93:11
+ --> $DIR/feature-gate-abi.rs:93:18
|
LL | type A8 = extern "thiscall" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:94:11
+ --> $DIR/feature-gate-abi.rs:94:18
|
LL | type A9 = extern "amdgpu-kernel" fn();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:95:12
+ --> $DIR/feature-gate-abi.rs:95:19
|
LL | type A10 = extern "efiapi" fn();
- | ^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-abi.rs:98:1
+ --> $DIR/feature-gate-abi.rs:98:8
|
LL | extern "rust-intrinsic" {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: platform intrinsics are experimental and possibly buggy
- --> $DIR/feature-gate-abi.rs:99:1
+ --> $DIR/feature-gate-abi.rs:99:8
|
LL | extern "platform-intrinsic" {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/27731
= help: add `#![feature(platform_intrinsics)]` to the crate attributes to enable
error[E0658]: vectorcall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:100:1
+ --> $DIR/feature-gate-abi.rs:100:8
|
LL | extern "vectorcall" {}
- | ^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_vectorcall)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-abi.rs:101:1
+ --> $DIR/feature-gate-abi.rs:101:8
|
LL | extern "rust-call" {}
- | ^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: msp430-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:102:1
+ --> $DIR/feature-gate-abi.rs:102:8
|
LL | extern "msp430-interrupt" {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38487
= help: add `#![feature(abi_msp430_interrupt)]` to the crate attributes to enable
error[E0658]: PTX ABIs are experimental and subject to change
- --> $DIR/feature-gate-abi.rs:103:1
+ --> $DIR/feature-gate-abi.rs:103:8
|
LL | extern "ptx-kernel" {}
- | ^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/38788
= help: add `#![feature(abi_ptx)]` to the crate attributes to enable
error[E0658]: x86-interrupt ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:104:1
+ --> $DIR/feature-gate-abi.rs:104:8
|
LL | extern "x86-interrupt" {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/40180
= help: add `#![feature(abi_x86_interrupt)]` to the crate attributes to enable
error[E0658]: thiscall is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:105:1
+ --> $DIR/feature-gate-abi.rs:105:8
|
LL | extern "thiscall" {}
- | ^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^
|
= help: add `#![feature(abi_thiscall)]` to the crate attributes to enable
error[E0658]: amdgpu-kernel ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:106:1
+ --> $DIR/feature-gate-abi.rs:106:8
|
LL | extern "amdgpu-kernel" {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/51575
= help: add `#![feature(abi_amdgpu_kernel)]` to the crate attributes to enable
error[E0658]: efiapi ABI is experimental and subject to change
- --> $DIR/feature-gate-abi.rs:107:1
+ --> $DIR/feature-gate-abi.rs:107:8
|
LL | extern "efiapi" {}
- | ^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/65815
= help: add `#![feature(abi_efiapi)]` to the crate attributes to enable
error[E0658]: unadjusted ABI is an implementation detail and perma-unstable
- --> $DIR/feature-gate-abi_unadjusted.rs:1:1
+ --> $DIR/feature-gate-abi_unadjusted.rs:1:8
|
-LL | / extern "unadjusted" fn foo() {
-LL | |
-LL | | }
- | |_^
+LL | extern "unadjusted" fn foo() {
+ | ^^^^^^^^^^^^
|
= help: add `#![feature(abi_unadjusted)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-intrinsics.rs:1:1
+ --> $DIR/feature-gate-intrinsics.rs:1:8
|
-LL | / extern "rust-intrinsic" {
-LL | | fn bar();
-LL | | }
- | |_^
+LL | extern "rust-intrinsic" {
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
error[E0658]: intrinsics are subject to change
- --> $DIR/feature-gate-intrinsics.rs:5:1
+ --> $DIR/feature-gate-intrinsics.rs:5:8
|
LL | extern "rust-intrinsic" fn baz() {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^
|
= help: add `#![feature(intrinsics)]` to the crate attributes to enable
+++ /dev/null
-// Test that `#[rustc_on_unimplemented]` is gated by `on_unimplemented` feature
-// gate.
-
-#[rustc_on_unimplemented = "test error `{Self}` with `{Bar}`"]
-//~^ ERROR the `#[rustc_on_unimplemented]` attribute is an experimental feature
-trait Foo<Bar>
-{}
-
-fn main() {}
+++ /dev/null
-error[E0658]: the `#[rustc_on_unimplemented]` attribute is an experimental feature
- --> $DIR/feature-gate-on-unimplemented.rs:4:1
- |
-LL | #[rustc_on_unimplemented = "test error `{Self}` with `{Bar}`"]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: for more information, see https://github.com/rust-lang/rust/issues/29628
- = help: add `#![feature(on_unimplemented)]` to the crate attributes to enable
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0658`.
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:11:5
+ --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:11:12
|
LL | extern "rust-call" fn call(self, args: ()) -> () {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:17:5
+ --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:17:12
|
LL | extern "rust-call" fn call_once(self, args: ()) -> () {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:23:5
+ --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:23:12
|
LL | extern "rust-call" fn call_mut(&self, args: ()) -> () {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:29:5
+ --> $DIR/feature-gate-unboxed-closures-manual-impls.rs:29:12
|
LL | extern "rust-call" fn call_once(&self, args: ()) -> () {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
error[E0658]: rust-call ABI is subject to change
- --> $DIR/feature-gate-unboxed-closures.rs:9:5
+ --> $DIR/feature-gate-unboxed-closures.rs:9:12
|
-LL | / extern "rust-call" fn call_once(self, (a, b): (u32, u32)) -> u32 {
-LL | | a + b
-LL | | }
- | |_____^
+LL | extern "rust-call" fn call_once(self, (a, b): (u32, u32)) -> u32 {
+ | ^^^^^^^^^^^
|
= note: for more information, see https://github.com/rust-lang/rust/issues/29625
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
// but which encountered the same ICE/error. See `issue-53548.rs`
// for details.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::cell::RefCell;
use std::rc::Rc;
// also analogous to what we would do for higher-ranked regions
// appearing within the trait in other positions).
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(generators)]
-// ignore-musl
-// ignore-x86
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::fmt::Debug;
error[E0643]: method `foo` has incompatible signature for trait
- --> $DIR/impl-generic-mismatch.rs:11:12
+ --> $DIR/impl-generic-mismatch.rs:10:12
|
LL | fn foo(&self, _: &impl Debug);
| ---------- declaration in trait here
| -- ^^^^^^^^^^
error[E0643]: method `bar` has incompatible signature for trait
- --> $DIR/impl-generic-mismatch.rs:20:23
+ --> $DIR/impl-generic-mismatch.rs:19:23
|
LL | fn bar<U: Debug>(&self, _: &U);
| - declaration in trait here
| ^^^^^^^^^^ ^
error[E0643]: method `hash` has incompatible signature for trait
- --> $DIR/impl-generic-mismatch.rs:31:33
+ --> $DIR/impl-generic-mismatch.rs:30:33
|
LL | fn hash(&self, hasher: &mut impl Hasher) {}
| ^^^^^^^^^^^ expected generic parameter, found `impl Trait`
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::iter::once;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Tests for nested self-reference which caused a stack overflow.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![deny(warnings)]
// This used to ICE because it creates an `impl Trait` that captures a
// hidden empty region.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn server() -> impl FilterBase2 {
segment2(|| { loop { } }).map2(|| "")
// opaque type. As all regions are now required to outlive the bound in an
// opaque type we avoid the issue here.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct A<F>(F);
LL | impl std::fmt::Display for MyType4 {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `fmt` in implementation
|
- = note: `fmt` from trait: `fn(&Self, &mut std::fmt::Formatter<'_>) -> std::result::Result<(), std::fmt::Error>`
+ = help: implement the missing item: `fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::result::Result<(), std::fmt::Error> { unimplemented!() }`
error: aborting due to 4 previous errors
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// aux-build:two_macros.rs
macro_rules! define_vec {
error: macro-expanded `extern crate` items cannot shadow names passed with `--extern`
- --> $DIR/extern-prelude-extern-crate-restricted-shadowing.rs:21:9
+ --> $DIR/extern-prelude-extern-crate-restricted-shadowing.rs:20:9
|
LL | extern crate std as core;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
| --------------------- in this macro invocation
error[E0659]: `Vec` is ambiguous (macro-expanded name vs less macro-expanded name from outer scope during import/macro resolution)
- --> $DIR/extern-prelude-extern-crate-restricted-shadowing.rs:15:9
+ --> $DIR/extern-prelude-extern-crate-restricted-shadowing.rs:14:9
|
LL | Vec::panic!();
| ^^^ ambiguous name
|
note: `Vec` could refer to the crate imported here
- --> $DIR/extern-prelude-extern-crate-restricted-shadowing.rs:7:9
+ --> $DIR/extern-prelude-extern-crate-restricted-shadowing.rs:6:9
|
LL | extern crate std as Vec;
| ^^^^^^^^^^^^^^^^^^^^^^^^
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
mod m {
pub struct S(u8);
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// aux-build:issue-55811.rs
extern crate issue_55811;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
use ::std;
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::ops::Deref;
trait Trait {}
error: `impl` item signature doesn't match `trait` item signature
- --> $DIR/mismatched_trait_impl-2.rs:10:5
+ --> $DIR/mismatched_trait_impl-2.rs:9:5
|
LL | fn deref(&self) -> &dyn Trait {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ found fn(&Struct) -> &dyn Trait
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::cell::Cell;
use std::panic::catch_unwind;
fn main() {
error[E0277]: the type `std::cell::UnsafeCell<i32>` may contain interior mutability and a reference may not be safely transferrable across a catch_unwind boundary
- --> $DIR/interior-mutability.rs:7:5
+ --> $DIR/interior-mutability.rs:6:5
|
LL | catch_unwind(|| { x.set(23); });
| ^^^^^^^^^^^^ `std::cell::UnsafeCell<i32>` may contain interior mutability and a reference may not be safely transferrable across a catch_unwind boundary
= help: within `std::cell::Cell<i32>`, the trait `std::panic::RefUnwindSafe` is not implemented for `std::cell::UnsafeCell<i32>`
= note: required because it appears within the type `std::cell::Cell<i32>`
= note: required because of the requirements on the impl of `std::panic::UnwindSafe` for `&std::cell::Cell<i32>`
- = note: required because it appears within the type `[closure@$DIR/interior-mutability.rs:7:18: 7:35 x:&std::cell::Cell<i32>]`
+ = note: required because it appears within the type `[closure@$DIR/interior-mutability.rs:6:18: 6:35 x:&std::cell::Cell<i32>]`
error: aborting due to previous error
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#[derive(Debug)]
enum Foo<'s> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
pub struct Foo;
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#![deny(missing_docs)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Common { fn dummy(&self) { } }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
//! Ensure the private trait Bar isn't complained about.
#![deny(missing_docs)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// #11612
// We weren't updating the auto adjustments with all the resolved
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(unused_attributes)]
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Foo {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
use std::slice;
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// defining static with struct that contains enum
// with &'static str variant used to cause ICE
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_variables)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
pub struct Foo<'a, 'b: 'a> { foo: &'a &'b isize }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Foo {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#![allow(unused_imports, dead_code)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Foo: Sized {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_imports)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#[deny(dead_code)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Reader {}
enum Wrapper<'a> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
pub type BigRat<T = isize> = T;
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(unused_macros)]
#![allow(dead_code)]
#![feature(asm)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#![feature(fn_traits, unboxed_closures)]
+++ /dev/null
-// build-pass (FIXME(62277): could be check-pass?)
-// pretty-expanded FIXME #23616
-
-fn main() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct A<'a> {
a: &'a i32,
struct Empty;
// This used to cause an ICE
+#[allow(improper_ctypes)]
extern "C" fn ice(_a: Empty) {}
fn main() {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
trait MatrixRow { fn dummy(&self) { }}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct Parser<'a, I, O> {
parse: Box<dyn FnMut(I) -> Result<O, String> + 'a>
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
// ignore-cloudabi no std::fs
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(box_syntax)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(unused_must_use)]
#[allow(dead_code)]
fn check(a: &str) {
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code, warnings)]
static mut x: isize = 3;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct A;
impl Drop for A {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Regression test for #17746
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Test that we can parse where clauses on various forms of tuple
// structs.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_imports)]
// These crossed imports should resolve fine, and not block on
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Indexable<T>: std::ops::Index<usize, Output = T> {
fn index2(&self, i: usize) -> &T {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
pub trait Promisable: Send + Sync {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Test that methods in trait impls should override default methods.
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#[derive(Eq, PartialEq, PartialOrd, Ord)]
enum Test<'a> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Tup {
type T0;
type T1;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
pub trait Foo : Send { }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Hasher {
type State;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// regression test for #19097
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Handler {
fn handle(&self, _: &mut String);
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_imports)]
#![deny(unused_qualifications)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Trait<Input> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_variables)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait T {
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(unused_variables)]
use std::any::TypeId;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Base {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_variables)]
// Test that `<Type as Trait>::Output` and `Self::Output` are accepted as type annotations in let
// bindings
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(fn_traits, unboxed_closures)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Check that associated types are `Sized`
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// run-pass
#![allow(stable_features)]
// ignore-cloudabi no processes
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_variables)]
struct Foo;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#![allow(dead_code)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_must_use)]
use std::thread;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// test that autoderef of a type like this does not
// cause compiler to loop. Note that no instances
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(unused_imports)]
#![allow(stable_features)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// ignore-cloudabi no std::fs
// Regression test for #20797.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Subscriber {
type Input;
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Trait where Self::Out: std::fmt::Display {
type Out;
}
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
struct Bar;
impl Bar {
error[E0277]: the trait bound `Bar: std::hash::Hash` is not satisfied
- --> $DIR/issue-21160.rs:10:12
+ --> $DIR/issue-21160.rs:9:12
|
LL | struct Foo(Bar);
| ^^^ the trait `std::hash::Hash` is not implemented for `Bar`
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_variables)]
trait Trait<'a> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Regression test for issue #21245. Check that we are able to infer
// the types in these examples correctly. It used to be that
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#![no_implicit_prelude]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Test that the requirement (in `Bar`) that `T::Bar : 'static` does
// not wind up propagating to `T`.
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_variables)]
-// build-pass (FIXME(62277): could be check-pass?)
+// run-pass
#![allow(stable_features)]
#![feature(cfg_target_feature)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Regression test for #21726: an issue arose around the rules for
// subtyping of projection types that resulted in an unconstrained
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait LineFormatter<'a> {
type Iter: Iterator<Item=&'a str> + 'a;
fn iter(&'a self, line: &'a str) -> Self::Iter;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(type_alias_bounds)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait A<T: A<T>> {}
fn main() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(type_alias_bounds)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// This test is reduced from libsyntax. It is just checking that we
// can successfully deal with a "deep" structure, which the drop-check
// was hitting a recursion limit on at one point.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_variables)]
use std::collections::HashMap;
use std::collections::hash_map::Entry::Vacant;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Test {}
macro_rules! test {
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#[allow(dead_code)]
static X: &'static str = &*"";
fn main() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(non_camel_case_types)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
trait Inner {
type T;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
use std::marker::PhantomData;
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![feature(core_intrinsics)]
#![allow(warnings)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#[derive(PartialEq)]
struct Slice { slice: [u8] }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Regression test for #24085. Errors were occurring in region
// inference due to the requirement that `'a:b'`, which was getting
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#[derive(Copy,Clone)]
struct Functions {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// This resulted in an ICE. Test for future-proofing
// Issue #24227
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait DictLike<'a> {
type ItemsIterator: Iterator<Item=u8>;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct Foo;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// compile-flags:--cfg set1
#![cfg_attr(set1, feature(rustc_attrs))]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#[derive(Debug)]
struct Row<T>([T]);
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
enum Sexpression {
Num(()),
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Tests that impls are allowed to have looser, more permissive bounds
// than the traits require.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
use std::ops::{Deref, DerefMut};
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![deny(unused_attributes)]
#[repr(C)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
pub struct Foo {
x: isize,
}
impl Foo {
+ #[allow(improper_ctypes)]
pub extern fn foo_new() -> Foo {
Foo { x: 21, y: 33 }
}
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
fn main() {
match Some(1) {
None @ _ => {} //~ ERROR match bindings cannot shadow unit variants
error[E0530]: match bindings cannot shadow unit variants
- --> $DIR/issue-27033.rs:5:9
+ --> $DIR/issue-27033.rs:4:9
|
LL | None @ _ => {}
| ^^^^ cannot be named the same as a unit variant
| ---- the unit variant `None` is defined here
error[E0530]: match bindings cannot shadow constants
- --> $DIR/issue-27033.rs:9:9
+ --> $DIR/issue-27033.rs:8:9
|
LL | const C: u8 = 1;
| ---------------- the constant `C` is defined here
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::cell::RefCell;
use std::rc::Rc;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Trait<'a> {
type T;
type U;
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(non_snake_case)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Regression test for issue #27583. Unclear how useful this will be
// going forward, since the issue in question was EXTREMELY sensitive
// to compiler internals (like the precise numbering of nodes), but
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_assignments)]
#![allow(unused_variables)]
// Test that a field can have the same name in different variants
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Minimized version of issue-2804.rs. Both check that callee IDs don't
// clobber the previous node ID in a macro expr
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
use std::rc::Rc;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#[derive(Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)]
struct Array<T> {
f00: [T; 00],
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// #28600 ICE: pub extern fn with parameter type &str inside struct impl
struct Test;
impl Test {
+ #[allow(improper_ctypes)]
#[allow(dead_code)]
#[allow(unused_variables)]
pub extern fn test(val: &str) {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
fn main() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Regression test for #28871. The problem is that rustc encountered
// two ways to project, one from a where clause and one from the where
// clauses on the trait definition. (In fact, in this case, the where
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub type Session = i32;
pub struct StreamParser<'a, T> {
_tokens: T,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub struct Xyz<'a, V> {
pub v: (V, &'a u32),
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#[derive(Debug)]
struct Message<'a, P: 'a = &'a [u8]> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// This test ensures that each pointer type `P<X>` is covariant in `X`.
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(unused_must_use)]
#![allow(dead_code)]
#![allow(unused_mut)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub struct Chan;
pub struct ChanSelect<'c, T> {
chans: Vec<(&'c Chan, T)>,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct S([u8; { struct Z; 0 }]);
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(optin_builtin_traits)]
auto trait NotSame {}
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#[derive(Debug)]
pub struct Config {
pub name: String,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![deny(unused_results)]
#![allow(dead_code)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Regression test for #29740. Inefficient MIR matching algorithms
// generated way too much code for this sort of case, leading to OOM.
-//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {
let mut i = [1, 2, 3];
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
pub struct Struct<K: 'static> {
pub field: K,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_snake_case)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
trait Make {
type Out;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
trait Resources {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![warn(order_dependent_trait_objects)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// only-x86_64
#![allow(dead_code, non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(unused_variables)]
const A: [u32; 1] = [0];
LL | impl PartialOrd for Thing {
| ^^^^^^^^^^^^^^^^^^^^^^^^^ missing `partial_cmp` in implementation
|
- = note: `partial_cmp` from trait: `fn(&Self, &Rhs) -> std::option::Option<std::cmp::Ordering>`
+ = help: implement the missing item: `fn partial_cmp(&self, _: &Rhs) -> std::option::Option<std::cmp::Ordering> { unimplemented!() }`
error: aborting due to previous error
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Issue 33903:
// Built-in indexing should be used even when the index is not
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
struct A {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
// rustc --test ignores2.rs && ./ignores2
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// #34751 ICE: 'rustc' panicked at 'assertion failed: !substs.has_regions_escaping_depth(0)'
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(stable_features)]
#![feature(associated_consts)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(specialization)]
fn main() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// Regression test for #35546. Check that we are able to codegen
// this. Before we had problems because of the drop glue signature
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait Canvas {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
trait DeclarationParser {
type Declaration;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_must_use)]
#![allow(dead_code)]
#![allow(unused_mut)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// Tests for an LLVM abort when storing a lifetime-parametric fn into
// context that is expecting one that is not lifetime-parametric
// (i.e., has no `for <'_>`).
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
+// compile-flags: -Zsave-analysis
#![feature(rustc_attrs)]
#![allow(warnings)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Regression test for #37655. The problem was a false edge created by
// coercion that wound up requiring that `'a` (in `split()`) outlive
// `'b`, which shouldn't be necessary.
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
trait Foo {
fn foo(&self);
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
type A = for<> fn();
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#[repr(u64)]
enum A {
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
#[repr(C)]
pub struct Foo(i128);
+#[allow(improper_ctypes)]
#[no_mangle]
pub extern "C" fn foo(x: Foo) -> Foo { x }
// aux-build:issue-38875-b.rs
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
extern crate issue_38875_b;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
fn f<T: ?for<'a> Sized>() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
macro_rules! expr { () => { () } }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
trait A {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unreachable_code)]
// Regression test for issue #39984.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
macro_rules! m { () => { 0 } }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_mut)]
/*
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused)]
fn f() {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused)]
fn f() {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
macro_rules! m {
($i:meta) => {
#[derive($i)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct Foo;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct Function<T, F> { t: T, f: F }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![deny(dead_code)]
#[used]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// Regression test for #41936. The coerce-unsized trait check in
// coherence was using subtyping, which triggered variance
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct Foo<T>(T);
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(stable_features)]
#![feature(associated_consts)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused)]
macro_rules! column {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
trait Trait {
type Output;
enum Big { A, B }
fn main() {
- unsafe {
- println!("{} {:?}",
- std::intrinsics::discriminant_value(&Big::A),
- std::mem::discriminant(&Big::B));
- }
+ println!("{} {:?}",
+ std::intrinsics::discriminant_value(&Big::A),
+ std::mem::discriminant(&Big::B));
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(unused_variables)]
trait VecN {
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
pub trait Foo<'a> {
type Bar;
fn foo(&'a self) -> Self::Bar;
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass (FIXME(55996): should be run on targets supporting avx)
// only-x86_64
// no-prefer-dynamic
// compile-flags: -Ctarget-feature=+avx -Clto
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
trait T {
type X;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
struct Foo(bool);
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
//! dox
#![deny(missing_docs)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
macro_rules! a {
() => { "a" }
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
use std::ops::Add;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![deny(non_camel_case_types)]
#[allow(dead_code)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
mod my_mod {
#[derive(Clone, Copy, Eq, PartialEq, PartialOrd, Ord, Hash)]
pub struct Name<'a> {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
-#[repr(C,u8)]
+#[repr(C,u8)] //~ WARNING conflicting representation hints
enum Foo {
A,
B,
}
-#[repr(C)]
+#[repr(C)] //~ WARNING conflicting representation hints
#[repr(u8)]
enum Bar {
A,
// See https://github.com/rust-lang/rust/issues/47309
// compile-flags:-Clink-dead-code
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![crate_type="rlib"]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(unused_imports)]
use {{}, {}};
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct AtomicRefMut<'a> {
value: &'a mut i32,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct WithDrop;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct MyStruct<'a> {
field: &'a mut (),
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Tests that automatic coercions from &mut T to *mut T
// allow borrows of T to expire immediately - essentially, that
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(non_upper_case_globals)]
static mut x: &'static u32 = &0;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Regression test for #48551. Covers a case where duplicate candidates
// arose during associated type projection.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn iter<'a>(data: &'a [usize]) -> impl Iterator<Item = usize> + 'a {
data.iter()
.map(
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn fibs(n: u32) -> impl Iterator<Item=u128> {
(0 .. n)
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(decl_macro)]
// second time. Uncool.
// compile-flags:-Zmir-opt-level=3
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
fn main() {
let _ = (0 .. 1).filter(|_| [1].iter().all(|_| true)).count();
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {
assert!({false});
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::marker::PhantomData;
struct Meta<A> {
// Confirm that we don't accidentally divide or mod by zero in llvm_type
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
mod a {
pub trait A {}
// compile-flags: --crate-type dylib --target thumbv7em-none-eabihf
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// error-pattern: dropping unsupported crate type `dylib` for target `thumbv7em-none-eabihf`
#![feature(no_core)]
assert_eq!(1, make_b() as u8);
assert_eq!(1, make_b() as i32);
assert_eq!(1, make_b() as u32);
- assert_eq!(1, unsafe { std::intrinsics::discriminant_value(&make_b()) });
+ assert_eq!(1, std::intrinsics::discriminant_value(&make_b()));
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
const PATH_DOT: &[u8] = &[b'.'];
struct Bar;
impl Foo for Bar {
+ #[allow(improper_ctypes)]
extern fn borrow(&self) {}
+ #[allow(improper_ctypes)]
extern fn take(self: Box<Self>) {}
}
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![crate_type = "lib"]
#![feature(linkage)]
// implied bounds was causing outlives relations that were not
// properly handled.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct Foo {
bar: dyn for<'r> Fn(usize, &'r dyn FnMut())
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
// Regression test for an NLL-related ICE (#53568) -- we failed to
// resolve inference variables in "custom type-ops".
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Future {
type Item;
// rust-lang/rust#53675: At one point the compiler errored when a test
// named `panic` used the `assert!` macro in expression position.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// compile-flags: --test
mod in_expression_position {
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
// This test is the same code as in ui/symbol-names/issue-60925.rs but this checks that the
// reproduction compiles successfully and doesn't segfault, whereas that test just checks that the
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub struct GstRc {
_obj: *const (),
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// This test checks that the `remove extra angle brackets` error doesn't happen for some
// potential edge-cases..
// This test is a minimal version of an ICE in the dropck-eyepatch tests
// found in the fix for #54943.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn foo<T>(_t: T) {
}
// found in the fix for #54943. In particular, this test is in unreachable
// code as the initial fix for this ICE only worked if the code was reachable.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn foo<T>(_t: T) {
}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// FIXME(#54943) This test targets the scenario where proving the WF requirements requires
// knowing the value of the `_` type present in the user type annotation - unfortunately, figuring
// out the value of that `_` requires type-checking the surrounding code, but that code is dead,
// is OK because the test is here to check that the compiler doesn't ICE (cf.
// #5500).
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct TrieMapIterator<'a> {
node: &'a usize
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
// Regression test for #56128. When this `pub(super) use...` gets
// exploded in the HIR, we were not handling ids correctly.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
mod bar {
pub(super) use self::baz::{x, y};
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
trait FooTrait {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct T {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Foo<Args> {
type Output;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Foo {}
impl Foo for dyn Send {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Originally from #53925.
// Tests that the `unreachable_pub` lint doesn't fire for `pub self::bar::Bar`.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Tests that the `unreachable_pub` lint doesn't fire for `pub self::imp::f`.
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(improper_ctypes)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// compile-flags: -Z unpretty=hir
#![feature(type_alias_impl_trait)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// compile-flags: -Z unpretty=hir
#![feature(type_alias_impl_trait)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
#[derive(PartialEq)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
#![allow(improper_ctypes)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// pretty-expanded FIXME #23616
use std::mem;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Foo1 {}
trait A {}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
#![allow(non_camel_case_types)]
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
#![allow(non_snake_case)]
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// Regression test for issue 9243
#![allow(non_upper_case_globals)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
-// build-pass (FIXME(62277): could be check-pass?)
+// build-pass
#![allow(dead_code)]
// pretty-expanded FIXME #23616
--- /dev/null
+// run-pass
+// run-rustfix
+
+fn main() {
+ let small = [1, 2];
+ let big = [0u8; 33];
+
+ // Expressions that should trigger the lint
+ small.iter();
+ //~^ WARNING this method call currently resolves to `<&[T; N] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+ [1, 2].iter();
+ //~^ WARNING this method call currently resolves to `<&[T; N] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+ big.iter();
+ //~^ WARNING this method call currently resolves to `<&[T] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+ [0u8; 33].iter();
+ //~^ WARNING this method call currently resolves to `<&[T] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+
+
+ // Expressions that should not
+ (&[1, 2]).into_iter();
+ (&small).into_iter();
+ (&[0u8; 33]).into_iter();
+ (&big).into_iter();
+
+ for _ in &[1, 2] {}
+ (&small as &[_]).into_iter();
+ small[..].into_iter();
+ std::iter::IntoIterator::into_iter(&[1, 2]);
+}
--- /dev/null
+// run-pass
+// run-rustfix
+
+fn main() {
+ let small = [1, 2];
+ let big = [0u8; 33];
+
+ // Expressions that should trigger the lint
+ small.into_iter();
+ //~^ WARNING this method call currently resolves to `<&[T; N] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+ [1, 2].into_iter();
+ //~^ WARNING this method call currently resolves to `<&[T; N] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+ big.into_iter();
+ //~^ WARNING this method call currently resolves to `<&[T] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+ [0u8; 33].into_iter();
+ //~^ WARNING this method call currently resolves to `<&[T] as IntoIterator>::into_iter`
+ //~| WARNING this was previously accepted by the compiler but is being phased out
+
+
+ // Expressions that should not
+ (&[1, 2]).into_iter();
+ (&small).into_iter();
+ (&[0u8; 33]).into_iter();
+ (&big).into_iter();
+
+ for _ in &[1, 2] {}
+ (&small as &[_]).into_iter();
+ small[..].into_iter();
+ std::iter::IntoIterator::into_iter(&[1, 2]);
+}
--- /dev/null
+warning: this method call currently resolves to `<&[T; N] as IntoIterator>::into_iter` (due to autoref coercions), but that might change in the future when `IntoIterator` impls for arrays are added.
+ --> $DIR/into-iter-on-arrays-lint.rs:9:11
+ |
+LL | small.into_iter();
+ | ^^^^^^^^^ help: use `.iter()` instead of `.into_iter()` to avoid ambiguity: `iter`
+ |
+ = note: `#[warn(array_into_iter)]` on by default
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
+ = note: for more information, see issue #66145 <https://github.com/rust-lang/rust/issues/66145>
+
+warning: this method call currently resolves to `<&[T; N] as IntoIterator>::into_iter` (due to autoref coercions), but that might change in the future when `IntoIterator` impls for arrays are added.
+ --> $DIR/into-iter-on-arrays-lint.rs:12:12
+ |
+LL | [1, 2].into_iter();
+ | ^^^^^^^^^ help: use `.iter()` instead of `.into_iter()` to avoid ambiguity: `iter`
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
+ = note: for more information, see issue #66145 <https://github.com/rust-lang/rust/issues/66145>
+
+warning: this method call currently resolves to `<&[T] as IntoIterator>::into_iter` (due to autoref coercions), but that might change in the future when `IntoIterator` impls for arrays are added.
+ --> $DIR/into-iter-on-arrays-lint.rs:15:9
+ |
+LL | big.into_iter();
+ | ^^^^^^^^^ help: use `.iter()` instead of `.into_iter()` to avoid ambiguity: `iter`
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
+ = note: for more information, see issue #66145 <https://github.com/rust-lang/rust/issues/66145>
+
+warning: this method call currently resolves to `<&[T] as IntoIterator>::into_iter` (due to autoref coercions), but that might change in the future when `IntoIterator` impls for arrays are added.
+ --> $DIR/into-iter-on-arrays-lint.rs:18:15
+ |
+LL | [0u8; 33].into_iter();
+ | ^^^^^^^^^ help: use `.iter()` instead of `.into_iter()` to avoid ambiguity: `iter`
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
+ = note: for more information, see issue #66145 <https://github.com/rust-lang/rust/issues/66145>
+
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(box_syntax)]
#![feature(box_patterns)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![warn(unused_parens)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![forbid(non_camel_case_types)]
#![allow(dead_code)]
--- /dev/null
+#![feature(rustc_private)]
+
+#![allow(private_in_public)]
+#![deny(improper_ctypes)]
+
+extern crate libc;
+
+use std::default::Default;
+use std::marker::PhantomData;
+
+trait Mirror { type It: ?Sized; }
+
+impl<T: ?Sized> Mirror for T { type It = Self; }
+
+#[repr(C)]
+pub struct StructWithProjection(*mut <StructWithProjection as Mirror>::It);
+
+#[repr(C)]
+pub struct StructWithProjectionAndLifetime<'a>(
+ &'a mut <StructWithProjectionAndLifetime<'a> as Mirror>::It
+);
+
+pub type I32Pair = (i32, i32);
+
+#[repr(C)]
+pub struct ZeroSize;
+
+pub type RustFn = fn();
+
+pub type RustBadRet = extern fn() -> Box<u32>;
+
+pub type CVoidRet = ();
+
+pub struct Foo;
+
+#[repr(transparent)]
+pub struct TransparentI128(i128);
+
+#[repr(transparent)]
+pub struct TransparentStr(&'static str);
+
+#[repr(transparent)]
+pub struct TransparentBadFn(RustBadRet);
+
+#[repr(transparent)]
+pub struct TransparentInt(u32);
+
+#[repr(transparent)]
+pub struct TransparentRef<'a>(&'a TransparentInt);
+
+#[repr(transparent)]
+pub struct TransparentLifetime<'a>(*const u8, PhantomData<&'a ()>);
+
+#[repr(transparent)]
+pub struct TransparentUnit<U>(f32, PhantomData<U>);
+
+#[repr(transparent)]
+pub struct TransparentCustomZst(i32, ZeroSize);
+
+#[repr(C)]
+pub struct ZeroSizeWithPhantomData(PhantomData<i32>);
+
+pub extern "C" fn ptr_type1(size: *const Foo) { }
+//~^ ERROR: uses type `Foo`
+
+pub extern "C" fn ptr_type2(size: *const Foo) { }
+//~^ ERROR: uses type `Foo`
+
+pub extern "C" fn slice_type(p: &[u32]) { }
+//~^ ERROR: uses type `[u32]`
+
+pub extern "C" fn str_type(p: &str) { }
+//~^ ERROR: uses type `str`
+
+pub extern "C" fn box_type(p: Box<u32>) { }
+//~^ ERROR uses type `std::boxed::Box<u32>`
+
+pub extern "C" fn char_type(p: char) { }
+//~^ ERROR uses type `char`
+
+pub extern "C" fn i128_type(p: i128) { }
+//~^ ERROR uses type `i128`
+
+pub extern "C" fn u128_type(p: u128) { }
+//~^ ERROR uses type `u128`
+
+pub extern "C" fn tuple_type(p: (i32, i32)) { }
+//~^ ERROR uses type `(i32, i32)`
+
+pub extern "C" fn tuple_type2(p: I32Pair) { }
+//~^ ERROR uses type `(i32, i32)`
+
+pub extern "C" fn zero_size(p: ZeroSize) { }
+//~^ ERROR uses type `ZeroSize`
+
+pub extern "C" fn zero_size_phantom(p: ZeroSizeWithPhantomData) { }
+//~^ ERROR uses type `ZeroSizeWithPhantomData`
+
+pub extern "C" fn zero_size_phantom_toplevel() -> PhantomData<bool> {
+//~^ ERROR uses type `std::marker::PhantomData<bool>`
+ Default::default()
+}
+
+pub extern "C" fn fn_type(p: RustFn) { }
+//~^ ERROR uses type `fn()`
+
+pub extern "C" fn fn_type2(p: fn()) { }
+//~^ ERROR uses type `fn()`
+
+pub extern "C" fn fn_contained(p: RustBadRet) { }
+//~^ ERROR: uses type `std::boxed::Box<u32>`
+
+pub extern "C" fn transparent_i128(p: TransparentI128) { }
+//~^ ERROR: uses type `i128`
+
+pub extern "C" fn transparent_str(p: TransparentStr) { }
+//~^ ERROR: uses type `str`
+
+pub extern "C" fn transparent_fn(p: TransparentBadFn) { }
+//~^ ERROR: uses type `std::boxed::Box<u32>`
+
+pub extern "C" fn good3(fptr: Option<extern fn()>) { }
+
+pub extern "C" fn good4(aptr: &[u8; 4 as usize]) { }
+
+pub extern "C" fn good5(s: StructWithProjection) { }
+
+pub extern "C" fn good6(s: StructWithProjectionAndLifetime) { }
+
+pub extern "C" fn good7(fptr: extern fn() -> ()) { }
+
+pub extern "C" fn good8(fptr: extern fn() -> !) { }
+
+pub extern "C" fn good9() -> () { }
+
+pub extern "C" fn good10() -> CVoidRet { }
+
+pub extern "C" fn good11(size: isize) { }
+
+pub extern "C" fn good12(size: usize) { }
+
+pub extern "C" fn good13(n: TransparentInt) { }
+
+pub extern "C" fn good14(p: TransparentRef) { }
+
+pub extern "C" fn good15(p: TransparentLifetime) { }
+
+pub extern "C" fn good16(p: TransparentUnit<ZeroSize>) { }
+
+pub extern "C" fn good17(p: TransparentCustomZst) { }
+
+#[allow(improper_ctypes)]
+pub extern "C" fn good18(_: &String) { }
+
+#[cfg(not(target_arch = "wasm32"))]
+pub extern "C" fn good1(size: *const libc::c_int) { }
+
+#[cfg(not(target_arch = "wasm32"))]
+pub extern "C" fn good2(size: *const libc::c_uint) { }
+
+pub extern "C" fn unused_generic1<T>(size: *const Foo) { }
+//~^ ERROR: uses type `Foo`
+
+pub extern "C" fn unused_generic2<T>() -> PhantomData<bool> {
+//~^ ERROR uses type `std::marker::PhantomData<bool>`
+ Default::default()
+}
+
+pub extern "C" fn used_generic1<T>(x: T) { }
+
+pub extern "C" fn used_generic2<T>(x: T, size: *const Foo) { }
+//~^ ERROR: uses type `Foo`
+
+pub extern "C" fn used_generic3<T: Default>() -> T {
+ Default::default()
+}
+
+pub extern "C" fn used_generic4<T>(x: Vec<T>) { }
+//~^ ERROR: uses type `std::vec::Vec<T>`
+
+pub extern "C" fn used_generic5<T>() -> Vec<T> {
+//~^ ERROR: uses type `std::vec::Vec<T>`
+ Default::default()
+}
+
+fn main() {}
--- /dev/null
+error: `extern` fn uses type `Foo`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:63:35
+ |
+LL | pub extern "C" fn ptr_type1(size: *const Foo) { }
+ | ^^^^^^^^^^ not FFI-safe
+ |
+note: lint level defined here
+ --> $DIR/lint-ctypes-fn.rs:4:9
+ |
+LL | #![deny(improper_ctypes)]
+ | ^^^^^^^^^^^^^^^
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+note: type defined here
+ --> $DIR/lint-ctypes-fn.rs:34:1
+ |
+LL | pub struct Foo;
+ | ^^^^^^^^^^^^^^^
+
+error: `extern` fn uses type `Foo`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:66:35
+ |
+LL | pub extern "C" fn ptr_type2(size: *const Foo) { }
+ | ^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+note: type defined here
+ --> $DIR/lint-ctypes-fn.rs:34:1
+ |
+LL | pub struct Foo;
+ | ^^^^^^^^^^^^^^^
+
+error: `extern` fn uses type `[u32]`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:69:33
+ |
+LL | pub extern "C" fn slice_type(p: &[u32]) { }
+ | ^^^^^^ not FFI-safe
+ |
+ = help: consider using a raw pointer instead
+ = note: slices have no C equivalent
+
+error: `extern` fn uses type `str`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:72:31
+ |
+LL | pub extern "C" fn str_type(p: &str) { }
+ | ^^^^ not FFI-safe
+ |
+ = help: consider using `*const u8` and a length instead
+ = note: string slices have no C equivalent
+
+error: `extern` fn uses type `std::boxed::Box<u32>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:75:31
+ |
+LL | pub extern "C" fn box_type(p: Box<u32>) { }
+ | ^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+
+error: `extern` fn uses type `char`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:78:32
+ |
+LL | pub extern "C" fn char_type(p: char) { }
+ | ^^^^ not FFI-safe
+ |
+ = help: consider using `u32` or `libc::wchar_t` instead
+ = note: the `char` type has no C equivalent
+
+error: `extern` fn uses type `i128`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:81:32
+ |
+LL | pub extern "C" fn i128_type(p: i128) { }
+ | ^^^^ not FFI-safe
+ |
+ = note: 128-bit integers don't currently have a known stable ABI
+
+error: `extern` fn uses type `u128`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:84:32
+ |
+LL | pub extern "C" fn u128_type(p: u128) { }
+ | ^^^^ not FFI-safe
+ |
+ = note: 128-bit integers don't currently have a known stable ABI
+
+error: `extern` fn uses type `(i32, i32)`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:87:33
+ |
+LL | pub extern "C" fn tuple_type(p: (i32, i32)) { }
+ | ^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider using a struct instead
+ = note: tuples have unspecified layout
+
+error: `extern` fn uses type `(i32, i32)`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:90:34
+ |
+LL | pub extern "C" fn tuple_type2(p: I32Pair) { }
+ | ^^^^^^^ not FFI-safe
+ |
+ = help: consider using a struct instead
+ = note: tuples have unspecified layout
+
+error: `extern` fn uses type `ZeroSize`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:93:32
+ |
+LL | pub extern "C" fn zero_size(p: ZeroSize) { }
+ | ^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a member to this struct
+ = note: this struct has no fields
+note: type defined here
+ --> $DIR/lint-ctypes-fn.rs:26:1
+ |
+LL | pub struct ZeroSize;
+ | ^^^^^^^^^^^^^^^^^^^^
+
+error: `extern` fn uses type `ZeroSizeWithPhantomData`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:96:40
+ |
+LL | pub extern "C" fn zero_size_phantom(p: ZeroSizeWithPhantomData) { }
+ | ^^^^^^^^^^^^^^^^^^^^^^^ not FFI-safe
+ |
+ = note: composed only of `PhantomData`
+note: type defined here
+ --> $DIR/lint-ctypes-fn.rs:61:1
+ |
+LL | pub struct ZeroSizeWithPhantomData(PhantomData<i32>);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+error: `extern` fn uses type `std::marker::PhantomData<bool>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:99:51
+ |
+LL | pub extern "C" fn zero_size_phantom_toplevel() -> PhantomData<bool> {
+ | ^^^^^^^^^^^^^^^^^ not FFI-safe
+ |
+ = note: composed only of `PhantomData`
+
+error: `extern` fn uses type `fn()`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:104:30
+ |
+LL | pub extern "C" fn fn_type(p: RustFn) { }
+ | ^^^^^^ not FFI-safe
+ |
+ = help: consider using an `extern fn(...) -> ...` function pointer instead
+ = note: this function pointer has Rust-specific calling convention
+
+error: `extern` fn uses type `fn()`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:107:31
+ |
+LL | pub extern "C" fn fn_type2(p: fn()) { }
+ | ^^^^ not FFI-safe
+ |
+ = help: consider using an `extern fn(...) -> ...` function pointer instead
+ = note: this function pointer has Rust-specific calling convention
+
+error: `extern` fn uses type `std::boxed::Box<u32>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:110:35
+ |
+LL | pub extern "C" fn fn_contained(p: RustBadRet) { }
+ | ^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+
+error: `extern` fn uses type `i128`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:113:39
+ |
+LL | pub extern "C" fn transparent_i128(p: TransparentI128) { }
+ | ^^^^^^^^^^^^^^^ not FFI-safe
+ |
+ = note: 128-bit integers don't currently have a known stable ABI
+
+error: `extern` fn uses type `str`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:116:38
+ |
+LL | pub extern "C" fn transparent_str(p: TransparentStr) { }
+ | ^^^^^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider using `*const u8` and a length instead
+ = note: string slices have no C equivalent
+
+error: `extern` fn uses type `std::boxed::Box<u32>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:119:37
+ |
+LL | pub extern "C" fn transparent_fn(p: TransparentBadFn) { }
+ | ^^^^^^^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+
+error: `extern` fn uses type `Foo`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:161:44
+ |
+LL | pub extern "C" fn unused_generic1<T>(size: *const Foo) { }
+ | ^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+note: type defined here
+ --> $DIR/lint-ctypes-fn.rs:34:1
+ |
+LL | pub struct Foo;
+ | ^^^^^^^^^^^^^^^
+
+error: `extern` fn uses type `std::marker::PhantomData<bool>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:164:43
+ |
+LL | pub extern "C" fn unused_generic2<T>() -> PhantomData<bool> {
+ | ^^^^^^^^^^^^^^^^^ not FFI-safe
+ |
+ = note: composed only of `PhantomData`
+
+error: `extern` fn uses type `Foo`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:171:48
+ |
+LL | pub extern "C" fn used_generic2<T>(x: T, size: *const Foo) { }
+ | ^^^^^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+note: type defined here
+ --> $DIR/lint-ctypes-fn.rs:34:1
+ |
+LL | pub struct Foo;
+ | ^^^^^^^^^^^^^^^
+
+error: `extern` fn uses type `std::vec::Vec<T>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:178:39
+ |
+LL | pub extern "C" fn used_generic4<T>(x: Vec<T>) { }
+ | ^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+
+error: `extern` fn uses type `std::vec::Vec<T>`, which is not FFI-safe
+ --> $DIR/lint-ctypes-fn.rs:181:41
+ |
+LL | pub extern "C" fn used_generic5<T>() -> Vec<T> {
+ | ^^^^^^ not FFI-safe
+ |
+ = help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
+ = note: this struct has unspecified layout
+
+error: aborting due to 24 previous errors
+
let _val: NonNull<i32> = mem::zeroed(); //~ ERROR: does not permit zero-initialization
let _val: NonNull<i32> = mem::uninitialized(); //~ ERROR: does not permit being left uninitialized
+ let _val: *const dyn Send = mem::zeroed(); //~ ERROR: does not permit zero-initialization
+ let _val: *const dyn Send = mem::uninitialized(); //~ ERROR: does not permit being left uninitialized
+
// Things that can be zero, but not uninit.
let _val: bool = mem::zeroed();
let _val: bool = mem::uninitialized(); //~ ERROR: does not permit being left uninitialized
let _val: &'static [i32] = mem::transmute((0usize, 0usize)); //~ ERROR: does not permit zero-initialization
let _val: NonZeroU32 = mem::transmute(0); //~ ERROR: does not permit zero-initialization
+ // `MaybeUninit` cases
+ let _val: NonNull<i32> = MaybeUninit::zeroed().assume_init(); //~ ERROR: does not permit zero-initialization
+ let _val: NonNull<i32> = MaybeUninit::uninit().assume_init(); //~ ERROR: does not permit being left uninitialized
+ let _val: bool = MaybeUninit::uninit().assume_init(); //~ ERROR: does not permit being left uninitialized
+
// Some more types that should work just fine.
let _val: Option<&'static i32> = mem::zeroed();
let _val: Option<fn()> = mem::zeroed();
let _val: MaybeUninit<&'static i32> = mem::zeroed();
let _val: i32 = mem::zeroed();
+ let _val: bool = MaybeUninit::zeroed().assume_init();
}
}
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: lint level defined here
--> $DIR/uninitialized-zeroed.rs:7:9
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: References must be non-null
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: References must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:18:18
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: References must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:18:18
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: The never type (`!`) has no valid value
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: The never type (`!`) has no valid value
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: The never type (`!`) has no valid value
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: The never type (`!`) has no valid value
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: 0-variant enums have no valid value
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: 0-variant enums have no valid value
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: References must be non-null
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: References must be non-null
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: References must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:15:12
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: References must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:15:12
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: Function pointers must be non-null
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: Function pointers must be non-null
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: Function pointers must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:18:18
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: Function pointers must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:18:18
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: Function pointers must be non-null (in this enum field)
--> $DIR/uninitialized-zeroed.rs:19:28
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: Function pointers must be non-null (in this enum field)
--> $DIR/uninitialized-zeroed.rs:19:28
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: References must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:16:16
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: References must be non-null (in this struct field)
--> $DIR/uninitialized-zeroed.rs:16:16
| ^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: std::ptr::NonNull<i32> must be non-null
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: std::ptr::NonNull<i32> must be non-null
+error: the type `*const dyn std::marker::Send` does not permit zero-initialization
+ --> $DIR/uninitialized-zeroed.rs:70:37
+ |
+LL | let _val: *const dyn Send = mem::zeroed();
+ | ^^^^^^^^^^^^^
+ | |
+ | this code causes undefined behavior when executed
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
+ |
+ = note: The vtable of a wide raw pointer must be non-null
+
+error: the type `*const dyn std::marker::Send` does not permit being left uninitialized
+ --> $DIR/uninitialized-zeroed.rs:71:37
+ |
+LL | let _val: *const dyn Send = mem::uninitialized();
+ | ^^^^^^^^^^^^^^^^^^^^
+ | |
+ | this code causes undefined behavior when executed
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
+ |
+ = note: The vtable of a wide raw pointer must be non-null
+
error: the type `bool` does not permit being left uninitialized
- --> $DIR/uninitialized-zeroed.rs:72:26
+ --> $DIR/uninitialized-zeroed.rs:75:26
|
LL | let _val: bool = mem::uninitialized();
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: Booleans must be `true` or `false`
error: the type `Wrap<char>` does not permit being left uninitialized
- --> $DIR/uninitialized-zeroed.rs:75:32
+ --> $DIR/uninitialized-zeroed.rs:78:32
|
LL | let _val: Wrap<char> = mem::uninitialized();
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
note: Characters must be a valid unicode codepoint (in this struct field)
--> $DIR/uninitialized-zeroed.rs:18:18
| ^^^^^^^^^^
error: the type `NonBig` does not permit being left uninitialized
- --> $DIR/uninitialized-zeroed.rs:78:28
+ --> $DIR/uninitialized-zeroed.rs:81:28
|
LL | let _val: NonBig = mem::uninitialized();
| ^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: NonBig must be initialized inside its custom valid range
error: the type `&'static i32` does not permit zero-initialization
- --> $DIR/uninitialized-zeroed.rs:81:34
+ --> $DIR/uninitialized-zeroed.rs:84:34
|
LL | let _val: &'static i32 = mem::transmute(0usize);
| ^^^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: References must be non-null
error: the type `&'static [i32]` does not permit zero-initialization
- --> $DIR/uninitialized-zeroed.rs:82:36
+ --> $DIR/uninitialized-zeroed.rs:85:36
|
LL | let _val: &'static [i32] = mem::transmute((0usize, 0usize));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: References must be non-null
error: the type `std::num::NonZeroU32` does not permit zero-initialization
- --> $DIR/uninitialized-zeroed.rs:83:32
+ --> $DIR/uninitialized-zeroed.rs:86:32
|
LL | let _val: NonZeroU32 = mem::transmute(0);
| ^^^^^^^^^^^^^^^^^
| |
| this code causes undefined behavior when executed
- | help: use `MaybeUninit<T>` instead
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
|
= note: std::num::NonZeroU32 must be non-null
-error: aborting due to 30 previous errors
+error: the type `std::ptr::NonNull<i32>` does not permit zero-initialization
+ --> $DIR/uninitialized-zeroed.rs:89:34
+ |
+LL | let _val: NonNull<i32> = MaybeUninit::zeroed().assume_init();
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | |
+ | this code causes undefined behavior when executed
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
+ |
+ = note: std::ptr::NonNull<i32> must be non-null
+
+error: the type `std::ptr::NonNull<i32>` does not permit being left uninitialized
+ --> $DIR/uninitialized-zeroed.rs:90:34
+ |
+LL | let _val: NonNull<i32> = MaybeUninit::uninit().assume_init();
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | |
+ | this code causes undefined behavior when executed
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
+ |
+ = note: std::ptr::NonNull<i32> must be non-null
+
+error: the type `bool` does not permit being left uninitialized
+ --> $DIR/uninitialized-zeroed.rs:91:26
+ |
+LL | let _val: bool = MaybeUninit::uninit().assume_init();
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | |
+ | this code causes undefined behavior when executed
+ | help: use `MaybeUninit<T>` instead, and only call `assume_init` after initialization is done
+ |
+ = note: Booleans must be `true` or `false`
+
+error: aborting due to 35 previous errors
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![warn(unused_imports)] // Warning explanation here, it's OK
// run-pass
+#[allow(improper_ctypes)]
pub extern "C" fn tuple2() -> (u16, u8) {
(1, 2)
}
+#[allow(improper_ctypes)]
pub extern "C" fn tuple3() -> (u8, u8, u8) {
(1, 2, 3)
}
Two::two()
}
+#[allow(improper_ctypes)]
extern fn simple_extern(x: u32, y: (u32, u32)) -> u32 {
x + y.0 * y.1
}
LL | impl m1::X for X {
| ^^^^^^^^^^^^^^^^ missing `CONSTANT`, `Type`, `method` in implementation
|
- = note: `CONSTANT` from trait: `const CONSTANT: u32;`
- = note: `Type` from trait: `type Type;`
- = note: `method` from trait: `fn(&Self, std::string::String) -> <Self as m1::X>::Type`
+ = help: implement the missing item: `const CONSTANT: u32 = 42;`
+ = help: implement the missing item: `type Type = Type;`
+ = help: implement the missing item: `fn method(&self, _: std::string::String) -> <Self as m1::X>::Type { unimplemented!() }`
error: aborting due to previous error
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Test that we propagate region relations from closures precisely when there is
// more than one non-local lower bound.
// |
// = note: move occurs because the value has type `A`, which does not implement the `Copy` trait
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(box_patterns)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::collections::HashMap;
use std::sync::Mutex;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn from_stdin(min: u64) -> Vec<u64> {
use std::io::BufRead;
// rust-lang/rust#22323: regression test demonstrating that NLL
// precisely tracks temporary destruction order.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {
let _s = construct().borrow().consume_borrowed();
// Regression test for #30104
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::ops::{Deref, DerefMut};
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// rust-lang/rust#32382: Borrow checker used to complain about
// `foobar_3` in the `impl` below, presumably due to some interaction
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::borrow::Cow;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct LoadedObject {
bodies: Vec<Body>,
// bounds derived from `Sized` requirements” that checks that the fixed compiler
// accepts this code fragment with both AST and MIR borrow checkers.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct Qey<Q: ?Sized>(Q);
// of the closure, as they were not present in the closure's generic
// declarations otherwise.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn creash<'a>() {
let x: &'a () = &();
// between `'a` and `'b` below due to inference variables introduced
// during the normalization process.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct Drain<'a, T: 'a> {
_marker: ::std::marker::PhantomData<&'a T>,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::ops::Deref;
// parameter `x` -- since `'b` cannot be expressed in the caller's
// space, that got promoted to `'static`.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::cell::{RefCell, Ref};
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![deny(unused_mut)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(untagged_unions)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Foo<'a> {
const C: &'a u32;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Foo {
const BLAH: &'static str;
// Regression test for #61311
// We would ICE after failing to normalize `Self::Proj` in the `impl` below.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub struct Unit;
trait Obj {}
// Regression test for #61320
// This is the same issue as #61311, just a larger test case.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub struct AndThen<A, B, F>
where
// placeholder region, but in NLL land it would fail because we had
// rewritten `'static` to a region variable.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait Foo {
fn foo(&self) { }
// Regression test for #53789.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::collections::BTreeMap;
// Regression test for #53789.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::collections::BTreeMap;
use std::ops::Range;
// Fixed by tweaking the solver to recognize that the constraint from
// the environment duplicates one from the trait.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![crate_type="lib"]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// This test is reduced from a scenario pnkfelix encountered while
// bootstrapping the compiler.
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::thread;
use std::rc::Rc;
error[E0277]: `std::rc::Rc<()>` cannot be sent between threads safely
- --> $DIR/no-send-res-ports.rs:27:5
+ --> $DIR/no-send-res-ports.rs:26:5
|
LL | thread::spawn(move|| {
| ^^^^^^^^^^^^^ `std::rc::Rc<()>` cannot be sent between threads safely
LL | F: FnOnce() -> T, F: Send + 'static, T: Send + 'static
| ---- required by this bound in `std::thread::spawn`
|
- = help: within `[closure@$DIR/no-send-res-ports.rs:27:19: 31:6 x:main::Foo]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<()>`
+ = help: within `[closure@$DIR/no-send-res-ports.rs:26:19: 30:6 x:main::Foo]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<()>`
= note: required because it appears within the type `Port<()>`
= note: required because it appears within the type `main::Foo`
- = note: required because it appears within the type `[closure@$DIR/no-send-res-ports.rs:27:19: 31:6 x:main::Foo]`
+ = note: required because it appears within the type `[closure@$DIR/no-send-res-ports.rs:26:19: 30:6 x:main::Foo]`
error: aborting due to previous error
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct Foo(String);
// ignore-tidy-linelength
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
#![allow(unused)]
// access to the variable, whether that mutable access be used
// for direct assignment or for taking mutable ref. Issue #6801.
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
#[rustc_on_unimplemented(
message="the message"
--- /dev/null
+// Test that `#[rustc_on_unimplemented]` is gated by `rustc_attrs` feature gate.
+
+#[rustc_on_unimplemented = "test error `{Self}` with `{Bar}`"]
+//~^ ERROR this is an internal attribute that will never be stable
+trait Foo<Bar>
+{}
+
+fn main() {}
--- /dev/null
+error[E0658]: this is an internal attribute that will never be stable
+ --> $DIR/feature-gate-on-unimplemented.rs:3:1
+ |
+LL | #[rustc_on_unimplemented = "test error `{Self}` with `{Bar}`"]
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: for more information, see https://github.com/rust-lang/rust/issues/29642
+ = help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0658`.
// Test if the on_unimplemented message override works
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
struct Foo<T>(T);
// Test if the on_unimplemented message override works
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
#[rustc_on_unimplemented = "invalid"]
// ignore-tidy-linelength
-#![feature(on_unimplemented)]
+#![feature(rustc_attrs)]
pub mod Bar {
#[rustc_on_unimplemented = "test error `{Self}` with `{Bar}` `{Baz}` `{Quux}` in `{Foo}`"]
extern
- "C"suffix //~ ERROR suffixes on an ABI spec are invalid
+ "C"suffix //~ ERROR suffixes on a string literal are invalid
fn foo() {}
extern
- "C"suffix //~ ERROR suffixes on an ABI spec are invalid
+ "C"suffix //~ ERROR suffixes on a string literal are invalid
{}
fn main() {
-error: suffixes on an ABI spec are invalid
+error: suffixes on a string literal are invalid
--> $DIR/bad-lit-suffixes.rs:2:5
|
LL | "C"suffix
| ^^^^^^^^^ invalid suffix `suffix`
-error: suffixes on an ABI spec are invalid
+error: suffixes on a string literal are invalid
--> $DIR/bad-lit-suffixes.rs:6:5
|
LL | "C"suffix
--- /dev/null
+// check-pass
+
+// In this test we check that the parser accepts an ABI string when it
+// comes from a macro `literal` fragment as opposed to a hardcoded string.
+
+fn main() {}
+
+macro_rules! abi_from_lit_frag {
+ ($abi:literal) => {
+ extern $abi {
+ fn _import();
+ }
+
+ extern $abi fn _export() {}
+
+ type _PTR = extern $abi fn();
+ }
+}
+
+mod rust {
+ abi_from_lit_frag!("Rust");
+}
+
+mod c {
+ abi_from_lit_frag!("C");
+}
--- /dev/null
+// check-pass
+
+// Check that the string literal in `extern lit` accepts raw strings.
+
+fn main() {}
+
+extern r#"C"# fn foo() {}
+
+extern r#"C"# {
+ fn bar();
+}
+
+type T = extern r#"C"# fn();
--- /dev/null
+// check-pass
+
+// Check that the string literal in `extern lit` accepts escape sequences.
+
+fn main() {}
+
+extern "\x43" fn foo() {}
+
+extern "\x43" {
+ fn bar();
+}
+
+type T = extern "\x43" fn();
--- /dev/null
+// check-pass
+
+// Check that, from the grammar's point of view,
+// the specific set of ABIs is not part of the grammar.
+
+fn main() {}
+
+#[cfg(FALSE)]
+extern "some_abi_that_we_are_sure_does_not_exist_semantically" fn foo() {}
+
+#[cfg(FALSE)]
+extern "some_abi_that_we_are_sure_does_not_exist_semantically" {
+ fn foo();
+}
+
+#[cfg(FALSE)]
+type T = extern "some_abi_that_we_are_sure_does_not_exist_semantically" fn();
--- /dev/null
+// run-pass
+
+// Test that failing macro matchers will not cause pre-expansion errors
+// even though they use a feature that is pre-expansion gated.
+
+macro_rules! m {
+ ($e:expr) => { 0 }; // This fails on the input below due to `, foo`.
+ ($e:expr,) => { 1 }; // This also fails to match due to `foo`.
+ (box $e:expr, foo) => { 2 }; // Successful matcher, we should get `2`.
+}
+
+fn main() {
+ assert_eq!(2, m!(box 42, foo));
+}
--- /dev/null
+// edition:2018
+// compile-flags:--extern alloc
+// build-pass
+
+// Test that `--extern alloc` will load from the sysroot without error.
+
+fn main() {
+ let _: Vec<i32> = alloc::vec::Vec::new();
+}
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// aux-build:issue-57264-1.rs
extern crate issue_57264_1;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// aux-build:issue-57264-2.rs
extern crate issue_57264_2;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
// aux-build:test-macros.rs
// follow the same lifetime-elision rules used elsewhere. See
// rust-lang/rust#56537
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn willy_no_annot<'w>(p: &'w str, q: &str) -> &'w str {
let free_dumb = |_x| { p }; // no type annotation at all
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct S(u8);
// compile-flags: --test
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::num::ParseFloatError;
error[E0277]: `main` has invalid return type `std::result::Result<f32, std::num::ParseFloatError>`
- --> $DIR/termination-trait-test-wrong-type.rs:8:1
+ --> $DIR/termination-trait-test-wrong-type.rs:7:1
|
LL | / fn can_parse_zero_as_f32() -> Result<f32, ParseFloatError> {
LL | | "0".parse()
// strange errors. This test ensures that we do not give compilation
// errors.
//
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
trait MyIterator<'a>: Iterator where Self::Item: 'a { }
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
// compile-flags: --extern issue_56596_2
// aux-build:issue-56596-2.rs
LL | impl Debug for FooTypeForMethod {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `fmt` in implementation
|
- = note: `fmt` from trait: `fn(&Self, &mut std::fmt::Formatter<'_>) -> std::result::Result<(), std::fmt::Error>`
+ = help: implement the missing item: `fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::result::Result<(), std::fmt::Error> { unimplemented!() }`
error: aborting due to 8 previous errors
LL | impl Iterator for Recurrence {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `Item` in implementation
|
- = note: `Item` from trait: `type Item;`
+ = help: implement the missing item: `type Item = Type;`
error: aborting due to previous error
LL | impl<C: Component> FnOnce<(C,)> for Prototype {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `Output` in implementation
|
- = note: `Output` from trait: `type Output;`
+ = help: implement the missing item: `type Output = Type;`
error: aborting due to previous error
LL | impl Deref for Thing {
| ^^^^^^^^^^^^^^^^^^^^ missing `Target` in implementation
|
- = note: `Target` from trait: `type Target;`
+ = help: implement the missing item: `type Target = Type;`
error: aborting due to previous error
//! A test to ensure that helpful `note` messages aren't emitted more often
//! than necessary.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Although there are three warnings, we should only get two "lint level defined
// here" notes pointing at the `warnings` span, one for each error type.
// aux-build:foo.rs
// compile-flags:--extern foo
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// edition:2018
#![deny(unused_extern_crates)]
--- /dev/null
+// run-rustfix
+
+trait T {
+ unsafe fn foo(a: &usize, b: &usize) -> usize;
+ fn bar(&self, a: &usize, b: &usize) -> usize;
+}
+
+mod foo {
+ use super::T;
+ impl T for () { fn bar(&self, _: &usize, _: &usize) -> usize { unimplemented!() }
+ unsafe fn foo(_: &usize, _: &usize) -> usize { unimplemented!() }
+ } //~ ERROR not all trait items
+
+ impl T for usize { //~ ERROR not all trait items
+ fn bar(&self, _: &usize, _: &usize) -> usize { unimplemented!() }
+ unsafe fn foo(_: &usize, _: &usize) -> usize { unimplemented!() }
+ }
+}
+
+fn main() {}
--- /dev/null
+// run-rustfix
+
+trait T {
+ unsafe fn foo(a: &usize, b: &usize) -> usize;
+ fn bar(&self, a: &usize, b: &usize) -> usize;
+}
+
+mod foo {
+ use super::T;
+ impl T for () {} //~ ERROR not all trait items
+
+ impl T for usize { //~ ERROR not all trait items
+ }
+}
+
+fn main() {}
--- /dev/null
+error[E0046]: not all trait items implemented, missing: `foo`, `bar`
+ --> $DIR/missing-trait-item.rs:10:5
+ |
+LL | unsafe fn foo(a: &usize, b: &usize) -> usize;
+ | --------------------------------------------- `foo` from trait
+LL | fn bar(&self, a: &usize, b: &usize) -> usize;
+ | --------------------------------------------- `bar` from trait
+...
+LL | impl T for () {}
+ | ^^^^^^^^^^^^^ missing `foo`, `bar` in implementation
+
+error[E0046]: not all trait items implemented, missing: `foo`, `bar`
+ --> $DIR/missing-trait-item.rs:12:5
+ |
+LL | unsafe fn foo(a: &usize, b: &usize) -> usize;
+ | --------------------------------------------- `foo` from trait
+LL | fn bar(&self, a: &usize, b: &usize) -> usize;
+ | --------------------------------------------- `bar` from trait
+...
+LL | impl T for usize {
+ | ^^^^^^^^^^^^^^^^ missing `foo`, `bar` in implementation
+
+error: aborting due to 2 previous errors
+
+For more information about this error, try `rustc --explain E0046`.
-// ignore-x86
-// ^ due to stderr output differences
+// ignore-x86 FIXME: missing sysroot spans (#53081)
use std::mem;
struct Misc<T:?Sized>(T);
error[E0277]: the size for values of type `U` cannot be known at compilation time
- --> $DIR/trait-suggest-where-clause.rs:9:20
+ --> $DIR/trait-suggest-where-clause.rs:8:20
|
LL | fn check<T: Iterator, U: ?Sized>() {
| -- help: consider further restricting this bound: `U: std::marker::Sized +`
= note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
error[E0277]: the size for values of type `U` cannot be known at compilation time
- --> $DIR/trait-suggest-where-clause.rs:12:5
+ --> $DIR/trait-suggest-where-clause.rs:11:5
|
LL | fn check<T: Iterator, U: ?Sized>() {
| -- help: consider further restricting this bound: `U: std::marker::Sized +`
= note: required because it appears within the type `Misc<U>`
error[E0277]: the trait bound `u64: std::convert::From<T>` is not satisfied
- --> $DIR/trait-suggest-where-clause.rs:17:5
+ --> $DIR/trait-suggest-where-clause.rs:16:5
|
LL | <u64 as From<T>>::from;
| ^^^^^^^^^^^^^^^^^^^^^^ the trait `std::convert::From<T>` is not implemented for `u64`
= note: required by `std::convert::From::from`
error[E0277]: the trait bound `u64: std::convert::From<<T as std::iter::Iterator>::Item>` is not satisfied
- --> $DIR/trait-suggest-where-clause.rs:20:5
+ --> $DIR/trait-suggest-where-clause.rs:19:5
|
LL | <u64 as From<<T as Iterator>::Item>>::from;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::convert::From<<T as std::iter::Iterator>::Item>` is not implemented for `u64`
= note: required by `std::convert::From::from`
error[E0277]: the trait bound `Misc<_>: std::convert::From<T>` is not satisfied
- --> $DIR/trait-suggest-where-clause.rs:25:5
+ --> $DIR/trait-suggest-where-clause.rs:24:5
|
LL | <Misc<_> as From<T>>::from;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::convert::From<T>` is not implemented for `Misc<_>`
= note: required by `std::convert::From::from`
error[E0277]: the size for values of type `[T]` cannot be known at compilation time
- --> $DIR/trait-suggest-where-clause.rs:30:20
+ --> $DIR/trait-suggest-where-clause.rs:29:20
|
LL | mem::size_of::<[T]>();
| ^^^ doesn't have a size known at compile-time
= note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
error[E0277]: the size for values of type `[&U]` cannot be known at compilation time
- --> $DIR/trait-suggest-where-clause.rs:33:5
+ --> $DIR/trait-suggest-where-clause.rs:32:5
|
LL | mem::size_of::<[&U]>();
| ^^^^^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
// types are required. This test now just compiles fine, since the
// relevant rules that triggered the overflow were removed.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![allow(dead_code)]
use std::marker::PhantomData;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
enum Outer<T> {
A(T)
// In this regression test we check that a path pattern referring to a unit variant
// through a type alias is successful in inferring the generic argument.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
enum Opt<T> {
N,
-// ignore-musl
-// ignore-x86
+// ignore-x86 FIXME: missing sysroot spans (#53081)
// error-pattern: reached the type-length limit while instantiating
// Test that the type length limit can be changed.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// rust-lang/rust#55810: types for a binding in a match arm can be
// inferred from arms that come later in the match.
let can_contain = contents.contains("// ignore-tidy-") ||
contents.contains("# ignore-tidy-");
let mut skip_cr = contains_ignore_directive(can_contain, &contents, "cr");
+ let mut skip_undocumented_unsafe =
+ contains_ignore_directive(can_contain, &contents, "undocumented-unsafe");
let mut skip_tab = contains_ignore_directive(can_contain, &contents, "tab");
let mut skip_line_length = contains_ignore_directive(can_contain, &contents, "linelength");
let mut skip_file_length = contains_ignore_directive(can_contain, &contents, "filelength");
let mut leading_new_lines = false;
let mut trailing_new_lines = 0;
let mut lines = 0;
+ let mut last_safety_comment = false;
for (i, line) in contents.split('\n').enumerate() {
let mut err = |msg: &str| {
tidy_error!(bad, "{}:{}: {}", file.display(), i + 1, msg);
err("XXX is deprecated; use FIXME")
}
}
+ let is_test = || file.components().any(|c| c.as_os_str() == "tests");
+ // for now we just check libcore
+ if line.contains("unsafe {") && !line.trim().starts_with("//") && !last_safety_comment {
+ if file.components().any(|c| c.as_os_str() == "libcore") && !is_test() {
+ suppressible_tidy_err!(err, skip_undocumented_unsafe, "undocumented unsafe");
+ }
+ }
+ if line.contains("// SAFETY: ") || line.contains("// Safety: ") {
+ last_safety_comment = true;
+ } else if line.trim().starts_with("//") || line.trim().is_empty() {
+ // keep previous value
+ } else {
+ last_safety_comment = false;
+ }
if (line.starts_with("// Copyright") ||
line.starts_with("# Copyright") ||
line.starts_with("Copyright"))