+Version 1.29.0 (2018-09-13)
+===========================
+
+Compiler
+--------
+- [Bumped minimum LLVM version to 5.0.][51899]
+- [Added `powerpc64le-unknown-linux-musl` target.][51619]
+- [Added `aarch64-unknown-hermit` and `x86_64-unknown-hermit` targets.][52861]
+
+Libraries
+---------
+- [`Once::call_once` now no longer requires `Once` to be `'static`.][52239]
+- [`BuildHasherDefault` now implements `PartialEq` and `Eq`.][52402]
+- [`Box<CStr>`, `Box<OsStr>`, and `Box<Path>` now implement `Clone`.][51912]
+- [Implemented `PartialEq<&str>` for `OsString` and `PartialEq<OsString>`
+ for `&str`.][51178]
+- [`Cell<T>` now allows `T` to be unsized.][50494]
+- [`SocketAddr` is now stable on Redox.][52656]
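
As a quick illustration of two of the new trait implementations above, this minimal sketch (not from the release notes themselves) exercises `PartialEq<&str>` for `OsString` and `Clone` for `Box<Path>`:

```rust
use std::ffi::OsString;
use std::path::Path;

fn main() {
    // `OsString` can now be compared directly against `&str`.
    let os = OsString::from("hello");
    assert_eq!(os, "hello");

    // `Box<Path>` (like `Box<CStr>` and `Box<OsStr>`) now implements `Clone`.
    let boxed: Box<Path> = Path::new("/tmp/example").into();
    let cloned = boxed.clone();
    assert_eq!(&*boxed, &*cloned);
}
```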
+
+Stabilized APIs
+---------------
+- [`Arc::downcast`]
+- [`Iterator::flatten`]
+- [`Rc::downcast`]
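
A minimal sketch of the newly stabilized `Iterator::flatten`, which removes one level of nesting from an iterator of iterables:

```rust
fn main() {
    let nested = vec![vec![1, 2], vec![3, 4]];
    // `flatten` replaces the common `flat_map(|v| v)` pattern.
    let flat: Vec<i32> = nested.into_iter().flatten().collect();
    assert_eq!(flat, [1, 2, 3, 4]);
}
```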
+
+Cargo
+-----
+- [Cargo can silently fix some bad lockfiles.][cargo/5831] You can use
+  `--locked` to disable this behaviour.
+- [`cargo-install` will now allow you to cross-compile an install
+  using `--target`.][cargo/5614]
+- [Added the `cargo-fix` subcommand to automatically move project code from
+  the 2015 edition to the 2018 edition.][cargo/5723]
+
+Misc
+----
+- [`rustdoc` now has the `--cap-lints` option which demotes all lints above
+ the specified level to that level.][52354] For example `--cap-lints warn`
+ will demote `deny` and `forbid` lints to `warn`.
+- [`rustc` and `rustdoc` will now exit with a code of `1` if compilation
+  fails, and `101` if there is a panic.][52197]
+
+Compatibility Notes
+-------------------
+- [`str::{slice_unchecked, slice_unchecked_mut}` are now deprecated.][51807]
+ Use `str::get_unchecked(begin..end)` instead.
+- [`std::env::home_dir` is now deprecated for its unintuitive behaviour.][51656]
+ Consider using the `home_dir` function from
+ https://crates.io/crates/dirs instead.
+- [`rustc` will no longer silently ignore invalid data in a target spec.][52330]
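
A short sketch of the suggested migration away from the deprecated `slice_unchecked`; the caller must still guarantee the range is in bounds and lies on character boundaries:

```rust
fn main() {
    let s = "hello world";
    // Previously: s.slice_unchecked(0, 5)
    let hello = unsafe { s.get_unchecked(0..5) };
    assert_eq!(hello, "hello");
}
```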
+
+[52861]: https://github.com/rust-lang/rust/pull/52861/
+[52656]: https://github.com/rust-lang/rust/pull/52656/
+[52239]: https://github.com/rust-lang/rust/pull/52239/
+[52330]: https://github.com/rust-lang/rust/pull/52330/
+[52354]: https://github.com/rust-lang/rust/pull/52354/
+[52402]: https://github.com/rust-lang/rust/pull/52402/
+[52103]: https://github.com/rust-lang/rust/pull/52103/
+[52197]: https://github.com/rust-lang/rust/pull/52197/
+[51807]: https://github.com/rust-lang/rust/pull/51807/
+[51899]: https://github.com/rust-lang/rust/pull/51899/
+[51912]: https://github.com/rust-lang/rust/pull/51912/
+[51511]: https://github.com/rust-lang/rust/pull/51511/
+[51619]: https://github.com/rust-lang/rust/pull/51619/
+[51656]: https://github.com/rust-lang/rust/pull/51656/
+[51178]: https://github.com/rust-lang/rust/pull/51178/
+[50494]: https://github.com/rust-lang/rust/pull/50494/
+[cargo/5614]: https://github.com/rust-lang/cargo/pull/5614/
+[cargo/5723]: https://github.com/rust-lang/cargo/pull/5723/
+[cargo/5831]: https://github.com/rust-lang/cargo/pull/5831/
+[`Arc::downcast`]: https://doc.rust-lang.org/std/sync/struct.Arc.html#method.downcast
+[`Iterator::flatten`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.flatten
+[`Rc::downcast`]: https://doc.rust-lang.org/std/rc/struct.Rc.html#method.downcast
+
+
Version 1.28.0 (2018-08-02)
===========================
export CFLAGS="-fPIC $CFLAGS"
-# FIXME: remove the patch when upate to 1.1.20
+# FIXME: remove the patch when updating to 1.1.20
MUSL=musl-1.1.19
# may have been downloaded in a previous run
# Use Rust
-Once you've gotten familliar with the language, these resources can help you
+Once you've gotten familiar with the language, these resources can help you
when you're actually using it day-to-day.
## The Standard Library
This flag lets you control how many threads are used when doing
code generation.
-Increasing paralellism may speed up compile times, but may also
+Increasing parallelism may speed up compile times, but may also
produce slower code.
## remark
pub struct S(u8);
fn f() {
- // this is trying to use S from the 'use' line, but becuase the `u8` is
+ // this is trying to use S from the 'use' line, but because the `u8` is
// not pub, it is private
::S;
}
## mutable-transmutes
-This lint catches transmuting from `&T` to `&mut T` becuase it is undefined
+This lint catches transmuting from `&T` to `&mut T` because it is undefined
behavior. Some example code that triggers this lint:
```rust,ignore
# Unstable features
-Rustdoc is under active developement, and like the Rust compiler, some features are only available
+Rustdoc is under active development, and like the Rust compiler, some features are only available
on the nightly releases. Some of these are new and need some more testing before they're able to get
released to the world at large, and some of them are tied to features in the Rust compiler that are
themselves unstable. Several features here require a matching `#![feature(...)]` attribute to
------------------------
The `infer_outlives_requirements` feature indicates that certain
-outlives requirements can be infered by the compiler rather than
+outlives requirements can be inferred by the compiler rather than
stating them explicitly.
For example, currently generic struct definitions that contain
references, require where-clauses of the form T: 'a. By using
-this feature the outlives predicates will be infered, although
+this feature the outlives predicates will be inferred, although
they may still be written explicitly.
```rust,ignore (pseudo-Rust)
------------------------
The `infer_static_outlives_requirements` feature indicates that certain
-`'static` outlives requirements can be infered by the compiler rather than
+`'static` outlives requirements can be inferred by the compiler rather than
stating them explicitly.
Note: It is an accompanying feature to `infer_outlives_requirements`,
For example, currently generic struct definitions that contain
references, require where-clauses of the form T: 'static. By using
-this feature the outlives predicates will be infered, although
+this feature the outlives predicates will be inferred, although
they may still be written explicitly.
```rust,ignore (pseudo-Rust)
+++ /dev/null
-# `macro_vis_matcher`
-
-The tracking issue for this feature is: [#41022]
-
-With this feature gate enabled, the [list of fragment specifiers][frags] gains one more entry:
-
-* `vis`: a visibility qualifier. Examples: nothing (default visibility); `pub`; `pub(crate)`.
-
-A `vis` variable may be followed by a comma, ident, type, or path.
-
-[#41022]: https://github.com/rust-lang/rust/issues/41022
-[frags]: ../book/first-edition/macros.html#syntactic-requirements
-
-------------------------
```rust,ignore
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
#![feature(macro_at_most_once_rep)]
extern crate syntax;
# Set the environment variable `RUST_GDB` to overwrite the call to a
# different/specific command (defaults to `gdb`).
RUST_GDB="${RUST_GDB:-gdb}"
-PYTHONPATH="$PYTHONPATH:$GDB_PYTHON_MODULE_DIRECTORY" ${RUST_GDB} \
+PYTHONPATH="$PYTHONPATH:$GDB_PYTHON_MODULE_DIRECTORY" exec ${RUST_GDB} \
--directory="$GDB_PYTHON_MODULE_DIRECTORY" \
-iex "add-auto-load-safe-path $GDB_PYTHON_MODULE_DIRECTORY" \
"$@"
echo "***"
fi
-# Create a tempfile containing the LLDB script we want to execute on startup
-TMPFILE=`mktemp /tmp/rust-lldb-commands.XXXXXX`
-
-# Make sure to delete the tempfile no matter what
-trap "rm -f $TMPFILE; exit" INT TERM EXIT
-
# Find out where to look for the pretty printer Python module
RUSTC_SYSROOT=`rustc --print sysroot`
-# Write the LLDB script to the tempfile
-echo "command script import \"$RUSTC_SYSROOT/lib/rustlib/etc/lldb_rust_formatters.py\"" >> $TMPFILE
-echo "type summary add --no-value --python-function lldb_rust_formatters.print_val -x \".*\" --category Rust" >> $TMPFILE
-echo "type category enable Rust" >> $TMPFILE
+# Prepare commands that will be loaded before any file on the command line has been loaded
+script_import="command script import \"$RUSTC_SYSROOT/lib/rustlib/etc/lldb_rust_formatters.py\""
+category_definition="type summary add --no-value --python-function lldb_rust_formatters.print_val -x \".*\" --category Rust"
+category_enable="type category enable Rust"
-# Call LLDB with the script added to the argument list
-lldb --source-before-file="$TMPFILE" "$@"
+# Call LLDB with the commands added to the argument list
+exec lldb --one-line-before-file="$script_import" \
+ --one-line-before-file="$category_definition" \
+ --one-line-before-file="$category_enable" \
+ "$@"
.unwrap_or_else(|_| handle_alloc_error(layout));
let mut i = ptr.cast::<u8>().as_ptr();
- let end = i.offset(layout.size() as isize);
+ let end = i.add(layout.size());
while i < end {
assert_eq!(*i, 0);
i = i.offset(1);
Box(Unique::new_unchecked(raw))
}
- /// Consumes the `Box`, returning the wrapped raw pointer.
+ /// Consumes the `Box`, returning a wrapped raw pointer.
+ ///
+ /// The pointer will be properly aligned and non-null.
///
/// After calling this function, the caller is responsible for the
/// memory previously managed by the `Box`. In particular, the
impl<T> Drop for BoxBuilder<T> {
fn drop(&mut self) {
let mut data = self.data.ptr();
- let max = unsafe { data.offset(self.len as isize) };
+ let max = unsafe { data.add(self.len) };
while data != max {
unsafe {
let new_len = self.node.len() - self.idx - 1;
ptr::copy_nonoverlapping(
- self.node.keys().as_ptr().offset(self.idx as isize + 1),
+ self.node.keys().as_ptr().add(self.idx + 1),
new_node.keys.as_mut_ptr(),
new_len
);
ptr::copy_nonoverlapping(
- self.node.vals().as_ptr().offset(self.idx as isize + 1),
+ self.node.vals().as_ptr().add(self.idx + 1),
new_node.vals.as_mut_ptr(),
new_len
);
let new_len = self.node.len() - self.idx - 1;
ptr::copy_nonoverlapping(
- self.node.keys().as_ptr().offset(self.idx as isize + 1),
+ self.node.keys().as_ptr().add(self.idx + 1),
new_node.data.keys.as_mut_ptr(),
new_len
);
ptr::copy_nonoverlapping(
- self.node.vals().as_ptr().offset(self.idx as isize + 1),
+ self.node.vals().as_ptr().add(self.idx + 1),
new_node.data.vals.as_mut_ptr(),
new_len
);
ptr::copy_nonoverlapping(
- self.node.as_internal().edges.as_ptr().offset(self.idx as isize + 1),
+ self.node.as_internal().edges.as_ptr().add(self.idx + 1),
new_node.edges.as_mut_ptr(),
new_len + 1
);
slice_remove(self.node.keys_mut(), self.idx));
ptr::copy_nonoverlapping(
right_node.keys().as_ptr(),
- left_node.keys_mut().as_mut_ptr().offset(left_len as isize + 1),
+ left_node.keys_mut().as_mut_ptr().add(left_len + 1),
right_len
);
ptr::write(left_node.vals_mut().get_unchecked_mut(left_len),
slice_remove(self.node.vals_mut(), self.idx));
ptr::copy_nonoverlapping(
right_node.vals().as_ptr(),
- left_node.vals_mut().as_mut_ptr().offset(left_len as isize + 1),
+ left_node.vals_mut().as_mut_ptr().add(left_len + 1),
right_len
);
.as_internal_mut()
.edges
.as_mut_ptr()
- .offset(left_len as isize + 1),
+ .add(left_len + 1),
right_len + 1
);
// Make room for stolen elements in the right child.
ptr::copy(right_kv.0,
- right_kv.0.offset(count as isize),
+ right_kv.0.add(count),
right_len);
ptr::copy(right_kv.1,
- right_kv.1.offset(count as isize),
+ right_kv.1.add(count),
right_len);
// Move elements from the left child to the right one.
// Make room for stolen edges.
let right_edges = right.reborrow_mut().as_internal_mut().edges.as_mut_ptr();
ptr::copy(right_edges,
- right_edges.offset(count as isize),
+ right_edges.add(count),
right_len + 1);
right.correct_childrens_parent_links(count, count + right_len + 1);
move_kv(right_kv, count - 1, parent_kv, 0, 1);
// Fix right indexing
- ptr::copy(right_kv.0.offset(count as isize),
+ ptr::copy(right_kv.0.add(count),
right_kv.0,
new_right_len);
- ptr::copy(right_kv.1.offset(count as isize),
+ ptr::copy(right_kv.1.add(count),
right_kv.1,
new_right_len);
}
// Fix right indexing.
let right_edges = right.reborrow_mut().as_internal_mut().edges.as_mut_ptr();
- ptr::copy(right_edges.offset(count as isize),
+ ptr::copy(right_edges.add(count),
right_edges,
new_right_len + 1);
right.correct_childrens_parent_links(0, new_right_len + 1);
dest: (*mut K, *mut V), dest_offset: usize,
count: usize)
{
- ptr::copy_nonoverlapping(source.0.offset(source_offset as isize),
- dest.0.offset(dest_offset as isize),
+ ptr::copy_nonoverlapping(source.0.add(source_offset),
+ dest.0.add(dest_offset),
count);
- ptr::copy_nonoverlapping(source.1.offset(source_offset as isize),
- dest.1.offset(dest_offset as isize),
+ ptr::copy_nonoverlapping(source.1.add(source_offset),
+ dest.1.add(dest_offset),
count);
}
{
let source_ptr = source.as_internal_mut().edges.as_mut_ptr();
let dest_ptr = dest.as_internal_mut().edges.as_mut_ptr();
- ptr::copy_nonoverlapping(source_ptr.offset(source_offset as isize),
- dest_ptr.offset(dest_offset as isize),
+ ptr::copy_nonoverlapping(source_ptr.add(source_offset),
+ dest_ptr.add(dest_offset),
count);
dest.correct_childrens_parent_links(dest_offset, dest_offset + count);
}
unsafe fn slice_insert<T>(slice: &mut [T], idx: usize, val: T) {
ptr::copy(
- slice.as_ptr().offset(idx as isize),
- slice.as_mut_ptr().offset(idx as isize + 1),
+ slice.as_ptr().add(idx),
+ slice.as_mut_ptr().add(idx + 1),
slice.len() - idx
);
ptr::write(slice.get_unchecked_mut(idx), val);
unsafe fn slice_remove<T>(slice: &mut [T], idx: usize) -> T {
let ret = ptr::read(slice.get_unchecked(idx));
ptr::copy(
- slice.as_ptr().offset(idx as isize + 1),
- slice.as_mut_ptr().offset(idx as isize),
+ slice.as_ptr().add(idx + 1),
+ slice.as_mut_ptr().add(idx),
slice.len() - idx - 1
);
ret
/// Moves an element out of the buffer
#[inline]
unsafe fn buffer_read(&mut self, off: usize) -> T {
- ptr::read(self.ptr().offset(off as isize))
+ ptr::read(self.ptr().add(off))
}
/// Writes an element into the buffer, moving it.
#[inline]
unsafe fn buffer_write(&mut self, off: usize, value: T) {
- ptr::write(self.ptr().offset(off as isize), value);
+ ptr::write(self.ptr().add(off), value);
}
/// Returns `true` if and only if the buffer is at full capacity.
src,
len,
self.cap());
- ptr::copy(self.ptr().offset(src as isize),
- self.ptr().offset(dst as isize),
+ ptr::copy(self.ptr().add(src),
+ self.ptr().add(dst),
len);
}
src,
len,
self.cap());
- ptr::copy_nonoverlapping(self.ptr().offset(src as isize),
- self.ptr().offset(dst as isize),
+ ptr::copy_nonoverlapping(self.ptr().add(src),
+ self.ptr().add(dst),
len);
}
pub fn get(&self, index: usize) -> Option<&T> {
if index < self.len() {
let idx = self.wrap_add(self.tail, index);
- unsafe { Some(&*self.ptr().offset(idx as isize)) }
+ unsafe { Some(&*self.ptr().add(idx)) }
} else {
None
}
pub fn get_mut(&mut self, index: usize) -> Option<&mut T> {
if index < self.len() {
let idx = self.wrap_add(self.tail, index);
- unsafe { Some(&mut *self.ptr().offset(idx as isize)) }
+ unsafe { Some(&mut *self.ptr().add(idx)) }
} else {
None
}
let ri = self.wrap_add(self.tail, i);
let rj = self.wrap_add(self.tail, j);
unsafe {
- ptr::swap(self.ptr().offset(ri as isize),
- self.ptr().offset(rj as isize))
+ ptr::swap(self.ptr().add(ri),
+ self.ptr().add(rj))
}
}
// `at` lies in the first half.
let amount_in_first = first_len - at;
- ptr::copy_nonoverlapping(first_half.as_ptr().offset(at as isize),
+ ptr::copy_nonoverlapping(first_half.as_ptr().add(at),
other.ptr(),
amount_in_first);
// just take all of the second half.
ptr::copy_nonoverlapping(second_half.as_ptr(),
- other.ptr().offset(amount_in_first as isize),
+ other.ptr().add(amount_in_first),
second_len);
} else {
// `at` lies in the second half, need to factor in the elements we skipped
// in the first half.
let offset = at - first_len;
let amount_in_second = second_len - offset;
- ptr::copy_nonoverlapping(second_half.as_ptr().offset(offset as isize),
+ ptr::copy_nonoverlapping(second_half.as_ptr().add(offset),
other.ptr(),
amount_in_second);
}
// Need to move the ring to the front of the buffer, as vec will expect this.
if other.is_contiguous() {
- ptr::copy(buf.offset(tail as isize), buf, len);
+ ptr::copy(buf.add(tail), buf, len);
} else {
if (tail - head) >= cmp::min(cap - tail, head) {
// There is enough free space in the centre for the shortest block so we can
// do this in at most three copy moves.
if (cap - tail) > head {
// right hand block is the long one; move that enough for the left
- ptr::copy(buf.offset(tail as isize),
- buf.offset((tail - head) as isize),
+ ptr::copy(buf.add(tail),
+ buf.add(tail - head),
cap - tail);
// copy left in the end
- ptr::copy(buf, buf.offset((cap - head) as isize), head);
+ ptr::copy(buf, buf.add(cap - head), head);
// shift the new thing to the start
- ptr::copy(buf.offset((tail - head) as isize), buf, len);
+ ptr::copy(buf.add(tail - head), buf, len);
} else {
// left hand block is the long one, we can do it in two!
- ptr::copy(buf, buf.offset((cap - tail) as isize), head);
- ptr::copy(buf.offset(tail as isize), buf, cap - tail);
+ ptr::copy(buf, buf.add(cap - tail), head);
+ ptr::copy(buf.add(tail), buf, cap - tail);
}
} else {
// Need to use N swaps to move the ring
for i in left_edge..right_edge {
right_offset = (i - left_edge) % (cap - right_edge);
let src: isize = (right_edge + right_offset) as isize;
- ptr::swap(buf.offset(i as isize), buf.offset(src));
+ ptr::swap(buf.add(i), buf.offset(src));
}
let n_ops = right_edge - left_edge;
left_edge += n_ops;
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![unstable(feature = "raw_vec_internals", reason = "implemention detail", issue = "0")]
+#![unstable(feature = "raw_vec_internals", reason = "implementation detail", issue = "0")]
#![doc(hidden)]
use core::cmp;
/// // double would have aborted or panicked if the len exceeded
/// // `isize::MAX` so this is safe to do unchecked now.
/// unsafe {
- /// ptr::write(self.buf.ptr().offset(self.len as isize), elem);
+ /// ptr::write(self.buf.ptr().add(self.len), elem);
/// }
/// self.len += 1;
/// }
/// // `isize::MAX` so this is safe to do unchecked now.
/// for x in elems {
/// unsafe {
- /// ptr::write(self.buf.ptr().offset(self.len as isize), x.clone());
+ /// ptr::write(self.buf.ptr().add(self.len), x.clone());
/// }
/// self.len += 1;
/// }
};
for (i, item) in v.iter().enumerate() {
- ptr::write(elems.offset(i as isize), item.clone());
+ ptr::write(elems.add(i), item.clone());
guard.n_elems += 1;
}
#[inline]
fn inc_strong(&self) {
- self.inner().strong.set(self.strong().checked_add(1).unwrap_or_else(|| unsafe { abort() }));
+ // We want to abort on overflow instead of dropping the value.
+ // The reference count will never be zero when this is called;
+ // nevertheless, we insert an abort here to hint LLVM at
+ // an otherwise missed optimization.
+ if self.strong() == 0 || self.strong() == usize::max_value() {
+ unsafe { abort(); }
+ }
+ self.inner().strong.set(self.strong() + 1);
}
#[inline]
#[inline]
fn inc_weak(&self) {
- self.inner().weak.set(self.weak().checked_add(1).unwrap_or_else(|| unsafe { abort() }));
+ // We want to abort on overflow instead of dropping the value.
+ // The reference count will never be zero when this is called;
+ // nevertheless, we insert an abort here to hint LLVM at
+ // an otherwise missed optimization.
+ if self.weak() == 0 || self.weak() == usize::max_value() {
+ unsafe { abort(); }
+ }
+ self.inner().weak.set(self.weak() + 1);
}
#[inline]
{
let len = v.len();
let v = v.as_mut_ptr();
- let v_mid = v.offset(mid as isize);
- let v_end = v.offset(len as isize);
+ let v_mid = v.add(mid);
+ let v_end = v.add(len);
// The merge process first copies the shorter run into `buf`. Then it traces the newly copied
// run and the longer run forwards (or backwards), comparing their next unconsumed elements and
ptr::copy_nonoverlapping(v, buf, mid);
hole = MergeHole {
start: buf,
- end: buf.offset(mid as isize),
+ end: buf.add(mid),
dest: v,
};
ptr::copy_nonoverlapping(v_mid, buf, len - mid);
hole = MergeHole {
start: buf,
- end: buf.offset((len - mid) as isize),
+ end: buf.add(len - mid),
dest: v_mid,
};
let next = idx + ch.len_utf8();
let len = self.len();
unsafe {
- ptr::copy(self.vec.as_ptr().offset(next as isize),
- self.vec.as_mut_ptr().offset(idx as isize),
+ ptr::copy(self.vec.as_ptr().add(next),
+ self.vec.as_mut_ptr().add(idx),
len - next);
self.vec.set_len(len - (next - idx));
}
del_bytes += ch_len;
} else if del_bytes > 0 {
unsafe {
- ptr::copy(self.vec.as_ptr().offset(idx as isize),
- self.vec.as_mut_ptr().offset((idx - del_bytes) as isize),
+ ptr::copy(self.vec.as_ptr().add(idx),
+ self.vec.as_mut_ptr().add(idx - del_bytes),
ch_len);
}
}
let amt = bytes.len();
self.vec.reserve(amt);
- ptr::copy(self.vec.as_ptr().offset(idx as isize),
- self.vec.as_mut_ptr().offset((idx + amt) as isize),
+ ptr::copy(self.vec.as_ptr().add(idx),
+ self.vec.as_mut_ptr().add(idx + amt),
len - idx);
ptr::copy(bytes.as_ptr(),
- self.vec.as_mut_ptr().offset(idx as isize),
+ self.vec.as_mut_ptr().add(idx),
amt);
self.vec.set_len(len + amt);
}
};
for (i, item) in v.iter().enumerate() {
- ptr::write(elems.offset(i as isize), item.clone());
+ ptr::write(elems.add(i), item.clone());
guard.n_elems += 1;
}
pub fn truncate(&mut self, len: usize) {
let current_len = self.len;
unsafe {
- let mut ptr = self.as_mut_ptr().offset(self.len as isize);
+ let mut ptr = self.as_mut_ptr().add(self.len);
// Set the final length at the end, keeping in mind that
// dropping an element might panic. Works around a missed
// optimization, as seen in the following issue:
// infallible
// The spot to put the new value
{
- let p = self.as_mut_ptr().offset(index as isize);
+ let p = self.as_mut_ptr().add(index);
// Shift everything over to make space. (Duplicating the
// `index`th element into two consecutive places.)
ptr::copy(p, p.offset(1), len - index);
let ret;
{
// the place we are taking from.
- let ptr = self.as_mut_ptr().offset(index as isize);
+ let ptr = self.as_mut_ptr().add(index);
// copy it out, unsafely having a copy of the value on
// the stack and in the vector at the same time.
ret = ptr::read(ptr);
let mut w: usize = 1;
while r < ln {
- let p_r = p.offset(r as isize);
- let p_wm1 = p.offset((w - 1) as isize);
+ let p_r = p.add(r);
+ let p_wm1 = p.add(w - 1);
if !same_bucket(&mut *p_r, &mut *p_wm1) {
if r != w {
let p_w = p_wm1.offset(1);
self.reserve(1);
}
unsafe {
- let end = self.as_mut_ptr().offset(self.len as isize);
+ let end = self.as_mut_ptr().add(self.len);
ptr::write(end, value);
self.len += 1;
}
self.set_len(start);
// Use the borrow in the IterMut to indicate borrowing behavior of the
// whole Drain iterator (like &mut T).
- let range_slice = slice::from_raw_parts_mut(self.as_mut_ptr().offset(start as isize),
+ let range_slice = slice::from_raw_parts_mut(self.as_mut_ptr().add(start),
end - start);
Drain {
tail_start: end,
self.set_len(at);
other.set_len(other_len);
- ptr::copy_nonoverlapping(self.as_ptr().offset(at as isize),
+ ptr::copy_nonoverlapping(self.as_ptr().add(at),
other.as_mut_ptr(),
other.len());
}
self.reserve(n);
unsafe {
- let mut ptr = self.as_mut_ptr().offset(self.len() as isize);
+ let mut ptr = self.as_mut_ptr().add(self.len());
// Use SetLenOnDrop to work around bug where compiler
// may not realize the store through `ptr` through self.set_len()
// don't alias.
let end = if mem::size_of::<T>() == 0 {
arith_offset(begin as *const i8, self.len() as isize) as *const T
} else {
- begin.offset(self.len() as isize) as *const T
+ begin.add(self.len()) as *const T
};
let cap = self.buf.cap();
mem::forget(self);
if let Some(additional) = high {
self.reserve(additional);
unsafe {
- let mut ptr = self.as_mut_ptr().offset(self.len() as isize);
+ let mut ptr = self.as_mut_ptr().add(self.len());
let mut local_len = SetLenOnDrop::new(&mut self.len);
for element in iterator {
ptr::write(ptr, element);
let start = source_vec.len();
let tail = self.tail_start;
if tail != start {
- let src = source_vec.as_ptr().offset(tail as isize);
- let dst = source_vec.as_mut_ptr().offset(start as isize);
+ let src = source_vec.as_ptr().add(tail);
+ let dst = source_vec.as_mut_ptr().add(start);
ptr::copy(src, dst, self.tail_len);
}
source_vec.set_len(start + self.tail_len);
let range_start = vec.len;
let range_end = self.tail_start;
let range_slice = slice::from_raw_parts_mut(
- vec.as_mut_ptr().offset(range_start as isize),
+ vec.as_mut_ptr().add(range_start),
range_end - range_start);
for place in range_slice {
vec.buf.reserve(used_capacity, extra_capacity);
let new_tail_start = self.tail_start + extra_capacity;
- let src = vec.as_ptr().offset(self.tail_start as isize);
- let dst = vec.as_mut_ptr().offset(new_tail_start as isize);
+ let src = vec.as_ptr().add(self.tail_start);
+ let dst = vec.as_mut_ptr().add(new_tail_start);
ptr::copy(src, dst, self.tail_len);
self.tail_start = new_tail_start;
}
}
unsafe fn align_ptr(ptr: *mut u8, align: usize) -> *mut u8 {
- let aligned = ptr.offset((align - (ptr as usize & (align - 1))) as isize);
+ let aligned = ptr.add(align - (ptr as usize & (align - 1)));
*get_header(aligned) = Header(ptr);
aligned
}
// A pointer as large as possible for zero-sized elements.
!0 as *mut T
} else {
- self.start().offset(self.storage.cap() as isize)
+ self.start().add(self.storage.cap())
}
}
}
unsafe {
let start_ptr = self.ptr.get();
let arena_slice = slice::from_raw_parts_mut(start_ptr, slice.len());
- self.ptr.set(start_ptr.offset(arena_slice.len() as isize));
+ self.ptr.set(start_ptr.add(arena_slice.len()));
arena_slice.copy_from_slice(slice);
arena_slice
}
/// - The `Future` trait is currently not object safe: The `Future::poll`
/// method makes use of the arbitrary self types feature and traits in which
/// this feature is used are currently not object safe due to current compiler
-/// limitations. (See tracking issue for arbitray self types for more
+/// limitations. (See tracking issue for arbitrary self types for more
/// information #44874)
pub struct LocalFutureObj<'a, T> {
ptr: *mut (),
/// - The `Future` trait is currently not object safe: The `Future::poll`
/// method makes use of the arbitrary self types feature and traits in which
/// this feature is used are currently not object safe due to current compiler
-/// limitations. (See tracking issue for arbitray self types for more
+/// limitations. (See tracking issue for arbitrary self types for more
/// information #44874)
pub struct FutureObj<'a, T>(LocalFutureObj<'a, T>);
/// // treat it as "dead", and therefore, you only have two real
/// // mutable slices.
/// (slice::from_raw_parts_mut(ptr, mid),
- /// slice::from_raw_parts_mut(ptr.offset(mid as isize), len - mid))
+ /// slice::from_raw_parts_mut(ptr.add(mid), len - mid))
/// }
/// }
/// ```
/// let ptr = vec.as_ptr();
/// Slice {
/// start: ptr,
-/// end: unsafe { ptr.offset(vec.len() as isize) },
+/// end: unsafe { ptr.add(vec.len()) },
/// phantom: PhantomData,
/// }
/// }
unsafe impl<'a, T: ?Sized> Freeze for &'a T {}
unsafe impl<'a, T: ?Sized> Freeze for &'a mut T {}
-/// Types which can be moved out of a `PinMut`.
+/// Types which can be safely moved after being pinned.
///
-/// The `Unpin` trait is used to control the behavior of the [`PinMut`] type. If a
-/// type implements `Unpin`, it is safe to move a value of that type out of the
-/// `PinMut` pointer.
+/// Since Rust itself has no notion of immovable types, and will consider moves to always be safe,
+/// this trait cannot prevent types from moving by itself.
+///
+/// Instead it can be used to prevent moves through the type system,
+/// by controlling the behavior of special pointer types like [`PinMut`],
+/// which "pin" the type in place by not allowing it to be moved out of them.
+///
+/// Implementing this trait lifts the restrictions of pinning off a type,
+/// which then allows it to move out with functions such as [`replace`].
+///
+/// So this, for example, can only be done on types implementing `Unpin`:
+///
+/// ```rust
+/// #![feature(pin)]
+/// use std::mem::{PinMut, replace};
+///
+/// let mut string = "this".to_string();
+/// let mut pinned_string = PinMut::new(&mut string);
+///
+/// // dereferencing the pointer mutably is only possible because String implements Unpin
+/// replace(&mut *pinned_string, "other".to_string());
+/// ```
///
/// This trait is automatically implemented for almost every type.
///
/// [`PinMut`]: ../mem/struct.PinMut.html
+/// [`replace`]: ../mem/fn.replace.html
#[unstable(feature = "pin", issue = "49150")]
pub auto trait Unpin {}
}
}
+macro_rules! doc_comment {
+ ($x:expr, $($tt:tt)*) => {
+ #[doc = $x]
+ $($tt)*
+ };
+}
+
macro_rules! nonzero_integers {
( $( $Ty: ident($Int: ty); )+ ) => {
$(
- /// An integer that is known not to equal zero.
- ///
- /// This enables some memory layout optimization.
- /// For example, `Option<NonZeroU32>` is the same size as `u32`:
- ///
- /// ```rust
- /// use std::mem::size_of;
- /// assert_eq!(size_of::<Option<std::num::NonZeroU32>>(), size_of::<u32>());
- /// ```
- #[stable(feature = "nonzero", since = "1.28.0")]
- #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]
- #[repr(transparent)]
- pub struct $Ty(NonZero<$Int>);
+ doc_comment! {
+ concat!("An integer that is known not to equal zero.
+
+This enables some memory layout optimization.
+For example, `Option<", stringify!($Ty), ">` is the same size as `", stringify!($Int), "`:
+
+```rust
+use std::mem::size_of;
+assert_eq!(size_of::<Option<std::num::", stringify!($Ty), ">>(), size_of::<", stringify!($Int),
+">());
+```"),
+ #[stable(feature = "nonzero", since = "1.28.0")]
+ #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]
+ #[repr(transparent)]
+ pub struct $Ty(NonZero<$Int>);
+ }
impl $Ty {
/// Create a non-zero without checking the value.
pub mod bignum;
pub mod diy_float;
-macro_rules! doc_comment {
- ($x:expr, $($tt:tt)*) => {
- #[doc = $x]
- $($tt)*
- };
-}
-
mod wrapping;
// `Int` + `SignedInt` implemented for signed integers
#[lang = "fn"]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_paren_sugar]
+#[rustc_on_unimplemented(
+    on(Args="()", note="wrap the `{Self}` in a closure with no arguments: `|| {{ /* code */ }}`"),
+ message="expected a `{Fn}<{Args}>` closure, found `{Self}`",
+ label="expected an `Fn<{Args}>` closure, found `{Self}`",
+)]
#[fundamental] // so that regex can rely that `&str: !FnMut`
pub trait Fn<Args> : FnMut<Args> {
/// Performs the call operation.
#[lang = "fn_mut"]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_paren_sugar]
+#[rustc_on_unimplemented(
+    on(Args="()", note="wrap the `{Self}` in a closure with no arguments: `|| {{ /* code */ }}`"),
+ message="expected a `{FnMut}<{Args}>` closure, found `{Self}`",
+ label="expected an `FnMut<{Args}>` closure, found `{Self}`",
+)]
#[fundamental] // so that regex can rely that `&str: !FnMut`
pub trait FnMut<Args> : FnOnce<Args> {
/// Performs the call operation.
#[lang = "fn_once"]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_paren_sugar]
+#[rustc_on_unimplemented(
+    on(Args="()", note="wrap the `{Self}` in a closure with no arguments: `|| {{ /* code */ }}`"),
+ message="expected a `{FnOnce}<{Args}>` closure, found `{Self}`",
+ label="expected an `FnOnce<{Args}>` closure, found `{Self}`",
+)]
#[fundamental] // so that regex can rely that `&str: !FnMut`
pub trait FnOnce<Args> {
/// The returned type after the call operator is used.
// Declaring `t` here avoids aligning the stack when this loop is unused
let mut t: Block = mem::uninitialized();
let t = &mut t as *mut _ as *mut u8;
- let x = x.offset(i as isize);
- let y = y.offset(i as isize);
+ let x = x.add(i);
+ let y = y.add(i);
// Swap a block of bytes of x & y, using t as a temporary buffer
// This should be optimized into efficient SIMD operations where available
let rem = len - i;
let t = &mut t as *mut _ as *mut u8;
- let x = x.offset(i as isize);
- let y = y.offset(i as isize);
+ let x = x.add(i);
+ let y = y.add(i);
copy_nonoverlapping(x, t, rem);
copy_nonoverlapping(y, x, rem);
    /// The compiler and standard library generally try to ensure allocations
/// never reach a size where an offset is a concern. For instance, `Vec`
/// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().offset(vec.len() as isize)` is always safe.
+ /// `vec.as_ptr().add(vec.len())` is always safe.
///
/// Most platforms fundamentally can't even construct such an allocation.
/// For instance, no known 64-bit platform can ever serve a request
/// let ptr = &x[n] as *const u8;
/// let offset = ptr.align_offset(align_of::<u16>());
/// if offset < x.len() - n - 1 {
- /// let u16_ptr = ptr.offset(offset as isize) as *const u16;
+ /// let u16_ptr = ptr.add(offset) as *const u16;
/// assert_ne!(*u16_ptr, 500);
/// } else {
/// // while the pointer can be aligned via `offset`, it would point
/// The compiler and standard library generally try to ensure allocations
/// never reach a size where an offset is a concern. For instance, `Vec`
/// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
- /// `vec.as_ptr().offset(vec.len() as isize)` is always safe.
+ /// `vec.as_ptr().add(vec.len())` is always safe.
///
/// Most platforms fundamentally can't even construct such an allocation.
/// For instance, no known 64-bit platform can ever serve a request
/// let ptr = &x[n] as *const u8;
/// let offset = ptr.align_offset(align_of::<u16>());
/// if offset < x.len() - n - 1 {
- /// let u16_ptr = ptr.offset(offset as isize) as *const u16;
+ /// let u16_ptr = ptr.add(offset) as *const u16;
/// assert_ne!(*u16_ptr, 500);
/// } else {
/// // while the pointer can be aligned via `offset`, it would point
///
/// If we ever decide to make it possible to call the intrinsic with `a` that is not a
/// power-of-two, it will probably be more prudent to just change to a naive implementation rather
-/// than trying to adapt this to accomodate that change.
+/// than trying to adapt this to accommodate that change.
///
/// Any questions go to @nagisa.
#[lang="align_offset"]
if len >= 2 * usize_bytes {
while offset <= len - 2 * usize_bytes {
unsafe {
- let u = *(ptr.offset(offset as isize) as *const usize);
- let v = *(ptr.offset((offset + usize_bytes) as isize) as *const usize);
+ let u = *(ptr.add(offset) as *const usize);
+ let v = *(ptr.add(offset + usize_bytes) as *const usize);
// break if there is a matching byte
let zu = contains_zero_byte(u ^ repeated_x);
///
/// unsafe {
/// for i in 0..x.len() {
- /// assert_eq!(x.get_unchecked(i), &*x_ptr.offset(i as isize));
+ /// assert_eq!(x.get_unchecked(i), &*x_ptr.add(i));
/// }
/// }
/// ```
///
/// unsafe {
/// for i in 0..x.len() {
- /// *x_ptr.offset(i as isize) += 2;
+ /// *x_ptr.add(i) += 2;
/// }
/// }
/// assert_eq!(x, &[3, 4, 6]);
assume(!ptr.is_null());
let end = if mem::size_of::<T>() == 0 {
- (ptr as *const u8).wrapping_offset(self.len() as isize) as *const T
+ (ptr as *const u8).wrapping_add(self.len()) as *const T
} else {
- ptr.offset(self.len() as isize)
+ ptr.add(self.len())
};
Iter {
assume(!ptr.is_null());
let end = if mem::size_of::<T>() == 0 {
- (ptr as *mut u8).wrapping_offset(self.len() as isize) as *mut T
+ (ptr as *mut u8).wrapping_add(self.len()) as *mut T
} else {
- ptr.offset(self.len() as isize)
+ ptr.add(self.len())
};
IterMut {
assert!(mid <= len);
(from_raw_parts_mut(ptr, mid),
- from_raw_parts_mut(ptr.offset(mid as isize), len - mid))
+ from_raw_parts_mut(ptr.add(mid), len - mid))
}
}
unsafe {
let p = self.as_mut_ptr();
- rotate::ptr_rotate(mid, p.offset(mid as isize), k);
+ rotate::ptr_rotate(mid, p.add(mid), k);
}
}
unsafe {
let p = self.as_mut_ptr();
- rotate::ptr_rotate(mid, p.offset(mid as isize), k);
+ rotate::ptr_rotate(mid, p.add(mid), k);
}
}
}
}
- /// Function to calculate lenghts of the middle and trailing slice for `align_to{,_mut}`.
+ /// Function to calculate lengths of the middle and trailing slice for `align_to{,_mut}`.
fn align_to_offsets<U>(&self) -> (usize, usize) {
// What we are going to do with `rest` is figure out what multiple of `U`s fits in the
// lowest number of `T`s, and how many `T`s we need for each such "multiple".
(us_len, ts_len)
}
- /// Transmute the slice to a slice of another type, ensuring aligment of the types is
+ /// Transmute the slice to a slice of another type, ensuring alignment of the types is
/// maintained.
///
/// This method splits the slice into three distinct slices: prefix, correctly aligned middle
let (us_len, ts_len) = rest.align_to_offsets::<U>();
(left,
from_raw_parts(rest.as_ptr() as *const U, us_len),
- from_raw_parts(rest.as_ptr().offset((rest.len() - ts_len) as isize), ts_len))
+ from_raw_parts(rest.as_ptr().add(rest.len() - ts_len), ts_len))
}
}
- /// Transmute the slice to a slice of another type, ensuring aligment of the types is
+ /// Transmute the slice to a slice of another type, ensuring alignment of the types is
/// maintained.
///
/// This method splits the slice into three distinct slices: prefix, correctly aligned middle
let mut_ptr = rest.as_mut_ptr();
(left,
from_raw_parts_mut(mut_ptr as *mut U, us_len),
- from_raw_parts_mut(mut_ptr.offset((rest.len() - ts_len) as isize), ts_len))
+ from_raw_parts_mut(mut_ptr.add(rest.len() - ts_len), ts_len))
}
}
}
#[inline]
unsafe fn get_unchecked(self, slice: &[T]) -> &T {
- &*slice.as_ptr().offset(self as isize)
+ &*slice.as_ptr().add(self)
}
#[inline]
unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut T {
- &mut *slice.as_mut_ptr().offset(self as isize)
+ &mut *slice.as_mut_ptr().add(self)
}
#[inline]
#[inline]
unsafe fn get_unchecked(self, slice: &[T]) -> &[T] {
- from_raw_parts(slice.as_ptr().offset(self.start as isize), self.end - self.start)
+ from_raw_parts(slice.as_ptr().add(self.start), self.end - self.start)
}
#[inline]
unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] {
- from_raw_parts_mut(slice.as_mut_ptr().offset(self.start as isize), self.end - self.start)
+ from_raw_parts_mut(slice.as_mut_ptr().add(self.start), self.end - self.start)
}
#[inline]
}
// We are in bounds. `offset` does the right thing even for ZSTs.
unsafe {
- let elem = Some(& $( $mut_ )* *self.ptr.offset(n as isize));
+ let elem = Some(& $( $mut_ )* *self.ptr.add(n));
self.post_inc_start((n as isize).wrapping_add(1));
elem
}
#[doc(hidden)]
unsafe impl<'a, T> TrustedRandomAccess for Windows<'a, T> {
unsafe fn get_unchecked(&mut self, i: usize) -> &'a [T] {
- from_raw_parts(self.v.as_ptr().offset(i as isize), self.size)
+ from_raw_parts(self.v.as_ptr().add(i), self.size)
}
fn may_have_side_effect() -> bool { false }
}
None => self.v.len(),
Some(end) => cmp::min(end, self.v.len()),
};
- from_raw_parts(self.v.as_ptr().offset(start as isize), end - start)
+ from_raw_parts(self.v.as_ptr().add(start), end - start)
}
fn may_have_side_effect() -> bool { false }
}
None => self.v.len(),
Some(end) => cmp::min(end, self.v.len()),
};
- from_raw_parts_mut(self.v.as_mut_ptr().offset(start as isize), end - start)
+ from_raw_parts_mut(self.v.as_mut_ptr().add(start), end - start)
}
fn may_have_side_effect() -> bool { false }
}
unsafe impl<'a, T> TrustedRandomAccess for ExactChunks<'a, T> {
unsafe fn get_unchecked(&mut self, i: usize) -> &'a [T] {
let start = i * self.chunk_size;
- from_raw_parts(self.v.as_ptr().offset(start as isize), self.chunk_size)
+ from_raw_parts(self.v.as_ptr().add(start), self.chunk_size)
}
fn may_have_side_effect() -> bool { false }
}
unsafe impl<'a, T> TrustedRandomAccess for ExactChunksMut<'a, T> {
unsafe fn get_unchecked(&mut self, i: usize) -> &'a mut [T] {
let start = i * self.chunk_size;
- from_raw_parts_mut(self.v.as_mut_ptr().offset(start as isize), self.chunk_size)
+ from_raw_parts_mut(self.v.as_mut_ptr().add(start), self.chunk_size)
}
fn may_have_side_effect() -> bool { false }
}
#[doc(hidden)]
unsafe impl<'a, T> TrustedRandomAccess for Iter<'a, T> {
unsafe fn get_unchecked(&mut self, i: usize) -> &'a T {
- &*self.ptr.offset(i as isize)
+ &*self.ptr.add(i)
}
fn may_have_side_effect() -> bool { false }
}
#[doc(hidden)]
unsafe impl<'a, T> TrustedRandomAccess for IterMut<'a, T> {
unsafe fn get_unchecked(&mut self, i: usize) -> &'a mut T {
- &mut *self.ptr.offset(i as isize)
+ &mut *self.ptr.add(i)
}
fn may_have_side_effect() -> bool { false }
}
}
ptr::swap_nonoverlapping(
- mid.offset(-(left as isize)),
- mid.offset((right-delta) as isize),
+ mid.sub(left),
+ mid.add(right - delta),
delta);
if left <= right {
let rawarray = RawArray::new();
let buf = rawarray.ptr();
- let dim = mid.offset(-(left as isize)).offset(right as isize);
+ let dim = mid.sub(left).add(right);
if left <= right {
- ptr::copy_nonoverlapping(mid.offset(-(left as isize)), buf, left);
- ptr::copy(mid, mid.offset(-(left as isize)), right);
+ ptr::copy_nonoverlapping(mid.sub(left), buf, left);
+ ptr::copy(mid, mid.sub(left), right);
ptr::copy_nonoverlapping(buf, dim, left);
}
else {
ptr::copy_nonoverlapping(mid, buf, right);
- ptr::copy(mid.offset(-(left as isize)), dim, left);
- ptr::copy_nonoverlapping(buf, mid.offset(-(left as isize)), right);
+ ptr::copy(mid.sub(left), dim, left);
+ ptr::copy_nonoverlapping(buf, mid.sub(left), right);
}
}
// 3. `end` - End pointer into the `offsets` array.
// 4. `offsets` - Indices of out-of-order elements within the block.
- // The current block on the left side (from `l` to `l.offset(block_l)`).
+ // The current block on the left side (from `l` to `l.add(block_l)`).
let mut l = v.as_mut_ptr();
let mut block_l = BLOCK;
let mut start_l = ptr::null_mut();
let mut end_l = ptr::null_mut();
let mut offsets_l: [u8; BLOCK] = unsafe { mem::uninitialized() };
- // The current block on the right side (from `r.offset(-block_r)` to `r`).
- let mut r = unsafe { l.offset(v.len() as isize) };
+ // The current block on the right side (from `r.sub(block_r)` to `r`).
+ let mut r = unsafe { l.add(v.len()) };
let mut block_r = BLOCK;
let mut start_r = ptr::null_mut();
let mut end_r = ptr::null_mut();
let ptr = v.as_ptr();
let align = unsafe {
// the offset is safe, because `index` is guaranteed inbounds
- ptr.offset(index as isize).align_offset(usize_bytes)
+ ptr.add(index).align_offset(usize_bytes)
};
if align == 0 {
while index < blocks_end {
unsafe {
- let block = ptr.offset(index as isize) as *const usize;
+ let block = ptr.add(index) as *const usize;
// break if there is a nonascii byte
let zu = contains_nonascii(*block);
let zv = contains_nonascii(*block.offset(1));
}
#[inline]
unsafe fn get_unchecked(self, slice: &str) -> &Self::Output {
- let ptr = slice.as_ptr().offset(self.start as isize);
+ let ptr = slice.as_ptr().add(self.start);
let len = self.end - self.start;
super::from_utf8_unchecked(slice::from_raw_parts(ptr, len))
}
#[inline]
unsafe fn get_unchecked_mut(self, slice: &mut str) -> &mut Self::Output {
- let ptr = slice.as_ptr().offset(self.start as isize);
+ let ptr = slice.as_ptr().add(self.start);
let len = self.end - self.start;
super::from_utf8_unchecked_mut(slice::from_raw_parts_mut(ptr as *mut u8, len))
}
}
#[inline]
unsafe fn get_unchecked(self, slice: &str) -> &Self::Output {
- let ptr = slice.as_ptr().offset(self.start as isize);
+ let ptr = slice.as_ptr().add(self.start);
let len = slice.len() - self.start;
super::from_utf8_unchecked(slice::from_raw_parts(ptr, len))
}
#[inline]
unsafe fn get_unchecked_mut(self, slice: &mut str) -> &mut Self::Output {
- let ptr = slice.as_ptr().offset(self.start as isize);
+ let ptr = slice.as_ptr().add(self.start);
let len = slice.len() - self.start;
super::from_utf8_unchecked_mut(slice::from_raw_parts_mut(ptr as *mut u8, len))
}
unsafe {
(from_utf8_unchecked_mut(slice::from_raw_parts_mut(ptr, mid)),
from_utf8_unchecked_mut(slice::from_raw_parts_mut(
- ptr.offset(mid as isize),
+ ptr.add(mid),
len - mid
)))
}
style: Option<usize>,
/// How many newlines have been seen in the string so far, to adjust the error spans
seen_newlines: usize,
- /// Start and end byte offset of every successfuly parsed argument
+ /// Start and end byte offset of every successfully parsed argument
pub arg_places: Vec<(usize, usize)>,
}
// telling the backend to generate "misalignment-safe" code.
pub unsafe fn read<T: Copy>(&mut self) -> T {
let Unaligned(result) = *(self.ptr as *const Unaligned<T>);
- self.ptr = self.ptr.offset(mem::size_of::<T>() as isize);
+ self.ptr = self.ptr.add(mem::size_of::<T>());
result
}
#[repr(C)]
pub struct _ThrowInfo {
- pub attribues: c_uint,
+ pub attributes: c_uint,
pub pnfnUnwind: imp::ptr_t,
pub pForwardCompat: imp::ptr_t,
pub pCatchableTypeArray: imp::ptr_t,
}
static mut THROW_INFO: _ThrowInfo = _ThrowInfo {
- attribues: 0,
+ attributes: 0,
pnfnUnwind: ptr!(0),
pForwardCompat: ptr!(0),
pCatchableTypeArray: ptr!(0),
//!
//! This library, provided by the standard distribution, provides the types
//! consumed in the interfaces of procedurally defined macro definitions such as
-//! function-like macros `#[proc_macro]`, macro attribures `#[proc_macro_attribute]` and
+//! function-like macros `#[proc_macro]`, macro attributes `#[proc_macro_attribute]` and
//! custom derive attributes `#[proc_macro_derive]`.
//!
//! Note that this crate is intentionally bare-bones currently.
// queries). Making them anonymous avoids hashing the result, which
// may save a bit of time.
[anon] EraseRegionsTy { ty: Ty<'tcx> },
- [anon] ConstValueToAllocation { val: &'tcx ty::Const<'tcx> },
+ [anon] ConstToAllocation { val: &'tcx ty::Const<'tcx> },
[input] Freevars(DefId),
[input] MaybeUnusedTraitImport(DefId),
GenericArg::Type(t) => t.span,
}
}
+
+ pub fn id(&self) -> NodeId {
+ match self {
+ GenericArg::Lifetime(l) => l.id,
+ GenericArg::Type(t) => t.id,
+ }
+ }
}
#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
}
bug!("GenericArgs::inputs: not a `Fn(T) -> U`");
}
+
+ pub fn own_counts(&self) -> GenericParamCount {
+ // We could cache this as a property of `GenericParamCount`, but
+ // the aim is to refactor this away entirely eventually and the
+ // presence of this method will be a constant reminder.
+ let mut own_counts: GenericParamCount = Default::default();
+
+ for arg in &self.args {
+ match arg {
+ GenericArg::Lifetime(_) => own_counts.lifetimes += 1,
+ GenericArg::Type(_) => own_counts.types += 1,
+ };
+ }
+
+ own_counts
+ }
}
/// A modifier on a bound, currently this is only used for `?Sized`, where the
pub kind: GenericParamKind,
}
+#[derive(Default)]
pub struct GenericParamCount {
pub lifetimes: usize,
pub types: usize,
// We could cache this as a property of `GenericParamCount`, but
// the aim is to refactor this away entirely eventually and the
// presence of this method will be a constant reminder.
- let mut own_counts = GenericParamCount {
- lifetimes: 0,
- types: 0,
- };
+ let mut own_counts: GenericParamCount = Default::default();
for param in &self.params {
match param.kind {
Undef
});
-impl_stable_hash_for!(enum mir::interpret::Value {
- Scalar(v),
- ScalarPair(a, b),
- ByRef(ptr, align)
-});
-
impl_stable_hash_for!(struct mir::interpret::Pointer {
alloc_id,
offset
mod substitute;
/// A "canonicalized" type `V` is one where all free inference
-/// variables have been rewriten to "canonical vars". These are
+/// variables have been rewritten to "canonical vars". These are
/// numbered starting from 0 in order of first appearance.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, RustcDecodable, RustcEncodable)]
pub struct Canonical<'gcx, V> {
value.push_highlighted("<");
}
- // Output the lifetimes fot the first type
+ // Output the lifetimes for the first type
let lifetimes = sub.regions()
.map(|lifetime| {
let s = lifetime.to_string();
* we're not careful, it will succeed.
*
* The reason is that when we walk through the subtyping
- * algorith, we begin by replacing `'a` with a skolemized
+ * algorithm, we begin by replacing `'a` with a skolemized
* variable `'1`. We then have `fn(_#0t) <: fn(&'1 int)`. This
* can be made true by unifying `_#0t` with `&'1 int`. In the
* process, we create a fresh variable for the skolemized
#![feature(drain_filter)]
#![feature(iterator_find_map)]
#![cfg_attr(windows, feature(libc))]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(never_type)]
#![feature(exhaustive_patterns)]
#![feature(extern_types)]
// lonely orphan structs and enums looking for a better home
-#[derive(Clone, Debug, Copy)]
-pub struct LinkMeta {
- pub crate_hash: Svh,
-}
-
/// Where a crate came from on the local filesystem. One of these three options
/// must be non-None.
#[derive(PartialEq, Clone, Debug)]
// utility functions
fn encode_metadata<'a, 'tcx>(&self,
- tcx: TyCtxt<'a, 'tcx, 'tcx>,
- link_meta: &LinkMeta)
+ tcx: TyCtxt<'a, 'tcx, 'tcx>)
-> EncodedMetadata;
fn metadata_encoding_version(&self) -> &[u8];
}
use ty::TyCtxt;
use syntax::symbol::Symbol;
use syntax::ast::{Attribute, MetaItem, MetaItemKind};
-use syntax_pos::{Span, DUMMY_SP};
+use syntax_pos::Span;
use hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_data_structures::fx::{FxHashSet, FxHashMap};
use errors::DiagnosticId;
pub fn collect<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) -> LibFeatures {
let mut collector = LibFeatureCollector::new(tcx);
- for &cnum in tcx.crates().iter() {
- for &(feature, since) in tcx.defined_lib_features(cnum).iter() {
- collector.collect_feature(feature, since, DUMMY_SP);
- }
- }
intravisit::walk_crate(&mut collector, tcx.hir.krate());
collector.lib_features
}
use hir::def::Def;
use hir::def_id::{DefId, CrateNum};
use rustc_data_structures::sync::Lrc;
-use ty::{self, TyCtxt, GenericParamDefKind};
+use ty::{self, TyCtxt};
use ty::query::Providers;
use middle::privacy;
use session::config;
use hir::itemlikevisit::ItemLikeVisitor;
use hir::intravisit;
-// Returns true if the given set of generics implies that the item it's
-// associated with must be inlined.
-fn generics_require_inlining(generics: &ty::Generics) -> bool {
- for param in &generics.params {
- match param.kind {
- GenericParamDefKind::Lifetime { .. } => {}
- GenericParamDefKind::Type { .. } => return true,
- }
- }
- false
-}
-
// Returns true if the given item must be inlined because it may be
// monomorphized or it was marked with `#[inline]`. This will only return
// true for functions.
hir::ItemKind::Impl(..) |
hir::ItemKind::Fn(..) => {
let generics = tcx.generics_of(tcx.hir.local_def_id(item.id));
- generics_require_inlining(generics)
+ generics.requires_monomorphization(tcx)
}
_ => false,
}
impl_src: DefId) -> bool {
let codegen_fn_attrs = tcx.codegen_fn_attrs(impl_item.hir_id.owner_def_id());
let generics = tcx.generics_of(tcx.hir.local_def_id(impl_item.id));
- if codegen_fn_attrs.requests_inline() || generics_require_inlining(generics) {
+ if codegen_fn_attrs.requests_inline() || generics.requires_monomorphization(tcx) {
return true
}
if let Some(impl_node_id) = tcx.hir.as_local_node_id(impl_src) {
hir::ImplItemKind::Method(..) => {
let attrs = self.tcx.codegen_fn_attrs(def_id);
let generics = self.tcx.generics_of(def_id);
- if generics_require_inlining(&generics) ||
- attrs.requests_inline() {
+ if generics.requires_monomorphization(self.tcx) || attrs.requests_inline() {
true
} else {
let impl_did = self.tcx
match self.tcx.hir.expect_item(impl_node_id).node {
hir::ItemKind::Impl(..) => {
let generics = self.tcx.generics_of(impl_did);
- generics_require_inlining(&generics)
+ generics.requires_monomorphization(self.tcx)
}
_ => false
}
remaining_lib_features.remove(&Symbol::intern("libc"));
remaining_lib_features.remove(&Symbol::intern("test"));
- for (feature, stable) in tcx.lib_features().to_vec() {
- if let Some(since) = stable {
- if let Some(span) = remaining_lib_features.get(&feature) {
- // Warn if the user has enabled an already-stable lib feature.
- unnecessary_stable_feature_lint(tcx, *span, feature, since);
+ let check_features =
+ |remaining_lib_features: &mut FxHashMap<_, _>, defined_features: &Vec<_>| {
+ for &(feature, since) in defined_features {
+ if let Some(since) = since {
+ if let Some(span) = remaining_lib_features.get(&feature) {
+ // Warn if the user has enabled an already-stable lib feature.
+ unnecessary_stable_feature_lint(tcx, *span, feature, since);
+ }
+ }
+ remaining_lib_features.remove(&feature);
+ if remaining_lib_features.is_empty() {
+ break;
+ }
+ }
+ };
+
+ // We always collect the lib features declared in the current crate, even if there are
+ // no unknown features, because the collection also does feature attribute validation.
+ let local_defined_features = tcx.lib_features().to_vec();
+ if !remaining_lib_features.is_empty() {
+ check_features(&mut remaining_lib_features, &local_defined_features);
+
+ for &cnum in &*tcx.crates() {
+ if remaining_lib_features.is_empty() {
+ break;
}
+ check_features(&mut remaining_lib_features, &tcx.defined_lib_features(cnum));
}
- remaining_lib_features.remove(&feature);
}
for (feature, span) in remaining_lib_features {
FrameInfo, ConstEvalResult,
};
-pub use self::value::{Scalar, Value, ConstValue, ScalarMaybeUndef};
+pub use self::value::{Scalar, ConstValue, ScalarMaybeUndef};
use std::fmt;
use mir;
Pointer { alloc_id, offset }
}
- pub(crate) fn wrapping_signed_offset<C: HasDataLayout>(self, i: i64, cx: C) -> Self {
+ pub fn wrapping_signed_offset<C: HasDataLayout>(self, i: i64, cx: C) -> Self {
Pointer::new(
self.alloc_id,
Size::from_bytes(cx.data_layout().wrapping_signed_offset(self.offset.bytes(), i)),
(Pointer::new(self.alloc_id, Size::from_bytes(res)), over)
}
- pub(crate) fn signed_offset<C: HasDataLayout>(self, i: i64, cx: C) -> EvalResult<'tcx, Self> {
+ pub fn signed_offset<C: HasDataLayout>(self, i: i64, cx: C) -> EvalResult<'tcx, Self> {
Ok(Pointer::new(
self.alloc_id,
Size::from_bytes(cx.data_layout().signed_offset(self.offset.bytes(), i)?),
}
}
-pub fn write_target_int(
- endianness: layout::Endian,
- mut target: &mut [u8],
- data: i128,
-) -> Result<(), io::Error> {
- let len = target.len();
- match endianness {
- layout::Endian::Little => target.write_int128::<LittleEndian>(data, len),
- layout::Endian::Big => target.write_int128::<BigEndian>(data, len),
- }
-}
-
pub fn read_target_uint(endianness: layout::Endian, mut source: &[u8]) -> Result<u128, io::Error> {
match endianness {
layout::Endian::Little => source.read_uint128::<LittleEndian>(source.len()),
}
}
+////////////////////////////////////////////////////////////////////////////////
+// Methods to facilitate working with signed integers stored in a u128
+////////////////////////////////////////////////////////////////////////////////
+
+pub fn sign_extend(value: u128, size: Size) -> u128 {
+ let size = size.bits();
+ // sign extend
+ let shift = 128 - size;
+ // shift the unsigned value to the left
+ // and back to the right as signed (essentially fills with FF on the left)
+ (((value << shift) as i128) >> shift) as u128
+}
+
+pub fn truncate(value: u128, size: Size) -> u128 {
+ let size = size.bits();
+ let shift = 128 - size;
+ // truncate (shift left to drop out leftover values, shift right to fill with zeroes)
+ (value << shift) >> shift
+}
+
////////////////////////////////////////////////////////////////////////////////
// Undefined byte tracking
////////////////////////////////////////////////////////////////////////////////
#![allow(unknown_lints)]
-use ty::layout::{Align, HasDataLayout, Size};
-use ty;
+use ty::layout::{HasDataLayout, Size};
use ty::subst::Substs;
use hir::def_id::DefId;
use super::{EvalResult, Pointer, PointerArithmetic, Allocation};
/// Represents a constant value in Rust. Scalar and ScalarPair are optimizations which
-/// matches Value's optimizations for easy conversions between these two types
+/// match the LocalValue optimizations for easy conversions between Value and ConstValue.
#[derive(Copy, Clone, Debug, Eq, PartialEq, PartialOrd, Ord, RustcEncodable, RustcDecodable, Hash)]
pub enum ConstValue<'tcx> {
/// Never returned from the `const_eval` query, but the HIR contains these frequently in order
/// evaluation
Unevaluated(DefId, &'tcx Substs<'tcx>),
/// Used only for types with layout::abi::Scalar ABI and ZSTs
+ ///
+ /// We avoid the enum `Value` here, to encode that this can never be `Undef`.
Scalar(Scalar),
/// Used only for types with layout::abi::ScalarPair
///
}
impl<'tcx> ConstValue<'tcx> {
- #[inline]
- pub fn from_byval_value(val: Value) -> EvalResult<'static, Self> {
- Ok(match val {
- Value::ByRef(..) => bug!(),
- Value::ScalarPair(a, b) => ConstValue::ScalarPair(a.unwrap_or_err()?, b),
- Value::Scalar(val) => ConstValue::Scalar(val.unwrap_or_err()?),
- })
- }
-
- #[inline]
- pub fn to_byval_value(&self) -> Option<Value> {
- match *self {
- ConstValue::Unevaluated(..) |
- ConstValue::ByRef(..) => None,
- ConstValue::ScalarPair(a, b) => Some(Value::ScalarPair(a.into(), b)),
- ConstValue::Scalar(val) => Some(Value::Scalar(val.into())),
- }
- }
-
#[inline]
pub fn try_to_scalar(&self) -> Option<Scalar> {
match *self {
}
#[inline]
- pub fn to_bits(&self, size: Size) -> Option<u128> {
+ pub fn try_to_bits(&self, size: Size) -> Option<u128> {
self.try_to_scalar()?.to_bits(size).ok()
}
#[inline]
- pub fn to_ptr(&self) -> Option<Pointer> {
+ pub fn try_to_ptr(&self) -> Option<Pointer> {
self.try_to_scalar()?.to_ptr().ok()
}
-}
-/// A `Value` represents a single self-contained Rust value.
-///
-/// A `Value` can either refer to a block of memory inside an allocation (`ByRef`) or to a primitve
-/// value held directly, outside of any allocation (`Scalar`). For `ByRef`-values, we remember
-/// whether the pointer is supposed to be aligned or not (also see Place).
-///
-/// For optimization of a few very common cases, there is also a representation for a pair of
-/// primitive values (`ScalarPair`). It allows Miri to avoid making allocations for checked binary
-/// operations and fat pointers. This idea was taken from rustc's codegen.
-#[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, RustcEncodable, RustcDecodable, Hash)]
-pub enum Value {
- ByRef(Scalar, Align),
- Scalar(ScalarMaybeUndef),
- ScalarPair(ScalarMaybeUndef, ScalarMaybeUndef),
-}
-
-impl<'tcx> ty::TypeFoldable<'tcx> for Value {
- fn super_fold_with<'gcx: 'tcx, F: ty::fold::TypeFolder<'gcx, 'tcx>>(&self, _: &mut F) -> Self {
- *self
+ pub fn new_slice(
+ val: Scalar,
+ len: u64,
+ cx: impl HasDataLayout
+ ) -> Self {
+ ConstValue::ScalarPair(val, Scalar::Bits {
+ bits: len as u128,
+ size: cx.data_layout().pointer_size.bytes() as u8,
+ }.into())
}
- fn super_visit_with<V: ty::fold::TypeVisitor<'tcx>>(&self, _: &mut V) -> bool {
- false
+
+ pub fn new_dyn_trait(val: Scalar, vtable: Pointer) -> Self {
+ ConstValue::ScalarPair(val, Scalar::Ptr(vtable).into())
}
}
impl<'tcx> Scalar {
- pub fn ptr_null<C: HasDataLayout>(cx: C) -> Self {
+ pub fn ptr_null(cx: impl HasDataLayout) -> Self {
Scalar::Bits {
bits: 0,
size: cx.data_layout().pointer_size.bytes() as u8,
}
}
- pub fn to_value_with_len<C: HasDataLayout>(self, len: u64, cx: C) -> Value {
- ScalarMaybeUndef::Scalar(self).to_value_with_len(len, cx)
- }
-
- pub fn to_value_with_vtable(self, vtable: Pointer) -> Value {
- ScalarMaybeUndef::Scalar(self).to_value_with_vtable(vtable)
+ pub fn zst() -> Self {
+ Scalar::Bits { bits: 0, size: 0 }
}
- pub fn ptr_signed_offset<C: HasDataLayout>(self, i: i64, cx: C) -> EvalResult<'tcx, Self> {
+ pub fn ptr_signed_offset(self, i: i64, cx: impl HasDataLayout) -> EvalResult<'tcx, Self> {
let layout = cx.data_layout();
match self {
Scalar::Bits { bits, size } => {
}
}
- pub fn ptr_offset<C: HasDataLayout>(self, i: Size, cx: C) -> EvalResult<'tcx, Self> {
+ pub fn ptr_offset(self, i: Size, cx: impl HasDataLayout) -> EvalResult<'tcx, Self> {
let layout = cx.data_layout();
match self {
Scalar::Bits { bits, size } => {
}
}
- pub fn ptr_wrapping_signed_offset<C: HasDataLayout>(self, i: i64, cx: C) -> Self {
+ pub fn ptr_wrapping_signed_offset(self, i: i64, cx: impl HasDataLayout) -> Self {
let layout = cx.data_layout();
match self {
Scalar::Bits { bits, size } => {
}
}
- pub fn is_null_ptr<C: HasDataLayout>(self, cx: C) -> bool {
+ pub fn is_null_ptr(self, cx: impl HasDataLayout) -> bool {
match self {
Scalar::Bits { bits, size } => {
assert_eq!(size as u64, cx.data_layout().pointer_size.bytes());
}
}
- pub fn to_value(self) -> Value {
- Value::Scalar(ScalarMaybeUndef::Scalar(self))
+ pub fn from_bool(b: bool) -> Self {
+ Scalar::Bits { bits: b as u128, size: 1 }
+ }
+
+ pub fn from_char(c: char) -> Self {
+ Scalar::Bits { bits: c as u128, size: 4 }
+ }
+
+ pub fn to_bits(self, target_size: Size) -> EvalResult<'tcx, u128> {
+ match self {
+ Scalar::Bits { bits, size } => {
+ assert_eq!(target_size.bytes(), size as u64);
+ assert_ne!(size, 0, "to_bits cannot be used with zsts");
+ Ok(bits)
+ }
+ Scalar::Ptr(_) => err!(ReadPointerAsBytes),
+ }
+ }
+
+ pub fn to_ptr(self) -> EvalResult<'tcx, Pointer> {
+ match self {
+ Scalar::Bits { bits: 0, .. } => err!(InvalidNullPointerUsage),
+ Scalar::Bits { .. } => err!(ReadBytesAsPointer),
+ Scalar::Ptr(p) => Ok(p),
+ }
+ }
+
+ pub fn is_bits(self) -> bool {
+ match self {
+ Scalar::Bits { .. } => true,
+ _ => false,
+ }
+ }
+
+ pub fn is_ptr(self) -> bool {
+ match self {
+ Scalar::Ptr(_) => true,
+ _ => false,
+ }
+ }
+
+ pub fn to_bool(self) -> EvalResult<'tcx, bool> {
+ match self {
+ Scalar::Bits { bits: 0, size: 1 } => Ok(false),
+ Scalar::Bits { bits: 1, size: 1 } => Ok(true),
+ _ => err!(InvalidBool),
+ }
}
}
impl From<Pointer> for Scalar {
+ #[inline(always)]
fn from(ptr: Pointer) -> Self {
Scalar::Ptr(ptr)
}
/// The raw bytes of a simple value.
Bits {
/// The first `size` bytes are the value.
- /// Do not try to read less or more bytes that that
+ /// Do not try to read fewer or more bytes than that. The remaining bytes must be 0.
size: u8,
bits: u128,
},
}
impl From<Scalar> for ScalarMaybeUndef {
+ #[inline(always)]
fn from(s: Scalar) -> Self {
ScalarMaybeUndef::Scalar(s)
}
}
-impl ScalarMaybeUndef {
- pub fn unwrap_or_err(self) -> EvalResult<'static, Scalar> {
+impl<'tcx> ScalarMaybeUndef {
+ pub fn not_undef(self) -> EvalResult<'static, Scalar> {
match self {
ScalarMaybeUndef::Scalar(scalar) => Ok(scalar),
ScalarMaybeUndef::Undef => err!(ReadUndefBytes),
}
}
- pub fn to_value_with_len<C: HasDataLayout>(self, len: u64, cx: C) -> Value {
- Value::ScalarPair(self, Scalar::Bits {
- bits: len as u128,
- size: cx.data_layout().pointer_size.bytes() as u8,
- }.into())
- }
-
- pub fn to_value_with_vtable(self, vtable: Pointer) -> Value {
- Value::ScalarPair(self, Scalar::Ptr(vtable).into())
- }
-
- pub fn ptr_offset<C: HasDataLayout>(self, i: Size, cx: C) -> EvalResult<'tcx, Self> {
- match self {
- ScalarMaybeUndef::Scalar(scalar) => {
- scalar.ptr_offset(i, cx).map(ScalarMaybeUndef::Scalar)
- },
- ScalarMaybeUndef::Undef => Ok(ScalarMaybeUndef::Undef)
- }
- }
-}
-
-impl<'tcx> Scalar {
- pub fn from_bool(b: bool) -> Self {
- Scalar::Bits { bits: b as u128, size: 1 }
- }
-
- pub fn from_char(c: char) -> Self {
- Scalar::Bits { bits: c as u128, size: 4 }
- }
-
- pub fn to_bits(self, target_size: Size) -> EvalResult<'tcx, u128> {
- match self {
- Scalar::Bits { bits, size } => {
- assert_eq!(target_size.bytes(), size as u64);
- assert_ne!(size, 0, "to_bits cannot be used with zsts");
- Ok(bits)
- }
- Scalar::Ptr(_) => err!(ReadPointerAsBytes),
- }
- }
-
pub fn to_ptr(self) -> EvalResult<'tcx, Pointer> {
- match self {
- Scalar::Bits {..} => err!(ReadBytesAsPointer),
- Scalar::Ptr(p) => Ok(p),
- }
+ self.not_undef()?.to_ptr()
}
- pub fn is_bits(self) -> bool {
- match self {
- Scalar::Bits { .. } => true,
- _ => false,
- }
- }
-
- pub fn is_ptr(self) -> bool {
- match self {
- Scalar::Ptr(_) => true,
- _ => false,
- }
+ pub fn to_bits(self, target_size: Size) -> EvalResult<'tcx, u128> {
+ self.not_undef()?.to_bits(target_size)
}
pub fn to_bool(self) -> EvalResult<'tcx, bool> {
- match self {
- Scalar::Bits { bits: 0, size: 1 } => Ok(false),
- Scalar::Bits { bits: 1, size: 1 } => Ok(true),
- _ => err!(InvalidBool),
- }
+ self.not_undef()?.to_bool()
}
}
use hir::def_id::DefId;
use hir::{self, HirId, InlineAsm};
use middle::region;
-use mir::interpret::{EvalErrorKind, Scalar, Value, ScalarMaybeUndef};
+use mir::interpret::{EvalErrorKind, Scalar, ScalarMaybeUndef, ConstValue};
use mir::visit::MirVisitable;
use rustc_apfloat::ieee::{Double, Single};
use rustc_apfloat::Float;
/// Drop(P, goto BB1, unwind BB2)
/// }
/// BB1 {
- /// // P is now unitialized
+ /// // P is now uninitialized
/// P <- V
/// }
/// BB2 {
- /// // P is now unitialized -- its dtor panicked
+ /// // P is now uninitialized -- its dtor panicked
/// P <- V
/// }
/// ```
.iter()
.map(|&u| {
let mut s = String::new();
- print_miri_value(
- Scalar::Bits {
- bits: u,
- size: size.bytes() as u8,
- }.to_value(),
- switch_ty,
- &mut s,
- ).unwrap();
+ let c = ty::Const {
+ val: ConstValue::Scalar(Scalar::Bits {
+ bits: u,
+ size: size.bytes() as u8,
+ }.into()),
+ ty: switch_ty,
+ };
+ fmt_const_val(&mut s, &c).unwrap();
s.into()
})
.chain(iter::once(String::from("otherwise").into()))
}
/// Write a `ConstValue` in a way closer to the original source code than the `Debug` output.
-pub fn fmt_const_val<W: Write>(fmt: &mut W, const_val: &ty::Const) -> fmt::Result {
- if let Some(value) = const_val.to_byval_value() {
- print_miri_value(value, const_val.ty, fmt)
- } else {
- write!(fmt, "{:?}:{}", const_val.val, const_val.ty)
- }
-}
-
-pub fn print_miri_value<W: Write>(value: Value, ty: Ty, f: &mut W) -> fmt::Result {
+pub fn fmt_const_val(f: &mut impl Write, const_val: &ty::Const) -> fmt::Result {
use ty::TypeVariants::*;
+ let value = const_val.val;
+ let ty = const_val.ty;
// print some primitives
- if let Value::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits { bits, .. })) = value {
+ if let ConstValue::Scalar(Scalar::Bits { bits, .. }) = value {
match ty.sty {
TyBool if bits == 0 => return write!(f, "false"),
TyBool if bits == 1 => return write!(f, "true"),
return write!(f, "{}", item_path_str(did));
}
// print string literals
- if let Value::ScalarPair(ptr, len) = value {
- if let ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) = ptr {
+ if let ConstValue::ScalarPair(ptr, len) = value {
+ if let Scalar::Ptr(ptr) = ptr {
if let ScalarMaybeUndef::Scalar(Scalar::Bits { bits: len, .. }) = len {
if let TyRef(_, &ty::TyS { sty: TyStr, .. }, _) = ty.sty {
return ty::tls::with(|tcx| {
// (A, [C])]
//
// Now that the top of the stack has no successors we can traverse, each item will
- // be popped off during iteration until we get back to `A`. This yeilds [E, D, B].
+ // be popped off during iteration until we get back to `A`. This yields [E, D, B].
//
// When we yield `B` and call `traverse_successor`, we push `C` to the stack, but
// since we've already visited `E`, that child isn't added to the stack. The last
// The core logic responsible for computing the bounds for our synthesized impl.
//
// To calculate the bounds, we call SelectionContext.select in a loop. Like FulfillmentContext,
- // we recursively select the nested obligations of predicates we encounter. However, whenver we
+ // we recursively select the nested obligations of predicates we encounter. However, whenever we
// encounter an UnimplementedError involving a type parameter, we add it to our ParamEnv. Since
// our goal is to determine when a particular type implements an auto trait, Unimplemented
// errors tell us what conditions need to be met.
//
- // This method ends up working somewhat similary to FulfillmentContext, but with a few key
+ // This method ends up working somewhat similarly to FulfillmentContext, but with a few key
// differences. FulfillmentContext works under the assumption that it's dealing with concrete
// user code. Accordingly, it considers all possible ways that a Predicate could be met - which
// isn't always what we want for a synthesized impl. For example, given the predicate 'T:
// we'll pick up any nested bounds, without ever inferring that 'T: IntoIterator' needs to
// hold.
//
- // One additonal consideration is supertrait bounds. Normally, a ParamEnv is only ever
+ // One additional consideration is supertrait bounds. Normally, a ParamEnv is only ever
// constructed once for a given type. As part of the construction process, the ParamEnv will
// have any supertrait bounds normalized - e.g. if we have a type 'struct Foo<T: Copy>', the
// ParamEnv will contain 'T: Copy' and 'T: Clone', since 'Copy: Clone'. When we construct our
- // own ParamEnv, we need to do this outselves, through traits::elaborate_predicates, or else
+ // own ParamEnv, we need to do this ourselves, through traits::elaborate_predicates, or else
// SelectionContext will choke on the missing predicates. However, this should never show up in
// the final synthesized generics: we don't want our generated docs page to contain something
// like 'T: Copy + Clone', as that's redundant. Therefore, we keep track of a separate
}
// If this error is due to `!: Trait` not implemented but `(): Trait` is
- // implemented, and fallback has occured, then it could be due to a
+ // implemented, and fallback has occurred, then it could be due to a
// variable that used to fallback to `()` now falling back to `!`. Issue a
// note informing about the change in behaviour.
if trait_predicate.skip_binder().self_ty().is_never()
// Errors and ambiguity in dropck occur in two cases:
// - unresolved inference variables at the end of typeck
// - non well-formed types where projections cannot be resolved
- // Either of these should hvae created an error before.
+ // Either of these should have created an error before.
tcx.sess
.delay_span_bug(span, "dtorck encountered internal error");
return InferOk {
use ich::{StableHashingContext, NodeIdHashingMode};
use infer::canonical::{CanonicalVarInfo, CanonicalVarInfos};
use infer::outlives::free_region_map::FreeRegionMap;
-use middle::cstore::{CrateStoreDyn, LinkMeta};
+use middle::cstore::CrateStoreDyn;
use middle::cstore::EncodedMetadata;
use middle::lang_items;
use middle::resolve_lifetime::{self, ObjectLifetimeDefault};
pub(crate) queries: query::Queries<'tcx>,
- // Records the free variables refrenced by every closure
+ // Records the free variables referenced by every closure
// expression. Do not track deps for this, just recompute it from
// scratch every time.
freevars: FxHashMap<DefId, Lrc<Vec<hir::Freevar>>>,
}
impl<'a, 'tcx> TyCtxt<'a, 'tcx, 'tcx> {
- pub fn encode_metadata(self, link_meta: &LinkMeta)
+ pub fn encode_metadata(self)
-> EncodedMetadata
{
- self.cstore.encode_metadata(self, link_meta)
+ self.cstore.encode_metadata(self)
}
}
}
}
+#[derive(Default)]
pub struct GenericParamCount {
pub lifetimes: usize,
pub types: usize,
// We could cache this as a property of `GenericParamCount`, but
// the aim is to refactor this away entirely eventually and the
// presence of this method will be a constant reminder.
- let mut own_counts = GenericParamCount {
- lifetimes: 0,
- types: 0,
- };
+ let mut own_counts: GenericParamCount = Default::default();
for param in &self.params {
match param.kind {
GenericParamDefKind::Lifetime => own_counts.lifetimes += 1,
- GenericParamDefKind::Type {..} => own_counts.types += 1,
+ GenericParamDefKind::Type { .. } => own_counts.types += 1,
};
}
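With the new `#[derive(Default)]` on `GenericParamCount`, the manual `lifetimes: 0, types: 0` literal collapses to `Default::default()`, and adding a field later cannot leave a call site missing an initializer. A small sketch (field names taken from the hunk; the rest is illustrative):

```rust
// #[derive(Default)] on a struct of numeric counters zero-initializes
// every field, replacing the manual `lifetimes: 0, types: 0` literal.
#[derive(Default, Debug, PartialEq)]
struct GenericParamCount {
    lifetimes: usize,
    types: usize,
}

fn main() {
    let mut own_counts: GenericParamCount = Default::default();
    assert_eq!(own_counts, GenericParamCount { lifetimes: 0, types: 0 });
    // Counting proceeds exactly as in the hunk's loop.
    own_counts.types += 1;
    assert_eq!(own_counts.types, 1);
}
```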
pub fn requires_monomorphization(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> bool {
for param in &self.params {
match param.kind {
- GenericParamDefKind::Type {..} => return true,
+ GenericParamDefKind::Type { .. } => return true,
GenericParamDefKind::Lifetime => {}
}
}
/// Creates a universe index from the given integer. Not to be
/// used lightly lest you pick a bad value. But sometimes we
- /// convert universe indicies into integers and back for various
+ /// convert universe indices into integers and back for various
/// reasons.
pub fn from_u32(index: u32) -> Self {
UniverseIndex(index)
}
}
-impl<'tcx> QueryDescription<'tcx> for queries::const_value_to_allocation<'tcx> {
+impl<'tcx> QueryDescription<'tcx> for queries::const_to_allocation<'tcx> {
fn describe(_tcx: TyCtxt, val: &'tcx ty::Const<'tcx>) -> String {
- format!("converting value `{:?}` to an allocation", val)
+ format!("converting constant `{:?}` to an allocation", val)
}
}
}
}
- // Visit the explict waiters which use condvars and are resumable
+ // Visit the explicit waiters which use condvars and are resumable
for (i, waiter) in query.latch.info.lock().waiters.iter().enumerate() {
if let Some(ref waiter_query) = waiter.query {
if visit(waiter.span, waiter_query.clone()).is_some() {
[] fn const_eval: const_eval_dep_node(ty::ParamEnvAnd<'tcx, GlobalId<'tcx>>)
-> ConstEvalResult<'tcx>,
- /// Converts a constant value to an constant allocation
- [] fn const_value_to_allocation: const_value_to_allocation(
+ /// Converts a constant value to a constant allocation
+ [] fn const_to_allocation: const_to_allocation(
&'tcx ty::Const<'tcx>
) -> &'tcx Allocation,
},
DepConstructor::EraseRegionsTy { ty }
}
-fn const_value_to_allocation<'tcx>(
+fn const_to_allocation<'tcx>(
val: &'tcx ty::Const<'tcx>,
) -> DepConstructor<'tcx> {
- DepConstructor::ConstValueToAllocation { val }
+ DepConstructor::ConstToAllocation { val }
}
fn type_param_predicates<'tcx>((item_id, param_id): (DefId, DefId)) -> DepConstructor<'tcx> {
DepKind::FulfillObligation |
DepKind::VtableMethods |
DepKind::EraseRegionsTy |
- DepKind::ConstValueToAllocation |
+ DepKind::ConstToAllocation |
DepKind::NormalizeProjectionTy |
DepKind::NormalizeTyAfterErasingRegions |
DepKind::ImpliedOutlivesBounds |
use ty::{self, AdtDef, TypeFlags, Ty, TyCtxt, TypeFoldable};
use ty::{Slice, TyS, ParamEnvAnd, ParamEnv};
use util::captures::Captures;
-use mir::interpret::{Scalar, Pointer, Value};
+use mir::interpret::{Scalar, Pointer};
use std::iter;
use std::cmp::Ordering;
}
let ty = tcx.lift_to_global(&ty).unwrap();
let size = tcx.layout_of(ty).ok()?.size;
- self.val.to_bits(size)
+ self.val.try_to_bits(size)
}
#[inline]
pub fn to_ptr(&self) -> Option<Pointer> {
- self.val.to_ptr()
- }
-
- #[inline]
- pub fn to_byval_value(&self) -> Option<Value> {
- self.val.to_byval_value()
+ self.val.try_to_ptr()
}
#[inline]
assert_eq!(self.ty, ty.value);
let ty = tcx.lift_to_global(&ty).unwrap();
let size = tcx.layout_of(ty).ok()?.size;
- self.val.to_bits(size)
+ self.val.try_to_bits(size)
}
#[inline]
mk_kind: &mut F)
where F: FnMut(&ty::GenericParamDef, &[Kind<'tcx>]) -> Kind<'tcx>
{
-
if let Some(def_id) = defs.parent {
let parent_defs = tcx.generics_of(def_id);
Substs::fill_item(substs, tcx, parent_defs, mk_kind);
let verbose = self.is_verbose;
let mut num_supplied_defaults = 0;
let mut has_self = false;
- let mut own_counts = GenericParamCount {
- lifetimes: 0,
- types: 0,
- };
+ let mut own_counts: GenericParamCount = Default::default();
let mut is_value_path = false;
let fn_trait_kind = ty::tls::with(|tcx| {
// Unfortunately, some kinds of items (e.g., closures) don't have
use syntax::attr;
pub use rustc_codegen_utils::link::{find_crate_name, filename_for_input, default_output_for_target,
- invalid_output_for_target, build_link_meta, out_filename,
- check_file_is_writeable};
+ invalid_output_for_target, out_filename, check_file_is_writeable};
// The third parameter is for env vars, used on windows to set up the
// path for MSVC to find its DLLs, and gcc to find its bundled
use consts;
use rustc_incremental::{copy_cgu_workproducts_to_incr_comp_cache_dir, in_incr_comp_dir};
use rustc::dep_graph::{WorkProduct, WorkProductId, WorkProductFileKind};
-use rustc::middle::cstore::{LinkMeta, EncodedMetadata};
+use rustc::middle::cstore::EncodedMetadata;
use rustc::session::config::{self, OutputFilenames, OutputType, Passes, Sanitizer, Lto};
use rustc::session::Session;
use rustc::util::nodemap::FxHashMap;
use rustc::util::common::{time_ext, time_depth, set_time_depth, print_time_passes_entry};
use rustc_fs_util::{path2cstr, link_or_copy};
use rustc_data_structures::small_c_str::SmallCStr;
+use rustc_data_structures::svh::Svh;
use errors::{self, Handler, Level, DiagnosticBuilder, FatalError, DiagnosticId};
use errors::emitter::{Emitter};
use syntax::attr;
/// Additional resources used by optimize_and_codegen (not module specific)
#[derive(Clone)]
pub struct CodegenContext {
- // Resouces needed when running LTO
+ // Resources needed when running LTO
pub time_passes: bool,
pub lto: Lto,
pub no_landing_pads: bool,
-C passes=name-anon-globals to the compiler command line.");
} else {
bug!("We are using thin LTO buffers without running the NameAnonGlobals pass. \
- This will likely cause errors in LLVM and shoud never happen.");
+ This will likely cause errors in LLVM and should never happen.");
}
}
}
pub fn start_async_codegen(tcx: TyCtxt,
time_graph: Option<TimeGraph>,
- link: LinkMeta,
metadata: EncodedMetadata,
coordinator_receive: Receiver<Box<dyn Any + Send>>,
total_cgus: usize)
-> OngoingCodegen {
let sess = tcx.sess;
let crate_name = tcx.crate_name(LOCAL_CRATE);
+ let crate_hash = tcx.crate_hash(LOCAL_CRATE);
let no_builtins = attr::contains_name(&tcx.hir.krate().attrs, "no_builtins");
let subsystem = attr::first_attr_value_str_by_name(&tcx.hir.krate().attrs,
"windows_subsystem");
OngoingCodegen {
crate_name,
- link,
+ crate_hash,
metadata,
windows_subsystem,
linker_info,
pub struct OngoingCodegen {
crate_name: Symbol,
- link: LinkMeta,
+ crate_hash: Svh,
metadata: EncodedMetadata,
windows_subsystem: Option<String>,
linker_info: LinkerInfo,
(CodegenResults {
crate_name: self.crate_name,
- link: self.link,
+ crate_hash: self.crate_hash,
metadata: self.metadata,
windows_subsystem: self.windows_subsystem,
linker_info: self.linker_info,
use super::ModuleKind;
use abi;
-use back::link;
use back::write::{self, OngoingCodegen};
use llvm::{self, TypeKind, get_param};
use metadata;
use rustc::ty::layout::{self, Align, TyLayout, LayoutOf};
use rustc::ty::query::Providers;
use rustc::dep_graph::{DepNode, DepConstructor};
-use rustc::middle::cstore::{self, LinkMeta, LinkagePreference};
+use rustc::middle::cstore::{self, LinkagePreference};
use rustc::middle::exported_symbols;
use rustc::util::common::{time, print_time_passes_entry};
use rustc::util::profiling::ProfileCategory;
}
fn write_metadata<'a, 'gcx>(tcx: TyCtxt<'a, 'gcx, 'gcx>,
- llvm_module: &ModuleLlvm,
- link_meta: &LinkMeta)
+ llvm_module: &ModuleLlvm)
-> EncodedMetadata {
use std::io::Write;
use flate2::Compression;
return EncodedMetadata::new();
}
- let metadata = tcx.encode_metadata(link_meta);
+ let metadata = tcx.encode_metadata();
if kind == MetadataKind::Uncompressed {
return metadata;
}
tcx.sess.fatal("this compiler's LLVM does not support PGO");
}
- let crate_hash = tcx.crate_hash(LOCAL_CRATE);
- let link_meta = link::build_link_meta(crate_hash);
let cgu_name_builder = &mut CodegenUnitNameBuilder::new(tcx);
// Codegen the metadata.
.to_string();
let metadata_llvm_module = ModuleLlvm::new(tcx.sess, &metadata_cgu_name);
let metadata = time(tcx.sess, "write metadata", || {
- write_metadata(tcx, &metadata_llvm_module, &link_meta)
+ write_metadata(tcx, &metadata_llvm_module)
});
tcx.sess.profiler(|p| p.end_activity(ProfileCategory::Codegen));
let ongoing_codegen = write::start_async_codegen(
tcx,
time_graph.clone(),
- link_meta,
metadata,
rx,
1);
let ongoing_codegen = write::start_async_codegen(
tcx,
time_graph.clone(),
- link_meta,
metadata,
rx,
codegen_units.len());
// If this is not a univariant enum, there is also the discriminant field.
let (discr_offset, discr_arg) = match discriminant_info {
RegularDiscriminant(_) => {
+            // We have the layout of an enum variant; we need the layout of the outer enum
let enum_layout = cx.layout_of(layout.ty);
(Some(enum_layout.fields.offset(0)),
Some(("RUST$ENUM$DISR".to_string(), enum_layout.field(cx, 0).ty)))
use rustc::util::profiling::ProfileCategory;
use rustc_mir::monomorphize;
use rustc_codegen_utils::codegen_backend::CodegenBackend;
+use rustc_data_structures::svh::Svh;
mod diagnostics;
// Now that we won't touch anything in the incremental compilation directory
// any more, we can finalize it (which involves renaming it)
- rustc_incremental::finalize_session_directory(sess, ongoing_codegen.link.crate_hash);
+ rustc_incremental::finalize_session_directory(sess, ongoing_codegen.crate_hash);
Ok(())
}
modules: Vec<CompiledModule>,
allocator_module: Option<CompiledModule>,
metadata_module: CompiledModule,
- link: rustc::middle::cstore::LinkMeta,
+ crate_hash: Svh,
metadata: rustc::middle::cstore::EncodedMetadata,
windows_subsystem: Option<String>,
linker_info: back::linker::LinkerInfo,
llargs.push(b);
return;
}
- _ => bug!("codegen_argument: {:?} invalid for pair arugment", op)
+ _ => bug!("codegen_argument: {:?} invalid for pair argument", op)
}
} else if arg.is_unsized_indirect() {
match op.val {
// except according to those terms.
use llvm;
-use rustc::mir::interpret::ConstEvalErr;
-use rustc_mir::interpret::{read_target_uint, const_val_field};
+use rustc::mir::interpret::{ConstEvalErr, read_target_uint};
+use rustc_mir::interpret::{const_field};
use rustc::hir::def_id::DefId;
use rustc::mir;
use rustc_data_structures::indexed_vec::Idx;
ref other => bug!("invalid simd shuffle type: {}", other),
};
let values: Result<Vec<_>, Lrc<_>> = (0..fields).map(|field| {
- let field = const_val_field(
+ let field = const_field(
bx.tcx(),
ty::ParamEnv::reveal_all(),
self.instance,
use rustc_target::spec::Target;
use rustc_data_structures::fx::FxHashMap;
use rustc_mir::monomorphize::collector;
-use link::{build_link_meta, out_filename};
+use link::out_filename;
pub use rustc_data_structures::sync::MetadataRef;
}
tcx.sess.abort_if_errors();
- let link_meta = build_link_meta(tcx.crate_hash(LOCAL_CRATE));
- let metadata = tcx.encode_metadata(&link_meta);
+ let metadata = tcx.encode_metadata();
box OngoingCodegen {
metadata: metadata,
use rustc::session::config::{self, OutputFilenames, Input, OutputType};
use rustc::session::Session;
-use rustc::middle::cstore::LinkMeta;
-use rustc_data_structures::svh::Svh;
use std::path::{Path, PathBuf};
use syntax::{ast, attr};
use syntax_pos::Span;
}
}
-pub fn build_link_meta(crate_hash: Svh) -> LinkMeta {
- let r = LinkMeta {
- crate_hash,
- };
- info!("{:?}", r);
- return r;
-}
-
pub fn find_crate_name(sess: Option<&Session>,
attrs: &[ast::Attribute],
input: &Input) -> String {
// whole Drain iterator (like &mut T).
let range_slice = {
let arr = &mut self.values as &mut [ManuallyDrop<<A as Array>::Element>];
- slice::from_raw_parts_mut(arr.as_mut_ptr().offset(start as isize),
+ slice::from_raw_parts_mut(arr.as_mut_ptr().add(start),
end - start)
};
Drain {
{
let arr =
&mut source_array_vec.values as &mut [ManuallyDrop<<A as Array>::Element>];
- let src = arr.as_ptr().offset(tail as isize);
- let dst = arr.as_mut_ptr().offset(start as isize);
+ let src = arr.as_ptr().add(tail);
+ let dst = arr.as_mut_ptr().add(start);
ptr::copy(src, dst, self.tail_len);
};
source_array_vec.set_len(start + self.tail_len);
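The pointer hunks above swap `offset(n as isize)` for `add(n)`: for in-bounds, non-negative offsets the two are equivalent, and `add` drops the `usize -> isize` cast at every call site. A small illustrative sketch:

```rust
fn main() {
    let mut buf = [10u8, 20, 30, 40];
    let start = 1usize;
    unsafe {
        let p = buf.as_mut_ptr();
        // `add(n)` is equivalent to `offset(n as isize)` for non-negative,
        // in-bounds offsets, without the cast at every call site.
        assert_eq!(*p.add(start), *p.offset(start as isize));
        *p.add(start) = 99;
    }
    assert_eq!(buf, [10, 99, 30, 40]);
}
```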
#![feature(unsize)]
#![feature(specialization)]
#![feature(optin_builtin_traits)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![cfg_attr(not(stage0), feature(nll))]
#![feature(allow_internal_unstable)]
#![feature(vec_resize_with)]
// infallible
// The spot to put the new value
{
- let p = self.as_mut_ptr().offset(index as isize);
+ let p = self.as_mut_ptr().add(index);
// Shift everything over to make space. (Duplicating the
// `index`th element into two consecutive places.)
ptr::copy(p, p.offset(1), len - index);
//!
//! `MTLock` is a mutex which disappears if cfg!(parallel_queries) is false.
//!
-//! `MTRef` is a immutable refernce if cfg!(parallel_queries), and an mutable reference otherwise.
+//! `MTRef` is an immutable reference if cfg!(parallel_queries), and a mutable reference otherwise.
//!
//! `rustc_erase_owner!` erases a OwningRef owner into Erased or Erased + Send + Sync
//! depending on the value of cfg!(parallel_queries).
/// closures may concurrently be computing a value which the inner value should take.
    /// Only one of these closures is used to actually initialize the value.
    /// If some other closure already set the value, we assert that our closure computed
- /// a value equal to the value aready set and then
+ /// a value equal to the value already set and then
/// we return the value our closure computed wrapped in a `Option`.
/// If our closure set the value, `None` is returned.
/// If the value is already initialized, the closure is not called and `None` is returned.
// NB. this has an edge case with non-returning statements,
// like `loop {}` or `panic!()`: control flow never reaches
// the exit node through these, so one can have a function
- // that never actually calls itselfs but is still picked up by
+ // that never actually calls itself but is still picked up by
// this lint:
//
// fn f(cond: bool) {
) {
let mut ecx = ::rustc_mir::interpret::mk_eval_cx(tcx, gid.instance, param_env).unwrap();
let result = (|| {
- let val = ecx.const_to_value(constant.val)?;
use rustc_target::abi::LayoutOf;
+ use rustc_mir::interpret::OpTy;
+
+ let op = ecx.const_value_to_op(constant.val)?;
let layout = ecx.layout_of(constant.ty)?;
- let place = ecx.allocate_place_for_value(val, layout, None)?;
- let ptr = place.to_ptr()?;
- let mut todo = vec![(ptr, layout.ty, String::new())];
+ let place = ecx.allocate_op(OpTy { op, layout })?.into();
+
+ let mut todo = vec![(place, Vec::new())];
let mut seen = FxHashSet();
- seen.insert((ptr, layout.ty));
- while let Some((ptr, ty, path)) = todo.pop() {
- let layout = ecx.layout_of(ty)?;
- ecx.validate_ptr_target(
- ptr,
- layout.align,
- layout,
- path,
+ seen.insert(place);
+ while let Some((place, mut path)) = todo.pop() {
+ ecx.validate_mplace(
+ place,
+ &mut path,
&mut seen,
&mut todo,
)?;
);
// Don't suggest about raw identifiers if the feature isn't active
- if cx.sess.features_untracked().raw_identifiers {
- lint.span_suggestion_with_applicability(
- span,
- "you can use a raw identifier to stay compatible",
- "r#async".to_string(),
- Applicability::MachineApplicable,
- );
- }
+ lint.span_suggestion_with_applicability(
+ span,
+ "you can use a raw identifier to stay compatible",
+ "r#async".to_string(),
+ Applicability::MachineApplicable,
+ );
lint.emit()
}
}
#![cfg_attr(test, feature(test))]
#![feature(box_patterns)]
#![feature(box_syntax)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![cfg_attr(not(stage0), feature(nll))]
#![feature(quote)]
#![feature(rustc_diagnostic_macros)]
// Protect against infinite recursion, for example
// `struct S(*mut S);`.
// FIXME: A recursion limit is necessary as well, for irregular
- // recusive types.
+ // recursive types.
if !cache.insert(ty) {
return FfiSafe;
}
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
if let hir::ItemKind::Enum(ref enum_definition, _) = it.node {
let item_def_id = cx.tcx.hir.local_def_id(it.id);
- let generics = cx.tcx.generics_of(item_def_id);
- for param in &generics.params {
- match param.kind {
- ty::GenericParamDefKind::Lifetime { .. } => {},
- ty::GenericParamDefKind::Type { .. } => return,
- }
- }
- // Sizes only make sense for non-generic types.
let t = cx.tcx.type_of(item_def_id);
let ty = cx.tcx.erase_regions(&t);
match cx.layout_of(ty) {
use rustc::ty::query::QueryConfig;
use rustc::middle::cstore::{CrateStore, DepKind,
- LinkMeta,
EncodedMetadata, NativeLibraryKind};
use rustc::middle::exported_symbols::ExportedSymbol;
use rustc::middle::stability::DeprecationEntry;
}
fn encode_metadata<'a, 'tcx>(&self,
- tcx: TyCtxt<'a, 'tcx, 'tcx>,
- link_meta: &LinkMeta)
+ tcx: TyCtxt<'a, 'tcx, 'tcx>)
-> EncodedMetadata
{
- encoder::encode_metadata(tcx, link_meta)
+ encoder::encode_metadata(tcx)
}
fn metadata_encoding_version(&self) -> &[u8]
use isolated_encoder::IsolatedEncoder;
use schema::*;
-use rustc::middle::cstore::{LinkMeta, LinkagePreference, NativeLibrary,
+use rustc::middle::cstore::{LinkagePreference, NativeLibrary,
EncodedMetadata, ForeignModule};
use rustc::hir::def::CtorKind;
use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefIndex, DefId, LocalDefId, LOCAL_CRATE};
pub struct EncodeContext<'a, 'tcx: 'a> {
opaque: opaque::Encoder,
pub tcx: TyCtxt<'a, 'tcx, 'tcx>,
- link_meta: &'a LinkMeta,
lazy_state: LazyState,
type_shorthands: FxHashMap<Ty<'tcx>, usize>,
let index_bytes = self.position() - i;
let attrs = tcx.hir.krate_attrs();
- let link_meta = self.link_meta;
let is_proc_macro = tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
let has_default_lib_allocator = attr::contains_name(&attrs, "default_lib_allocator");
let has_global_allocator = *tcx.sess.has_global_allocator.get();
name: tcx.crate_name(LOCAL_CRATE),
extra_filename: tcx.sess.opts.cg.extra_filename.clone(),
triple: tcx.sess.opts.target_triple.clone(),
- hash: link_meta.crate_hash,
+ hash: tcx.crate_hash(LOCAL_CRATE),
disambiguator: tcx.sess.local_crate_disambiguator(),
panic_strategy: tcx.sess.panic_strategy(),
edition: hygiene::default_edition(),
hir::ItemKind::Const(..) => self.encode_optimized_mir(def_id),
hir::ItemKind::Fn(_, header, ..) => {
let generics = tcx.generics_of(def_id);
- let has_types = generics.params.iter().any(|param| match param.kind {
- ty::GenericParamDefKind::Type { .. } => true,
- _ => false,
- });
let needs_inline =
- (has_types || tcx.codegen_fn_attrs(def_id).requests_inline()) &&
+ (generics.requires_monomorphization(tcx) ||
+ tcx.codegen_fn_attrs(def_id).requests_inline()) &&
!self.metadata_output_only();
let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir;
if needs_inline
}
fn encode_info_for_generics(&mut self, generics: &hir::Generics) {
- generics.params.iter().for_each(|param| match param.kind {
- hir::GenericParamKind::Lifetime { .. } => {}
- hir::GenericParamKind::Type { ref default, .. } => {
- let def_id = self.tcx.hir.local_def_id(param.id);
- let has_default = Untracked(default.is_some());
- let encode_info = IsolatedEncoder::encode_info_for_ty_param;
- self.record(def_id, encode_info, (def_id, has_default));
+ for param in &generics.params {
+ match param.kind {
+ hir::GenericParamKind::Lifetime { .. } => {}
+ hir::GenericParamKind::Type { ref default, .. } => {
+ let def_id = self.tcx.hir.local_def_id(param.id);
+ let has_default = Untracked(default.is_some());
+ let encode_info = IsolatedEncoder::encode_info_for_ty_param;
+ self.record(def_id, encode_info, (def_id, has_default));
+ }
}
- });
+ }
}
fn encode_info_for_ty(&mut self, ty: &hir::Ty) {
// will allow us to slice the metadata to the precise length that we just
// generated regardless of trailing bytes that end up in it.
-pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
- link_meta: &LinkMeta)
+pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>)
-> EncodedMetadata
{
let mut encoder = opaque::Encoder::new(vec![]);
let mut ecx = EncodeContext {
opaque: encoder,
tcx,
- link_meta,
lazy_state: LazyState::NoNode,
type_shorthands: Default::default(),
predicate_shorthands: Default::default(),
}
}
- // Update kind and, optionally, the name of all native libaries
+ // Update kind and, optionally, the name of all native libraries
// (there may be more than one) with the specified name.
for &(ref name, ref new_name, kind) in &self.tcx.sess.opts.libs {
let mut found = false;
}
}
- if self
+ // Check is_empty() first because it's the common case, and that
+ // way we avoid the clone() call.
+ if !self.access_place_error_reported.is_empty() &&
+ self
.access_place_error_reported
.contains(&(place_span.0.clone(), place_span.1))
{
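The new comment in this hunk describes a common fast-path idiom: guard an expensive key construction (here the `clone()` needed to build the lookup tuple) behind a cheap `is_empty()` check, since the set is usually empty. A simplified sketch with a hypothetical `already_reported` helper (not the borrow checker's real code):

```rust
use std::collections::HashSet;

// When `reported` is empty -- the common case -- the short-circuiting `&&`
// skips building the (String, usize) key, avoiding the allocation/clone.
fn already_reported(reported: &HashSet<(String, usize)>, name: &str, idx: usize) -> bool {
    !reported.is_empty() && reported.contains(&(name.to_string(), idx))
}

fn main() {
    let mut reported = HashSet::new();
    assert!(!already_reported(&reported, "x", 0));
    reported.insert(("x".to_string(), 0));
    assert!(already_reported(&reported, "x", 0));
}
```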
// unique or mutable borrows are invalidated by writes.
// Reservations count as writes since we need to check
// that activating the borrow will be OK
- // TOOD(bob_twinkles) is this actually the right thing to do?
+ // FIXME(bob_twinkles) is this actually the right thing to do?
this.generate_invalidates(borrow_index, context.loc);
}
}
/// predicates, or otherwise uses the inference context, executes
/// `op` and then executes all the further obligations that `op`
/// returns. This will yield a set of outlives constraints amongst
- /// regions which are extracted and stored as having occured at
+ /// regions which are extracted and stored as having occurred at
/// `locations`.
///
/// **Any `rustc::infer` operations that might generate region
// Our invariant is, that at each step of the iteration:
// - If we didn't run out of access to match, our borrow and access are comparable
// and either equal or disjoint.
- // - If we did run out of accesss, the borrow can access a part of it.
+ // - If we did run out of access, the borrow can access a part of it.
loop {
// loop invariant: borrow_c is always either equal to access_c or disjoint from it.
if let Some(borrow_c) = borrow_components.next() {
/// `sets.on_entry` to that local clone into `statement_effect` and
/// `terminator_effect`).
///
- /// When its false, no local clone is constucted; instead a
+ /// When it's false, no local clone is constructed; instead a
/// reference directly into `on_entry` is passed along via
/// `sets.on_entry` instead, which represents the flow state at
/// the block's start, not necessarily the state immediately prior
LitKind::Str(ref s, _) => {
let s = s.as_str();
let id = self.tcx.allocate_bytes(s.as_bytes());
- let value = Scalar::Ptr(id.into()).to_value_with_len(s.len() as u64, self.tcx);
- ConstValue::from_byval_value(value).unwrap()
+ ConstValue::new_slice(Scalar::Ptr(id.into()), s.len() as u64, self.tcx)
},
LitKind::ByteStr(ref data) => {
let id = self.tcx.allocate_bytes(data);
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+/// This file includes the logic for exhaustiveness and usefulness checking for
+/// pattern-matching. Specifically, given a list of patterns for a type, we can
+/// tell whether:
+/// (a) the patterns cover every possible constructor for the type [exhaustiveness]
+/// (b) each pattern is necessary [usefulness]
+///
+/// The algorithm implemented here is a modified version of the one described in:
+/// http://moscova.inria.fr/~maranget/papers/warn/index.html
+/// However, to save future implementors from reading the original paper, I'm going
+/// to summarise the algorithm here to hopefully save time and be a little clearer
+/// (without being so rigorous).
+///
+/// The core of the algorithm revolves around a "usefulness" check. In particular, we
+/// are trying to compute a predicate `U(P, p_{m + 1})` where `P` is a list of patterns
+/// of length `m` for a compound (product) type with `n` components (we refer to this as
+/// a matrix). `U(P, p_{m + 1})` represents whether, given an existing list of patterns
+/// `p_1 ..= p_m`, adding a new pattern will be "useful" (that is, cover previously-
+/// uncovered values of the type).
+///
+/// If we have this predicate, then we can easily compute both exhaustiveness of an
+/// entire set of patterns and the individual usefulness of each one.
+/// (a) the set of patterns is exhaustive iff `U(P, _)` is false (i.e. adding a wildcard
+/// match doesn't increase the number of values we're matching)
+/// (b) a pattern `p_i` is not useful if `U(P[0..=(i-1)], p_i)` is false (i.e. adding a
+/// pattern to those that have come before it doesn't increase the number of values
+/// we're matching).
+///
+/// For example, say we have the following:
+/// ```
+/// // x: (Option<bool>, Result<()>)
+/// match x {
+/// (Some(true), _) => {}
+/// (None, Err(())) => {}
+/// (None, Err(_)) => {}
+/// }
+/// ```
+/// Here, the matrix `P` is 3 x 2 (rows x columns).
+/// [
+/// [Some(true), _],
+/// [None, Err(())],
+/// [None, Err(_)],
+/// ]
+/// We can tell it's not exhaustive, because `U(P, _)` is true (we're not covering
+/// `[Some(false), _]`, for instance). In addition, row 3 is not useful, because
+/// all the values it covers are already covered by row 2.
+///
+/// To compute `U`, we must have two other concepts.
+/// 1. `S(c, P)` is a "specialised matrix", where `c` is a constructor (like `Some` or
+/// `None`). You can think of it as filtering `P` to just the rows whose *first* pattern
+/// can cover `c` (and expanding OR-patterns into distinct patterns), and then expanding
+/// the constructor into all of its components.
+/// The specialisation of a row vector is computed by `specialize`.
+///
+/// It is computed as follows. For each row `p_i` of P, we have four cases:
+/// 1.1. `p_(i,1) = c(r_1, .., r_a)`. Then `S(c, P)` has a corresponding row:
+/// r_1, .., r_a, p_(i,2), .., p_(i,n)
+/// 1.2. `p_(i,1) = c'(r_1, .., r_a')` where `c ≠ c'`. Then `S(c, P)` has no
+/// corresponding row.
+/// 1.3. `p_(i,1) = _`. Then `S(c, P)` has a corresponding row:
+/// _, .., _, p_(i,2), .., p_(i,n)
+/// 1.4. `p_(i,1) = r_1 | r_2`. Then `S(c, P)` has corresponding rows inlined from:
+/// S(c, (r_1, p_(i,2), .., p_(i,n)))
+/// S(c, (r_2, p_(i,2), .., p_(i,n)))
+///
+/// 2. `D(P)` is a "default matrix". This is used when we know there are missing
+/// constructor cases, but there might be existing wildcard patterns, so to check the
+/// usefulness of the matrix, we have to check all its *other* components.
+/// The default matrix is computed inline in `is_useful`.
+///
+/// It is computed as follows. For each row `p_i` of P, we have three cases:
+/// 2.1. `p_(i,1) = c(r_1, .., r_a)`. Then `D(P)` has no corresponding row.
+/// 2.2. `p_(i,1) = _`. Then `D(P)` has a corresponding row:
+/// p_(i,2), .., p_(i,n)
+/// 2.3. `p_(i,1) = r_1 | r_2`. Then `D(P)` has corresponding rows inlined from:
+/// D((r_1, p_(i,2), .., p_(i,n)))
+/// D((r_2, p_(i,2), .., p_(i,n)))
+///
+/// Note that the OR-patterns are not always used directly in Rust, but are used to derive
+/// the exhaustive integer matching rules, so they're written here for posterity.
+///
+/// The algorithm for computing `U`
+/// -------------------------------
+/// The algorithm is inductive (on the number of columns: i.e. components of tuple patterns).
+/// That means we're going to check the components from left-to-right, so the algorithm
+/// operates principally on the first component of the matrix and new pattern `p_{m + 1}`.
+/// This algorithm is realised in the `is_useful` function.
+///
+/// Base case. (`n = 0`, i.e. an empty tuple pattern)
+/// - If `P` already contains an empty pattern (i.e. if the number of patterns `m > 0`),
+/// then `U(P, p_{m + 1})` is false.
+/// - Otherwise, `P` must be empty, so `U(P, p_{m + 1})` is true.
+///
+/// Inductive step. (`n > 0`, i.e. there is at least one column
+/// [which may then be expanded into further columns later])
+/// We're going to match on the new pattern, `p_{m + 1}`.
+/// - If `p_{m + 1} == c(r_1, .., r_a)`, then we have a constructor pattern.
+/// Thus, the usefulness of `p_{m + 1}` can be reduced to whether it is useful when
+/// we ignore all the patterns in `P` that involve other constructors. This is where
+/// `S(c, P)` comes in:
+/// `U(P, p_{m + 1}) := U(S(c, P), S(c, p_{m + 1}))`
+/// This special case is handled in `is_useful_specialized`.
+/// - If `p_{m + 1} == _`, then we have two more cases:
+/// + All the constructors of the first component of the type exist within
+/// all the rows (after having expanded OR-patterns). In this case:
+/// `U(P, p_{m + 1}) := ∨(k ∈ constructors) U(S(k, P), S(k, p_{m + 1}))`
+/// I.e. when all the constructors are present, the pattern `p_{m + 1}` is useful iff
+/// its specialisation is useful for at least one of the constructors it covers
+/// (usually a single constructor, but all of them in the case of `_`).
+/// + Some constructors are not present in the existing rows (after having expanded
+/// OR-patterns). However, there might be wildcard patterns (`_`) present. Thus, we
+/// are only really concerned with the patterns that begin with wildcards. This is
+/// where `D` comes in:
+/// `U(P, p_{m + 1}) := U(D(P), p_({m + 1},2), .., p_({m + 1},n))`
+/// - If `p_{m + 1} == r_1 | r_2`, then the usefulness depends on each separately:
+/// `U(P, p_{m + 1}) := U(P, (r_1, p_({m + 1},2), .., p_({m + 1},n)))
+/// || U(P, (r_2, p_({m + 1},2), .., p_({m + 1},n)))`
+///
+/// Modifications to the algorithm
+/// ------------------------------
+/// The algorithm in the paper doesn't cover some of the special cases that arise in Rust, for
+/// example uninhabited types and variable-length slice patterns. Attention is drawn to these
+/// throughout the code below. I'll make a quick note here about how exhaustive integer matching
+/// is accounted for, though.
+///
+/// Exhaustive integer matching
+/// ---------------------------
+/// An integer type can be thought of as a (huge) sum type: 1 | 2 | 3 | ...
+/// So to support exhaustive integer matching, we can make use of the logic in the paper for
+/// OR-patterns. However, we obviously can't just treat ranges x..=y as individual sums, because
+/// they are likely gigantic. So we instead treat ranges as constructors of the integers. This means
+/// that we have a constructor *of* constructors (the integers themselves). We then need to work
+/// through all the inductive step rules above, deriving how the ranges would be treated as
+/// OR-patterns, and making sure that they're treated in the same way even when they're ranges.
+/// There are really only four special cases here:
+/// - When we match on a constructor that's actually a range, we have to treat it as we would
+/// an OR-pattern.
+/// + It turns out that we can simply extend the case for single-value patterns in
+/// `specialize` to either be *equal* to a value constructor, or *contained within* a range
+/// constructor.
+/// + When the pattern itself is a range, you just want to tell whether any of the values in
+/// the pattern range coincide with values in the constructor range, which is precisely
+/// intersection.
+/// Since we also use inclusion when encountering a range pattern for a value constructor,
+/// this means that whenever the constructor is a value/range and the pattern is also a
+/// value/range, we can simply use intersection to test usefulness.
+/// - When we're testing for usefulness of a pattern and the pattern's first component is a
+/// wildcard.
+/// + If all the constructors appear in the matrix, we have a slight complication. By default,
+/// the behaviour (i.e. a disjunction over specialised matrices for each constructor) is
+/// invalid, because we want a disjunction over every *integer* in each range, not just a
+/// disjunction over every range. This is a bit more tricky to deal with: essentially we need
+/// to form equivalence classes of subranges of the constructor range for which the behaviour
+/// of the matrix `P` and new pattern `p_{m + 1}` are the same. This is described in more
+/// detail in `split_grouped_constructors`.
+/// + If some constructors are missing from the matrix, it turns out we don't need to do
+/// anything special (because we know none of the integers are actually wildcards: i.e. we
+/// can't span wildcards using ranges).
+
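The usefulness predicate described above can be exercised on a deliberately tiny model. The sketch below is not rustc's implementation: the pattern language is reduced to a single column of bare boolean patterns (no constructors with arguments, no or-patterns), so `U(P, q)` can be checked by enumerating the two values directly instead of specialising by constructor.

```rust
// A toy model of the usefulness predicate `U(P, q)` for a single column
// of boolean patterns. Unlike the real algorithm, it simply enumerates
// the two possible values.
#[derive(Clone, Copy, PartialEq)]
enum Pat {
    Wild,      // `_`
    Val(bool), // `true` or `false`
}

// A pattern covers a value if it is a wildcard or equals it.
fn covers(p: Pat, v: bool) -> bool {
    match p {
        Pat::Wild => true,
        Pat::Val(b) => b == v,
    }
}

// `U(P, q)`: does `q` match some value that no row of `P` matches?
fn is_useful(rows: &[Pat], q: Pat) -> bool {
    [true, false]
        .iter()
        .any(|&v| covers(q, v) && !rows.iter().any(|&p| covers(p, v)))
}

fn main() {
    let rows = [Pat::Val(true)];
    // `U(P, _)` is true, so the match is not exhaustive: `false` is uncovered.
    assert!(is_useful(&rows, Pat::Wild));
    // A duplicate arm is not useful.
    assert!(!is_useful(&rows, Pat::Val(true)));
    // Once both values are covered, a wildcard arm is no longer useful.
    assert!(!is_useful(&[Pat::Val(true), Pat::Val(false)], Pat::Wild));
}
```

Exhaustiveness then falls out as property (a) above: the rows are exhaustive iff `is_useful(rows, Pat::Wild)` is false.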
use self::Constructor::*;
use self::Usefulness::*;
use self::WitnessPreference::*;
use rustc::hir::def_id::DefId;
use rustc::hir::RangeEnd;
use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
+use rustc::ty::layout::{Integer, IntegerExt};
use rustc::mir::Field;
use rustc::mir::interpret::ConstValue;
use rustc::util::common::ErrorReported;
+use syntax::attr::{SignedInt, UnsignedInt};
use syntax_pos::{Span, DUMMY_SP};
use arena::TypedArena;
-use std::cmp::{self, Ordering};
+use std::cmp::{self, Ordering, min, max};
use std::fmt;
use std::iter::{FromIterator, IntoIterator};
+use std::ops::RangeInclusive;
+use std::u128;
pub fn expand_pattern<'a, 'tcx>(cx: &MatchCheckCtxt<'a, 'tcx>, pat: Pattern<'tcx>)
-> &'a Pattern<'tcx>
}
}
-//NOTE: appears to be the only place other then InferCtxt to contain a ParamEnv
pub struct MatchCheckCtxt<'a, 'tcx: 'a> {
pub tcx: TyCtxt<'a, 'tcx, 'tcx>,
/// The module in which the match occurs. This is necessary for
tcx,
module,
pattern_arena: &pattern_arena,
- byte_array_map: FxHashMap(),
+ byte_array_map: FxHashMap::default(),
})
}
}
}
-#[derive(Clone)]
+#[derive(Clone, Debug)]
pub enum Usefulness<'tcx> {
Useful,
UsefulWithWitness(Vec<Witness<'tcx>>),
}
}
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
pub enum WitnessPreference {
ConstructWitness,
LeaveOutWitness
max_slice_length: u64,
}
-/// A stack of patterns in reverse order of construction
-#[derive(Clone)]
+/// A witness of non-exhaustiveness for error reporting, represented
+/// as a list of patterns (in reverse order of construction) with
+/// wildcards inside to represent elements that can take any inhabitant
+/// of the type as a value.
+///
+/// A witness against a list of patterns should have the same types
+/// and length as the pattern matched against. Because Rust `match`
+/// is always against a single pattern, at the end the witness will
+/// have length 1, but in the middle of the algorithm, it can contain
+/// multiple patterns.
+///
+/// For example, if we are constructing a witness for the match against
+/// ```
+/// struct Pair(Option<(u32, u32)>, bool);
+///
+/// match (p: Pair) {
+/// Pair(None, _) => {}
+/// Pair(_, false) => {}
+/// }
+/// ```
+///
+/// We'll perform the following steps:
+/// 1. Start with an empty witness
+/// `Witness(vec![])`
+/// 2. Push a witness `Some(_)` against the `None`
+/// `Witness(vec![Some(_)])`
+/// 3. Push a witness `true` against the `false`
+/// `Witness(vec![Some(_), true])`
+/// 4. Apply the `Pair` constructor to the witnesses
+/// `Witness(vec![Pair(Some(_), true)])`
+///
+/// The final `Pair(Some(_), true)` is then the resulting witness.
+#[derive(Clone, Debug)]
pub struct Witness<'tcx>(Vec<Pattern<'tcx>>);
impl<'tcx> Witness<'tcx> {
let arity = constructor_arity(cx, ctor, ty);
let pat = {
let len = self.0.len() as u64;
- let mut pats = self.0.drain((len-arity) as usize..).rev();
+ let mut pats = self.0.drain((len - arity) as usize..).rev();
match ty.sty {
ty::TyAdt(..) |
_ => {
match *ctor {
ConstantValue(value) => PatternKind::Constant { value },
+ ConstantRange(lo, hi, end) => PatternKind::Range { lo, hi, end },
_ => PatternKind::Wild,
}
}
/// but is instead bounded by the maximum fixed length of slice patterns in
/// the column of patterns being analyzed.
///
-/// This intentionally does not list ConstantValue specializations for
-/// non-booleans, because we currently assume that there is always a
-/// "non-standard constant" that matches. See issue #12483.
-///
/// We make sure to omit constructors that are statically impossible. eg for
/// Option<!> we do not include Some(_) in the returned list of constructors.
fn all_constructors<'a, 'tcx: 'a>(cx: &mut MatchCheckCtxt<'a, 'tcx>,
-> Vec<Constructor<'tcx>>
{
debug!("all_constructors({:?})", pcx.ty);
- match pcx.ty.sty {
+ let exhaustive_integer_patterns = cx.tcx.features().exhaustive_integer_patterns;
+ let ctors = match pcx.ty.sty {
ty::TyBool => {
[true, false].iter().map(|&b| {
ConstantValue(ty::Const::from_bool(cx.tcx, b))
.map(|v| Variant(v.did))
.collect()
}
+ ty::TyChar if exhaustive_integer_patterns => {
+ let endpoint = |c: char| {
+ let ty = ty::ParamEnv::empty().and(cx.tcx.types.char);
+ ty::Const::from_bits(cx.tcx, c as u128, ty)
+ };
+ vec![
+ // The valid Unicode Scalar Value ranges.
+ ConstantRange(endpoint('\u{0000}'), endpoint('\u{D7FF}'), RangeEnd::Included),
+ ConstantRange(endpoint('\u{E000}'), endpoint('\u{10FFFF}'), RangeEnd::Included),
+ ]
+ }
+ ty::TyInt(ity) if exhaustive_integer_patterns => {
+ // FIXME(49937): refactor these bit manipulations into interpret.
+ let bits = Integer::from_attr(cx.tcx, SignedInt(ity)).size().bits() as u128;
+ let min = 1u128 << (bits - 1);
+ let max = (1u128 << (bits - 1)) - 1;
+ let ty = ty::ParamEnv::empty().and(pcx.ty);
+ vec![ConstantRange(ty::Const::from_bits(cx.tcx, min as u128, ty),
+ ty::Const::from_bits(cx.tcx, max as u128, ty),
+ RangeEnd::Included)]
+ }
+ ty::TyUint(uty) if exhaustive_integer_patterns => {
+ // FIXME(49937): refactor these bit manipulations into interpret.
+ let bits = Integer::from_attr(cx.tcx, UnsignedInt(uty)).size().bits() as u128;
+ let max = !0u128 >> (128 - bits);
+ let ty = ty::ParamEnv::empty().and(pcx.ty);
+ vec![ConstantRange(ty::Const::from_bits(cx.tcx, 0, ty),
+ ty::Const::from_bits(cx.tcx, max, ty),
+ RangeEnd::Included)]
+ }
_ => {
if cx.is_uninhabited(pcx.ty) {
vec![]
vec![Single]
}
}
- }
+ };
+ ctors
}
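The `TyInt`/`TyUint` arms above derive the extreme raw-bit values of a type purely from its bit-width. A standalone check of that arithmetic (the function names here are ad hoc, not part of the compiler):

```rust
// The minimum signed value is just the sign bit; the maximum has every
// bit except the sign bit set. Unsigned types have all `bits` bits set.
fn signed_extremes(bits: u32) -> (u128, u128) {
    let min = 1u128 << (bits - 1);
    let max = (1u128 << (bits - 1)) - 1;
    (min, max)
}

fn unsigned_max(bits: u32) -> u128 {
    !0u128 >> (128 - bits)
}

fn main() {
    let (min, max) = signed_extremes(8);
    // Raw bits 0x80 and 0x7F are i8::MIN and i8::MAX respectively.
    assert_eq!((min, max), (0x80, 0x7F));
    assert_eq!(min as u8 as i8, i8::MIN);
    assert_eq!(max as u8 as i8, i8::MAX);
    // u16: all sixteen bits set.
    assert_eq!(unsigned_max(16), 0xFFFF);
}
```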
fn max_slice_length<'p, 'a: 'p, 'tcx: 'a, I>(
// `[true, ..]`
// `[.., false]`
// Then any slice of length ≥1 that matches one of these two
- // patterns can be be trivially turned to a slice of any
+ // patterns can be trivially turned to a slice of any
// other length ≥1 that matches them and vice-versa - for
// but the slice from length 2 `[false, true]` that matches neither
// of these patterns can't be turned to a slice from length 1 that
cmp::max(max_fixed_len + 1, max_prefix_len + max_suffix_len)
}
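The bound returned above, `cmp::max(max_fixed_len + 1, max_prefix_len + max_suffix_len)`, can be checked on a standalone sketch. The inputs here are a hypothetical summary of the column's slice patterns (fixed lengths, plus prefix/suffix lengths of variable-length patterns), not the real `Pattern` type:

```rust
// Sketch: bound on slice lengths worth distinguishing, given the
// fixed-length array patterns and the variable-length (prefix, suffix)
// slice patterns appearing in a column.
fn slice_length_bound(fixed_lens: &[u64], prefix_suffix: &[(u64, u64)]) -> u64 {
    let max_fixed_len = fixed_lens.iter().cloned().max().unwrap_or(0);
    let max_prefix_len = prefix_suffix.iter().map(|&(p, _)| p).max().unwrap_or(0);
    let max_suffix_len = prefix_suffix.iter().map(|&(_, s)| s).max().unwrap_or(0);
    (max_fixed_len + 1).max(max_prefix_len + max_suffix_len)
}

fn main() {
    // `[true, ..]` has prefix length 1; `[.., false]` has suffix length 1.
    // A length-2 slice like `[false, true]` can match neither, while any
    // longer slice behaves like some slice of length 2, so the bound is 2.
    assert_eq!(slice_length_bound(&[], &[(1, 0), (0, 1)]), 2);
    // Adding a fixed-length pattern `[_, _, _]` forces checking up to length 4.
    assert_eq!(slice_length_bound(&[3], &[(1, 0), (0, 1)]), 4);
}
```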
+/// An inclusive interval, used for precise integer exhaustiveness checking.
+/// `IntRange`s always store a contiguous range. This means that values are
+/// encoded such that `0` encodes the minimum value for the integer,
+/// regardless of the signedness.
+/// For example, the pattern `-128...127i8` is encoded as `0..=255`.
+/// This makes comparisons and arithmetic on interval endpoints much more
+/// straightforward. See `signed_bias` for details.
+///
+/// `IntRange` is never used to encode an empty range or a "range" that wraps
+/// around the (offset) space: i.e. `range.lo <= range.hi`.
+#[derive(Clone)]
+struct IntRange<'tcx> {
+ pub range: RangeInclusive<u128>,
+ pub ty: Ty<'tcx>,
+}
+
+impl<'tcx> IntRange<'tcx> {
+ fn from_ctor(tcx: TyCtxt<'_, 'tcx, 'tcx>,
+ ctor: &Constructor<'tcx>)
+ -> Option<IntRange<'tcx>> {
+ match ctor {
+ ConstantRange(lo, hi, end) => {
+ assert_eq!(lo.ty, hi.ty);
+ let ty = lo.ty;
+ let env_ty = ty::ParamEnv::empty().and(ty);
+ if let Some(lo) = lo.assert_bits(tcx, env_ty) {
+ if let Some(hi) = hi.assert_bits(tcx, env_ty) {
+ // Perform a shift if the underlying types are signed,
+ // which makes the interval arithmetic simpler.
+ let bias = IntRange::signed_bias(tcx, ty);
+ let (lo, hi) = (lo ^ bias, hi ^ bias);
+ // Make sure the interval is well-formed.
+ return if lo > hi || lo == hi && *end == RangeEnd::Excluded {
+ None
+ } else {
+ let offset = (*end == RangeEnd::Excluded) as u128;
+ Some(IntRange { range: lo..=(hi - offset), ty })
+ };
+ }
+ }
+ None
+ }
+ ConstantValue(val) => {
+ let ty = val.ty;
+ if let Some(val) = val.assert_bits(tcx, ty::ParamEnv::empty().and(ty)) {
+ let bias = IntRange::signed_bias(tcx, ty);
+ let val = val ^ bias;
+ Some(IntRange { range: val..=val, ty })
+ } else {
+ None
+ }
+ }
+ Single | Variant(_) | Slice(_) => {
+ None
+ }
+ }
+ }
+
+ fn from_pat(tcx: TyCtxt<'_, 'tcx, 'tcx>,
+ pat: &Pattern<'tcx>)
+ -> Option<IntRange<'tcx>> {
+ Self::from_ctor(tcx, &match pat.kind {
+ box PatternKind::Constant { value } => ConstantValue(value),
+ box PatternKind::Range { lo, hi, end } => ConstantRange(lo, hi, end),
+ _ => return None,
+ })
+ }
+
+ /// The return value of `signed_bias` should be XORed with an endpoint to encode/decode it.
+ fn signed_bias(tcx: TyCtxt<'_, 'tcx, 'tcx>, ty: Ty<'tcx>) -> u128 {
+ match ty.sty {
+ ty::TyInt(ity) => {
+ let bits = Integer::from_attr(tcx, SignedInt(ity)).size().bits() as u128;
+ 1u128 << (bits - 1)
+ }
+ _ => 0
+ }
+ }
+
+ /// Convert a `RangeInclusive` to a `ConstantValue` or inclusive `ConstantRange`.
+ fn range_to_ctor(
+ tcx: TyCtxt<'_, 'tcx, 'tcx>,
+ ty: Ty<'tcx>,
+ r: RangeInclusive<u128>,
+ ) -> Constructor<'tcx> {
+ let bias = IntRange::signed_bias(tcx, ty);
+ let ty = ty::ParamEnv::empty().and(ty);
+ let (lo, hi) = r.into_inner();
+ if lo == hi {
+ ConstantValue(ty::Const::from_bits(tcx, lo ^ bias, ty))
+ } else {
+ ConstantRange(ty::Const::from_bits(tcx, lo ^ bias, ty),
+ ty::Const::from_bits(tcx, hi ^ bias, ty),
+ RangeEnd::Included)
+ }
+ }
+
+ /// Return a collection of ranges that spans the values covered by `ranges`, subtracted
+ /// by the values covered by `self`: i.e. `ranges \ self` (in set notation).
+ fn subtract_from(self,
+ tcx: TyCtxt<'_, 'tcx, 'tcx>,
+ ranges: Vec<Constructor<'tcx>>)
+ -> Vec<Constructor<'tcx>> {
+ let ranges = ranges.into_iter().filter_map(|r| {
+ IntRange::from_ctor(tcx, &r).map(|i| i.range)
+ });
+ let mut remaining_ranges = vec![];
+ let ty = self.ty;
+ let (lo, hi) = self.range.into_inner();
+ for subrange in ranges {
+ let (subrange_lo, subrange_hi) = subrange.into_inner();
+ if lo > subrange_hi || subrange_lo > hi {
+ // The pattern doesn't intersect with the subrange at all,
+ // so the subrange remains untouched.
+ remaining_ranges.push(Self::range_to_ctor(tcx, ty, subrange_lo..=subrange_hi));
+ } else {
+ if lo > subrange_lo {
+ // The pattern intersects an upper section of the
+ // subrange, so a lower section will remain.
+ remaining_ranges.push(Self::range_to_ctor(tcx, ty, subrange_lo..=(lo - 1)));
+ }
+ if hi < subrange_hi {
+ // The pattern intersects a lower section of the
+ // subrange, so an upper section will remain.
+ remaining_ranges.push(Self::range_to_ctor(tcx, ty, (hi + 1)..=subrange_hi));
+ }
+ }
+ }
+ remaining_ranges
+ }
+
+ fn intersection(&self, other: &Self) -> Option<Self> {
+ let ty = self.ty;
+ let (lo, hi) = (*self.range.start(), *self.range.end());
+ let (other_lo, other_hi) = (*other.range.start(), *other.range.end());
+ if lo <= other_hi && other_lo <= hi {
+ Some(IntRange { range: max(lo, other_lo)..=min(hi, other_hi), ty })
+ } else {
+ None
+ }
+ }
+}
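The signed-bias encoding and `intersection` above can be exercised in isolation. This sketch hard-codes `i8` and works directly on `RangeInclusive<u128>` rather than `IntRange` (the names are ad hoc):

```rust
use std::ops::RangeInclusive;

// Sign bit of i8: XORing with it maps the signed value space onto
// 0..=255 while preserving order.
const I8_BIAS: u128 = 1 << 7;

fn encode_i8(v: i8) -> u128 {
    (v as u8 as u128) ^ I8_BIAS
}

// Mirrors `IntRange::intersection`: overlap iff neither range is
// entirely below the other.
fn intersect(
    a: RangeInclusive<u128>,
    b: RangeInclusive<u128>,
) -> Option<RangeInclusive<u128>> {
    let (lo, hi) = (*a.start(), *a.end());
    let (other_lo, other_hi) = (*b.start(), *b.end());
    if lo <= other_hi && other_lo <= hi {
        Some(lo.max(other_lo)..=hi.min(other_hi))
    } else {
        None
    }
}

fn main() {
    // -128 -> 0, -1 -> 127, 0 -> 128, 127 -> 255: order is preserved.
    assert_eq!(encode_i8(-128), 0);
    assert_eq!(encode_i8(-1), 127);
    assert_eq!(encode_i8(0), 128);
    assert_eq!(encode_i8(127), 255);
    // `-10..=10` and `0..=127` intersect in `0..=10` (encoded 128..=138).
    let a = encode_i8(-10)..=encode_i8(10);
    let b = encode_i8(0)..=encode_i8(127);
    assert_eq!(intersect(a, b), Some(128..=138));
    assert_eq!(intersect(0..=10, 20..=30), None);
}
```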
+
+/// Return a set of constructors equivalent to `all_ctors \ used_ctors`.
+fn compute_missing_ctors<'a, 'tcx: 'a>(
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ all_ctors: &Vec<Constructor<'tcx>>,
+ used_ctors: &Vec<Constructor<'tcx>>,
+) -> Vec<Constructor<'tcx>> {
+ let mut missing_ctors = vec![];
+
+ for req_ctor in all_ctors {
+ let mut refined_ctors = vec![req_ctor.clone()];
+ for used_ctor in used_ctors {
+ if used_ctor == req_ctor {
+ // If a constructor appears in a `match` arm, we can
+ // eliminate it straight away.
+ refined_ctors = vec![]
+ } else if tcx.features().exhaustive_integer_patterns {
+ if let Some(interval) = IntRange::from_ctor(tcx, used_ctor) {
+ // Refine the required constructors for the type by subtracting
+ // the range defined by the current constructor pattern.
+ refined_ctors = interval.subtract_from(tcx, refined_ctors);
+ }
+ }
+
+ // If the constructor patterns that have been considered so far
+ // already cover the entire range of values, then the
+ // constructor is not missing, and we can move on to the next one.
+ if refined_ctors.is_empty() {
+ break;
+ }
+ }
+ // If a constructor has not been matched, then it is missing.
+ // We add `refined_ctors` instead of `req_ctor`, because then we can
+ // provide more detailed error information about precisely which
+ // ranges have been omitted.
+ missing_ctors.extend(refined_ctors);
+ }
+
+ missing_ctors
+}
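The subtraction loop above can be sketched on plain `u128` intervals: start from the full range of the type and subtract each used range; whatever survives is "missing". This is a simplified stand-in for `compute_missing_ctors`/`subtract_from`, not the compiler code itself:

```rust
use std::ops::RangeInclusive;

// `ranges \ sub`: keep the parts of each range not covered by `sub`.
fn subtract(
    ranges: Vec<RangeInclusive<u128>>,
    sub: &RangeInclusive<u128>,
) -> Vec<RangeInclusive<u128>> {
    let (lo, hi) = (*sub.start(), *sub.end());
    let mut out = vec![];
    for r in ranges {
        let (r_lo, r_hi) = r.into_inner();
        if lo > r_hi || r_lo > hi {
            out.push(r_lo..=r_hi); // no overlap: keep untouched
        } else {
            if lo > r_lo {
                out.push(r_lo..=(lo - 1)); // lower remainder
            }
            if hi < r_hi {
                out.push((hi + 1)..=r_hi); // upper remainder
            }
        }
    }
    out
}

fn missing(
    full: RangeInclusive<u128>,
    used: &[RangeInclusive<u128>],
) -> Vec<RangeInclusive<u128>> {
    let mut remaining = vec![full];
    for u in used {
        remaining = subtract(remaining, u);
        if remaining.is_empty() {
            break; // everything covered: nothing is missing
        }
    }
    remaining
}

fn main() {
    // Matching `0..=100` and `200..=255` over u8 leaves `101..=199` uncovered.
    assert_eq!(missing(0..=255, &[0..=100, 200..=255]), vec![101..=199]);
    // Covering the whole range leaves nothing missing.
    assert!(missing(0..=255, &[0..=255]).is_empty());
}
```

As in the real code, reporting the refined remainders (rather than the whole original range) is what lets diagnostics say precisely which values are unmatched.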
+
/// Algorithm from http://moscova.inria.fr/~maranget/papers/warn/index.html
/// The algorithm from the paper has been modified to correctly handle empty
/// types. The changes are:
// FIXME: this might lead to "unstable" behavior with macro hygiene
// introducing uninhabited patterns for inaccessible fields. We
// need to figure out how to model that.
- ty: rows.iter().map(|r| r[0].ty).find(|ty| !ty.references_error())
- .unwrap_or(v[0].ty),
+ ty: rows.iter().map(|r| r[0].ty).find(|ty| !ty.references_error()).unwrap_or(v[0].ty),
max_slice_length: max_slice_length(cx, rows.iter().map(|r| r[0]).chain(Some(v[0])))
};
if let Some(constructors) = pat_constructors(cx, v[0], pcx) {
debug!("is_useful - expanding constructors: {:#?}", constructors);
- constructors.into_iter().map(|c|
+ split_grouped_constructors(cx.tcx, constructors, matrix, pcx.ty).into_iter().map(|c|
is_useful_specialized(cx, matrix, v, c.clone(), pcx.ty, witness)
).find(|result| result.is_useful()).unwrap_or(NotUseful)
} else {
pat_constructors(cx, row[0], pcx).unwrap_or(vec![])
}).collect();
debug!("used_ctors = {:#?}", used_ctors);
+ // `all_ctors` are all the constructors for the given type, which
+ // should all be represented (or caught with the wild pattern `_`).
let all_ctors = all_constructors(cx, pcx);
debug!("all_ctors = {:#?}", all_ctors);
- let missing_ctors: Vec<Constructor> = all_ctors.iter().filter(|c| {
- !used_ctors.contains(*c)
- }).cloned().collect();
// `missing_ctors` is the set of constructors from the same type as the
// first column of `matrix` that are matched only by wildcard patterns
// feature flag is not present, so this is only
// needed for that case.
- let is_privately_empty =
- all_ctors.is_empty() && !cx.is_uninhabited(pcx.ty);
- let is_declared_nonexhaustive =
- cx.is_non_exhaustive_enum(pcx.ty) && !cx.is_local(pcx.ty);
+ // Find those constructors that are not matched by any non-wildcard patterns in the
+ // current column.
+ let missing_ctors = compute_missing_ctors(cx.tcx, &all_ctors, &used_ctors);
+
+ let is_privately_empty = all_ctors.is_empty() && !cx.is_uninhabited(pcx.ty);
+ let is_declared_nonexhaustive = cx.is_non_exhaustive_enum(pcx.ty) && !cx.is_local(pcx.ty);
debug!("missing_ctors={:#?} is_privately_empty={:#?} is_declared_nonexhaustive={:#?}",
missing_ctors, is_privately_empty, is_declared_nonexhaustive);
let is_non_exhaustive = is_privately_empty || is_declared_nonexhaustive;
if missing_ctors.is_empty() && !is_non_exhaustive {
- all_ctors.into_iter().map(|c| {
+ split_grouped_constructors(cx.tcx, all_ctors, matrix, pcx.ty).into_iter().map(|c| {
is_useful_specialized(cx, matrix, v, c.clone(), pcx.ty, witness)
}).find(|result| result.is_useful()).unwrap_or(NotUseful)
} else {
// `used_ctors` is empty.
let new_witnesses = if is_non_exhaustive || used_ctors.is_empty() {
// All constructors are unused. Add wild patterns
- // rather than each individual constructor
+ // rather than each individual constructor.
pats.into_iter().map(|mut witness| {
witness.0.push(Pattern {
ty: pcx.ty,
} else {
pats.into_iter().flat_map(|witness| {
missing_ctors.iter().map(move |ctor| {
+ // Extends the witness with a "wild" version of this
+ // constructor, that matches everything that can be built with
+ // it. For example, if `ctor` is a `Constructor::Variant` for
+ // `Option::Some`, this pushes the witness for `Some(_)`.
witness.clone().push_wild_constructor(cx, ctor, pcx.ty)
})
}).collect()
}
}
+/// A shorthand for the `U(S(c, P), S(c, q))` operation from the paper. I.e. `is_useful` applied
+/// to the specialised version of both the pattern matrix `P` and the new pattern `q`.
fn is_useful_specialized<'p, 'a:'p, 'tcx: 'a>(
cx: &mut MatchCheckCtxt<'a, 'tcx>,
&Matrix(ref m): &Matrix<'p, 'tcx>,
v: &[&'p Pattern<'tcx>],
ctor: Constructor<'tcx>,
lty: Ty<'tcx>,
- witness: WitnessPreference) -> Usefulness<'tcx>
-{
+ witness: WitnessPreference,
+) -> Usefulness<'tcx> {
debug!("is_useful_specialized({:#?}, {:#?}, {:?})", v, ctor, lty);
let sub_pat_tys = constructor_sub_pattern_tys(cx, &ctor, lty);
let wild_patterns_owned: Vec<_> = sub_pat_tys.iter().map(|ty| {
.collect()
),
result => result
- },
+ }
None => NotUseful
}
}
/// Slice patterns, however, can match slices of different lengths. For instance,
/// `[a, b, ..tail]` can match a slice of length 2, 3, 4 and so on.
///
-/// Returns None in case of a catch-all, which can't be specialized.
+/// Returns `None` in case of a catch-all, which can't be specialized.
fn pat_constructors<'tcx>(cx: &mut MatchCheckCtxt,
pat: &Pattern<'tcx>,
pcx: PatternContext)
-> Option<Vec<Constructor<'tcx>>>
{
match *pat.kind {
- PatternKind::Binding { .. } | PatternKind::Wild =>
- None,
- PatternKind::Leaf { .. } | PatternKind::Deref { .. } =>
- Some(vec![Single]),
- PatternKind::Variant { adt_def, variant_index, .. } =>
- Some(vec![Variant(adt_def.variants[variant_index].did)]),
- PatternKind::Constant { value } =>
- Some(vec![ConstantValue(value)]),
- PatternKind::Range { lo, hi, end } =>
- Some(vec![ConstantRange(lo, hi, end)]),
+ PatternKind::Binding { .. } | PatternKind::Wild => None,
+ PatternKind::Leaf { .. } | PatternKind::Deref { .. } => Some(vec![Single]),
+ PatternKind::Variant { adt_def, variant_index, .. } => {
+ Some(vec![Variant(adt_def.variants[variant_index].did)])
+ }
+ PatternKind::Constant { value } => Some(vec![ConstantValue(value)]),
+ PatternKind::Range { lo, hi, end } => Some(vec![ConstantRange(lo, hi, end)]),
PatternKind::Array { .. } => match pcx.ty.sty {
ty::TyArray(_, length) => Some(vec![
Slice(length.unwrap_usize(cx.tcx))
Ok(true)
}
+/// Whether to evaluate a constructor using exhaustive integer matching. This is true if the
+/// constructor is a range or constant with an integer type.
+fn should_treat_range_exhaustively(tcx: TyCtxt<'_, 'tcx, 'tcx>, ctor: &Constructor<'tcx>) -> bool {
+ if tcx.features().exhaustive_integer_patterns {
+ if let ConstantValue(value) | ConstantRange(value, _, _) = ctor {
+ if let ty::TyChar | ty::TyInt(_) | ty::TyUint(_) = value.ty.sty {
+ return true;
+ }
+ }
+ }
+ false
+}
+
+/// For exhaustive integer matching, some constructors are grouped within other constructors
+/// (namely integer typed values are grouped within ranges). However, when specialising these
+/// constructors, we want to be specialising for the underlying constructors (the integers), not
+/// the groups (the ranges). Thus we need to split the groups up. Splitting them up naïvely would
+/// mean creating a separate constructor for every single value in the range, which is clearly
+/// impractical. However, observe that for some ranges of integers, the specialisation will be
+/// identical across all values in that range (i.e. there are equivalence classes of ranges of
+/// constructors based on their `is_useful_specialized` outcome). These classes are grouped by
+/// the patterns that apply to them (in the matrix `P`). We can split the range whenever the
+/// patterns that apply to that range (specifically: the patterns that *intersect* with that range)
+/// change.
+/// Our solution, therefore, is to split the range constructor into subranges at every single point
+/// the group of intersecting patterns changes (using the method described below).
+/// And voilà! We're testing precisely those ranges that we need to, without any exhaustive matching
+/// on actual integers. The nice thing about this is that the number of subranges is linear in the
+/// number of rows in the matrix (i.e. the number of cases in the `match` statement), so we don't
+/// need to be worried about matching over gargantuan ranges.
+///
+/// Essentially, given the first column of a matrix representing ranges, looking like the following:
+///
+/// |------| |----------| |-------| ||
+/// |-------| |-------| |----| ||
+/// |---------|
+///
+/// We split the ranges up into equivalence classes so the ranges are no longer overlapping:
+///
+/// |--|--|||-||||--||---|||-------| |-|||| ||
+///
+/// The logic for determining how to split the ranges is fairly straightforward: we calculate
+/// boundaries for each interval range, sort them, then create constructors for each new interval
+/// between every pair of boundary points. (This essentially sums up to performing the intuitive
+/// merging operation depicted above.)
+fn split_grouped_constructors<'p, 'a: 'p, 'tcx: 'a>(
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ ctors: Vec<Constructor<'tcx>>,
+ &Matrix(ref m): &Matrix<'p, 'tcx>,
+ ty: Ty<'tcx>,
+) -> Vec<Constructor<'tcx>> {
+ let mut split_ctors = Vec::with_capacity(ctors.len());
+
+ for ctor in ctors.into_iter() {
+ match ctor {
+ // For now, only ranges may denote groups of "subconstructors", so we only need to
+ // special-case constant ranges.
+ ConstantRange(..) if should_treat_range_exhaustively(tcx, &ctor) => {
+ // We only care about finding all the subranges within the range of the constructor
+ // range. Anything else is irrelevant, because it is guaranteed to result in
+ // `NotUseful`, which is the default case anyway, and can be ignored.
+ let ctor_range = IntRange::from_ctor(tcx, &ctor).unwrap();
+
+ /// Represents a border between 2 integers. Because the intervals spanning borders
+ /// must be able to cover every integer, we need to be able to represent
+ /// 2^128 + 1 such borders.
+ #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
+ enum Border {
+ JustBefore(u128),
+ AfterMax,
+ }
+
+ // A function for extracting the borders of an integer interval.
+ fn range_borders(r: IntRange<'_>) -> impl Iterator<Item = Border> {
+ let (lo, hi) = r.range.into_inner();
+ let from = Border::JustBefore(lo);
+ let to = match hi.checked_add(1) {
+ Some(m) => Border::JustBefore(m),
+ None => Border::AfterMax,
+ };
+ vec![from, to].into_iter()
+ }
+
+ // `borders` is the set of borders between equivalence classes: each equivalence
+ // class lies between 2 borders.
+ let row_borders = m.iter()
+ .flat_map(|row| IntRange::from_pat(tcx, row[0]))
+ .flat_map(|range| ctor_range.intersection(&range))
+ .flat_map(|range| range_borders(range));
+ let ctor_borders = range_borders(ctor_range.clone());
+ let mut borders: Vec<_> = row_borders.chain(ctor_borders).collect();
+ borders.sort_unstable();
+
+ // We're going to iterate through every pair of borders, making sure that each
+ // represents an interval of positive length, and convert each such interval
+ // into a constructor.
+ for IntRange { range, .. } in borders.windows(2).filter_map(|window| {
+ match (window[0], window[1]) {
+ (Border::JustBefore(n), Border::JustBefore(m)) => {
+ if n < m {
+ Some(IntRange { range: n..=(m - 1), ty })
+ } else {
+ None
+ }
+ }
+ (Border::JustBefore(n), Border::AfterMax) => {
+ Some(IntRange { range: n..=u128::MAX, ty })
+ }
+ (Border::AfterMax, _) => None,
+ }
+ }) {
+ split_ctors.push(IntRange::range_to_ctor(tcx, ty, range));
+ }
+ }
+ // Any other constructor can be used unchanged.
+ _ => split_ctors.push(ctor),
+ }
+ }
+
+ split_ctors
+}
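The border-splitting step can be sketched on plain integer pairs. This ad-hoc version uses `u64` and sidesteps the `Border::AfterMax` overflow case above by assuming no range reaches `u64::MAX`; deduplicating the sorted borders plays the role of the `n < m` filter:

```rust
// Collect the "borders" of the pattern ranges that intersect the
// constructor range, sort them, and emit one subrange per adjacent pair.
// On each emitted subrange, the set of intersecting rows is constant.
fn split_range(ctor: (u64, u64), rows: &[(u64, u64)]) -> Vec<(u64, u64)> {
    let (c_lo, c_hi) = ctor;
    let mut borders = vec![c_lo, c_hi + 1];
    for &(lo, hi) in rows {
        // Only the part of a row inside the constructor range matters.
        if lo <= c_hi && c_lo <= hi {
            borders.push(lo.max(c_lo));
            borders.push(hi.min(c_hi) + 1);
        }
    }
    borders.sort_unstable();
    borders.dedup();
    // Each pair of adjacent borders delimits one equivalence class.
    borders.windows(2).map(|w| (w[0], w[1] - 1)).collect()
}

fn main() {
    // Constructor range 0..=10 against rows `2..=4` and `4..=7` splits
    // into subranges on which the intersecting rows do not change.
    assert_eq!(
        split_range((0, 10), &[(2, 4), (4, 7)]),
        vec![(0, 1), (2, 3), (4, 4), (5, 7), (8, 10)]
    );
}
```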
+
+/// Check whether there exists any shared value in either `ctor` or `pat` by intersecting them.
+fn constructor_intersects_pattern<'p, 'a: 'p, 'tcx: 'a>(
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ ctor: &Constructor<'tcx>,
+ pat: &'p Pattern<'tcx>,
+) -> Option<Vec<&'p Pattern<'tcx>>> {
+ if should_treat_range_exhaustively(tcx, ctor) {
+ match (IntRange::from_ctor(tcx, ctor), IntRange::from_pat(tcx, pat)) {
+ (Some(ctor), Some(pat)) => {
+ ctor.intersection(&pat).map(|_| {
+ let (pat_lo, pat_hi) = pat.range.into_inner();
+ let (ctor_lo, ctor_hi) = ctor.range.into_inner();
+ assert!(pat_lo <= ctor_lo && ctor_hi <= pat_hi);
+ vec![]
+ })
+ }
+ _ => None,
+ }
+ } else {
+ // Fallback for non-ranges and ranges that involve floating-point numbers, which are not
+ // conveniently handled by `IntRange`. For these cases, the constructor may not be a range
+ // so intersection actually devolves into being covered by the pattern.
+ match constructor_covered_by_range(tcx, ctor, pat) {
+ Ok(true) => Some(vec![]),
+ Ok(false) | Err(ErrorReported) => None,
+ }
+ }
+}
+
fn constructor_covered_by_range<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
ctor: &Constructor<'tcx>,
- from: &'tcx ty::Const<'tcx>, to: &'tcx ty::Const<'tcx>,
- end: RangeEnd,
- ty: Ty<'tcx>,
+ pat: &Pattern<'tcx>,
) -> Result<bool, ErrorReported> {
+ let (from, to, end, ty) = match pat.kind {
+ box PatternKind::Constant { value } => (value, value, RangeEnd::Included, value.ty),
+ box PatternKind::Range { lo, hi, end } => (lo, hi, end, lo.ty),
+ _ => bug!("`constructor_covered_by_range` called with {:?}", pat),
+ };
trace!("constructor_covered_by_range {:#?}, {:#?}, {:#?}, {}", ctor, from, to, ty);
let cmp_from = |c_from| compare_const_vals(tcx, c_from, from, ty::ParamEnv::empty().and(ty))
.map(|res| res != Ordering::Less);
cx: &mut MatchCheckCtxt<'a, 'tcx>,
r: &[&'p Pattern<'tcx>],
constructor: &Constructor<'tcx>,
- wild_patterns: &[&'p Pattern<'tcx>])
- -> Option<Vec<&'p Pattern<'tcx>>>
-{
+ wild_patterns: &[&'p Pattern<'tcx>],
+) -> Option<Vec<&'p Pattern<'tcx>>> {
let pat = &r[0];
let head: Option<Vec<&Pattern>> = match *pat.kind {
PatternKind::Binding { .. } | PatternKind::Wild => {
Some(wild_patterns.to_owned())
- },
+ }
PatternKind::Variant { adt_def, variant_index, ref subpatterns, .. } => {
let ref variant = adt_def.variants[variant_index];
PatternKind::Leaf { ref subpatterns } => {
Some(patterns_for_variant(subpatterns, wild_patterns))
}
+
PatternKind::Deref { ref subpattern } => {
Some(vec![subpattern])
}
span_bug!(pat.span,
"unexpected const-val {:?} with ctor {:?}", value, constructor)
}
- },
+ }
_ => {
- match constructor_covered_by_range(
- cx.tcx,
- constructor, value, value, RangeEnd::Included,
- value.ty,
- ) {
- Ok(true) => Some(vec![]),
- Ok(false) => None,
- Err(ErrorReported) => None,
- }
+ // If the constructor is a:
+ // Single value: add a row if the constructor equals the pattern.
+ // Range: add a row if the constructor contains the pattern.
+ constructor_intersects_pattern(cx.tcx, constructor, pat)
}
}
}
- PatternKind::Range { lo, hi, ref end } => {
- match constructor_covered_by_range(
- cx.tcx,
- constructor, lo, hi, end.clone(), lo.ty,
- ) {
- Ok(true) => Some(vec![]),
- Ok(false) => None,
- Err(ErrorReported) => None,
- }
+ PatternKind::Range { .. } => {
+ // If the constructor is a:
+ // Single value: add a row if the pattern contains the constructor.
+ // Range: add a row if the constructor intersects the pattern.
+ constructor_intersects_pattern(cx.tcx, constructor, pat)
}
PatternKind::Array { ref prefix, ref slice, ref suffix } |
let pat_len = prefix.len() + suffix.len();
if let Some(slice_count) = wild_patterns.len().checked_sub(pat_len) {
if slice_count == 0 || slice.is_some() {
- Some(
- prefix.iter().chain(
- wild_patterns.iter().map(|p| *p)
- .skip(prefix.len())
- .take(slice_count)
- .chain(
- suffix.iter()
- )).collect())
+ Some(prefix.iter().chain(
+ wild_patterns.iter().map(|p| *p)
+ .skip(prefix.len())
+ .take(slice_count)
+ .chain(suffix.iter())
+ ).collect())
} else {
None
}
self.tables);
let pattern = patcx.lower_pattern(pat);
let pattern_ty = pattern.ty;
- let pats : Matrix = vec![vec![
+ let pats: Matrix = vec![vec![
expand_pattern(cx, pattern)
]].into_iter().collect();
printed_if_let_err = true;
}
}
- },
+ }
hir::MatchSource::WhileLetDesugar => {
// check which arm we're on.
pub use self::check_match::check_crate;
pub(crate) use self::check_match::check_match;
-use interpret::{const_val_field, const_variant_index, self};
+use interpret::{const_field, const_variant_index};
use rustc::mir::{fmt_const_val, Field, BorrowKind, Mutability};
-use rustc::mir::interpret::{Scalar, GlobalId, ConstValue};
+use rustc::mir::interpret::{Scalar, GlobalId, ConstValue, sign_extend};
use rustc::ty::{self, TyCtxt, AdtDef, Ty, Region};
use rustc::ty::subst::{Substs, Kind};
use rustc::hir::{self, PatKind, RangeEnd};
PatternKind::Range { lo, hi, end } => {
fmt_const_val(f, lo)?;
match end {
- RangeEnd::Included => write!(f, "...")?,
+ RangeEnd::Included => write!(f, "..=")?,
RangeEnd::Excluded => write!(f, "..")?,
}
fmt_const_val(f, hi)
"lower range bound must be less than upper",
);
PatternKind::Wild
- },
- (RangeEnd::Included, None) |
- (RangeEnd::Included, Some(Ordering::Greater)) => {
+ }
+ (RangeEnd::Included, Some(Ordering::Equal)) => {
+ PatternKind::Constant { value: lo }
+ }
+ (RangeEnd::Included, Some(Ordering::Less)) => {
+ PatternKind::Range { lo, hi, end }
+ }
+ (RangeEnd::Included, _) => {
let mut err = struct_span_err!(
self.tcx.sess,
lo_expr.span,
}
err.emit();
PatternKind::Wild
- },
- (RangeEnd::Included, Some(_)) => PatternKind::Range { lo, hi, end },
+ }
}
}
_ => PatternKind::Wild
debug!("const_to_pat: cv={:#?}", cv);
let adt_subpattern = |i, variant_opt| {
let field = Field::new(i);
- let val = const_val_field(
+ let val = const_field(
self.tcx, self.param_env, instance,
variant_opt, field, cv,
).expect("field access failed");
},
ty::TyInt(_) => {
let layout = tcx.layout_of(ty).ok()?;
- let a = interpret::sign_extend(a, layout);
- let b = interpret::sign_extend(b, layout);
+ assert!(layout.abi.is_signed());
+ let a = sign_extend(a, layout.size);
+ let b = sign_extend(b, layout.size);
Some((a as i128).cmp(&(b as i128)))
},
_ => Some(a.cmp(&b)),
len_b,
),
) if ptr_a.offset.bytes() == 0 && ptr_b.offset.bytes() == 0 => {
- let len_a = len_a.unwrap_or_err().ok();
- let len_b = len_b.unwrap_or_err().ok();
+ let len_a = len_a.not_undef().ok();
+ let len_b = len_b.not_undef().ok();
if len_a.is_none() || len_b.is_none() {
tcx.sess.struct_err("str slice len is undef").delay_as_bug();
}
LitKind::Str(ref s, _) => {
let s = s.as_str();
let id = tcx.allocate_bytes(s.as_bytes());
- let value = Scalar::Ptr(id.into()).to_value_with_len(s.len() as u64, tcx);
- ConstValue::from_byval_value(value).unwrap()
+ ConstValue::new_slice(Scalar::Ptr(id.into()), s.len() as u64, tcx)
},
LitKind::ByteStr(ref data) => {
let id = tcx.allocate_bytes(data);
-use rustc::ty::{self, Ty};
-use rustc::ty::layout::{self, LayoutOf, TyLayout};
+use rustc::ty::{self, Ty, TypeAndMut};
+use rustc::ty::layout::{self, TyLayout, Size};
use syntax::ast::{FloatTy, IntTy, UintTy};
use rustc_apfloat::ieee::{Single, Double};
-use super::{EvalContext, Machine};
-use rustc::mir::interpret::{Scalar, EvalResult, Pointer, PointerArithmetic, Value, EvalErrorKind};
+use rustc::mir::interpret::{
+ Scalar, EvalResult, Pointer, PointerArithmetic, EvalErrorKind,
+ truncate, sign_extend
+};
use rustc::mir::CastKind;
use rustc_apfloat::Float;
-use interpret::eval_context::ValTy;
-use interpret::Place;
+
+use super::{EvalContext, Machine, PlaceTy, OpTy, Value};
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
+ fn type_is_fat_ptr(&self, ty: Ty<'tcx>) -> bool {
+ match ty.sty {
+ ty::TyRawPtr(ty::TypeAndMut { ty, .. }) |
+ ty::TyRef(_, ty, _) => !self.type_is_sized(ty),
+ ty::TyAdt(def, _) if def.is_box() => !self.type_is_sized(ty.boxed_ty()),
+ _ => false,
+ }
+ }
+
crate fn cast(
&mut self,
- src: ValTy<'tcx>,
+ src: OpTy<'tcx>,
kind: CastKind,
- dest_ty: Ty<'tcx>,
- dest: Place,
+ dest: PlaceTy<'tcx>,
) -> EvalResult<'tcx> {
- let src_layout = self.layout_of(src.ty)?;
- let dst_layout = self.layout_of(dest_ty)?;
+ let src_layout = src.layout;
+ let dst_layout = dest.layout;
use rustc::mir::CastKind::*;
match kind {
Unsize => {
- self.unsize_into(src.value, src_layout, dest, dst_layout)?;
+ self.unsize_into(src, dest)?;
}
Misc => {
- if self.type_is_fat_ptr(src.ty) {
- match (src.value, self.type_is_fat_ptr(dest_ty)) {
- (Value::ByRef { .. }, _) |
+ let src = self.read_value(src)?;
+ if self.type_is_fat_ptr(src_layout.ty) {
+ match (src.value, self.type_is_fat_ptr(dest.layout.ty)) {
// pointers to extern types
(Value::Scalar(_),_) |
// slices and trait objects to other slices/trait objects
(Value::ScalarPair(..), true) => {
- let valty = ValTy {
- value: src.value,
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
+ // No change to value
+ self.write_value(src.value, dest)?;
}
// slices and trait objects to thin pointers (dropping the metadata)
(Value::ScalarPair(data, _), false) => {
- let valty = ValTy {
- value: Value::Scalar(data),
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
+ self.write_scalar(data, dest)?;
}
}
} else {
- let src_layout = self.layout_of(src.ty)?;
match src_layout.variants {
layout::Variants::Single { index } => {
- if let Some(def) = src.ty.ty_adt_def() {
+ if let Some(def) = src_layout.ty.ty_adt_def() {
let discr_val = def
.discriminant_for_variant(*self.tcx, index)
.val;
return self.write_scalar(
- dest,
Scalar::Bits {
bits: discr_val,
size: dst_layout.size.bytes() as u8,
},
- dest_ty);
+ dest);
}
}
layout::Variants::Tagged { .. } |
layout::Variants::NicheFilling { .. } => {},
}
- let src_val = self.value_to_scalar(src)?;
- let dest_val = self.cast_scalar(src_val, src_layout, dst_layout)?;
- let valty = ValTy {
- value: Value::Scalar(dest_val.into()),
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
+ let src = src.to_scalar()?;
+ let dest_val = self.cast_scalar(src, src_layout, dest.layout)?;
+ self.write_scalar(dest_val, dest)?;
}
}
ReifyFnPointer => {
- match src.ty.sty {
+ // The src operand does not matter, just its type
+ match src_layout.ty.sty {
ty::TyFnDef(def_id, substs) => {
if self.tcx.has_attr(def_id, "rustc_args_required_const") {
bug!("reifying a fn ptr that requires \
substs,
).ok_or_else(|| EvalErrorKind::TooGeneric.into());
let fn_ptr = self.memory.create_fn_alloc(instance?);
- let valty = ValTy {
- value: Value::Scalar(Scalar::Ptr(fn_ptr.into()).into()),
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
+ self.write_scalar(Scalar::Ptr(fn_ptr.into()), dest)?;
}
ref other => bug!("reify fn pointer on {:?}", other),
}
}
UnsafeFnPointer => {
- match dest_ty.sty {
+ let src = self.read_value(src)?;
+ match dest.layout.ty.sty {
ty::TyFnPtr(_) => {
- let mut src = src;
- src.ty = dest_ty;
- self.write_value(src, dest)?;
+ // No change to value
+ self.write_value(*src, dest)?;
}
ref other => bug!("fn to unsafe fn cast on {:?}", other),
}
}
ClosureFnPointer => {
- match src.ty.sty {
+ // The src operand does not matter, just its type
+ match src_layout.ty.sty {
ty::TyClosure(def_id, substs) => {
let substs = self.tcx.subst_and_normalize_erasing_regions(
self.substs(),
ty::ClosureKind::FnOnce,
);
let fn_ptr = self.memory.create_fn_alloc(instance);
- let valty = ValTy {
- value: Value::Scalar(Scalar::Ptr(fn_ptr.into()).into()),
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
+ let val = Value::Scalar(Scalar::Ptr(fn_ptr.into()).into());
+ self.write_value(val, dest)?;
}
ref other => bug!("closure fn pointer on {:?}", other),
}
match val {
Scalar::Ptr(ptr) => self.cast_from_ptr(ptr, dest_layout.ty),
Scalar::Bits { bits, size } => {
- assert_eq!(size as u64, src_layout.size.bytes());
- match src_layout.ty.sty {
- TyFloat(fty) => self.cast_from_float(bits, fty, dest_layout.ty),
- _ => self.cast_from_int(bits, src_layout, dest_layout),
+ debug_assert_eq!(size as u64, src_layout.size.bytes());
+ debug_assert_eq!(truncate(bits, Size::from_bytes(size.into())), bits,
+ "Unexpected value of size {} before casting", size);
+
+ let res = match src_layout.ty.sty {
+ TyFloat(fty) => self.cast_from_float(bits, fty, dest_layout.ty)?,
+ _ => self.cast_from_int(bits, src_layout, dest_layout)?,
+ };
+
+ // Sanity check
+ match res {
+ Scalar::Ptr(_) => bug!("Fabricated a ptr value from an int...?"),
+ Scalar::Bits { bits, size } => {
+ debug_assert_eq!(size as u64, dest_layout.size.bytes());
+ debug_assert_eq!(truncate(bits, Size::from_bytes(size.into())), bits,
+ "Unexpected value of size {} after casting", size);
+ }
}
+ // Done
+ Ok(res)
}
}
}
// float -> uint
TyUint(t) => {
let width = t.bit_width().unwrap_or(self.memory.pointer_size().bits() as usize);
- match fty {
- FloatTy::F32 => Ok(Scalar::Bits {
- bits: Single::from_bits(bits).to_u128(width).value,
- size: (width / 8) as u8,
- }),
- FloatTy::F64 => Ok(Scalar::Bits {
- bits: Double::from_bits(bits).to_u128(width).value,
- size: (width / 8) as u8,
- }),
- }
+ let v = match fty {
+ FloatTy::F32 => Single::from_bits(bits).to_u128(width).value,
+ FloatTy::F64 => Double::from_bits(bits).to_u128(width).value,
+ };
+ // This should already fit the bit width
+ Ok(Scalar::Bits {
+ bits: v,
+ size: (width / 8) as u8,
+ })
},
// float -> int
TyInt(t) => {
let width = t.bit_width().unwrap_or(self.memory.pointer_size().bits() as usize);
- match fty {
- FloatTy::F32 => Ok(Scalar::Bits {
- bits: Single::from_bits(bits).to_i128(width).value as u128,
- size: (width / 8) as u8,
- }),
- FloatTy::F64 => Ok(Scalar::Bits {
- bits: Double::from_bits(bits).to_i128(width).value as u128,
- size: (width / 8) as u8,
- }),
- }
+ let v = match fty {
+ FloatTy::F32 => Single::from_bits(bits).to_i128(width).value,
+ FloatTy::F64 => Double::from_bits(bits).to_i128(width).value,
+ };
+ // We got an i128, but we may need something smaller. We have to truncate ourselves.
+ let truncated = truncate(v as u128, Size::from_bits(width as u64));
+ assert_eq!(sign_extend(truncated, Size::from_bits(width as u64)) as i128, v,
+ "truncating and extending changed the value?!?");
+ Ok(Scalar::Bits {
+ bits: truncated,
+ size: (width / 8) as u8,
+ })
},
// f64 -> f32
TyFloat(FloatTy::F32) if fty == FloatTy::F64 => {
_ => err!(Unimplemented(format!("ptr to {:?} cast", ty))),
}
}
+
+ fn unsize_into_ptr(
+ &mut self,
+ src: OpTy<'tcx>,
+ dest: PlaceTy<'tcx>,
+ // The pointee types
+ sty: Ty<'tcx>,
+ dty: Ty<'tcx>,
+ ) -> EvalResult<'tcx> {
+ // A<Struct> -> A<Trait> conversion
+ let (src_pointee_ty, dest_pointee_ty) = self.tcx.struct_lockstep_tails(sty, dty);
+
+ match (&src_pointee_ty.sty, &dest_pointee_ty.sty) {
+ (&ty::TyArray(_, length), &ty::TySlice(_)) => {
+ let ptr = self.read_value(src)?.to_scalar_ptr()?;
+ // u64 cast is from usize to u64, which is always good
+ let val = Value::new_slice(ptr, length.unwrap_usize(self.tcx.tcx), self.tcx.tcx);
+ self.write_value(val, dest)
+ }
+ (&ty::TyDynamic(..), &ty::TyDynamic(..)) => {
+ // For now, upcasts are limited to changes in marker
+ // traits, and hence never require an actual change
+ // to the vtable.
+ self.copy_op(src, dest)
+ }
+ (_, &ty::TyDynamic(ref data, _)) => {
+ // Initial cast from sized to dyn trait
+ let trait_ref = data.principal().unwrap().with_self_ty(
+ *self.tcx,
+ src_pointee_ty,
+ );
+ let trait_ref = self.tcx.erase_regions(&trait_ref);
+ let vtable = self.get_vtable(src_pointee_ty, trait_ref)?;
+ let ptr = self.read_value(src)?.to_scalar_ptr()?;
+ let val = Value::new_dyn_trait(ptr, vtable);
+ self.write_value(val, dest)
+ }
+
+ _ => bug!("invalid unsizing {:?} -> {:?}", src.layout.ty, dest.layout.ty),
+ }
+ }
+
+ fn unsize_into(
+ &mut self,
+ src: OpTy<'tcx>,
+ dest: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ match (&src.layout.ty.sty, &dest.layout.ty.sty) {
+ (&ty::TyRef(_, s, _), &ty::TyRef(_, d, _)) |
+ (&ty::TyRef(_, s, _), &ty::TyRawPtr(TypeAndMut { ty: d, .. })) |
+ (&ty::TyRawPtr(TypeAndMut { ty: s, .. }),
+ &ty::TyRawPtr(TypeAndMut { ty: d, .. })) => {
+ self.unsize_into_ptr(src, dest, s, d)
+ }
+ (&ty::TyAdt(def_a, _), &ty::TyAdt(def_b, _)) => {
+ assert_eq!(def_a, def_b);
+ if def_a.is_box() || def_b.is_box() {
+ if !def_a.is_box() || !def_b.is_box() {
+ bug!("invalid unsizing between {:?} -> {:?}", src.layout, dest.layout);
+ }
+ return self.unsize_into_ptr(
+ src,
+ dest,
+ src.layout.ty.boxed_ty(),
+ dest.layout.ty.boxed_ty(),
+ );
+ }
+
+ // unsizing of generic struct with pointer fields
+ // Example: `Arc<T>` -> `Arc<Trait>`
+ // here we need to increase the size of every &T thin ptr field to a fat ptr
+ for i in 0..src.layout.fields.count() {
+ let dst_field = self.place_field(dest, i as u64)?;
+ if dst_field.layout.is_zst() {
+ continue;
+ }
+ let src_field = match src.try_as_mplace() {
+ Ok(mplace) => {
+ let src_field = self.mplace_field(mplace, i as u64)?;
+ src_field.into()
+ }
+ Err(..) => {
+ let src_field_layout = src.layout.field(&self, i)?;
+ // this must be a field covering the entire thing
+ assert_eq!(src.layout.fields.offset(i).bytes(), 0);
+ assert_eq!(src_field_layout.size, src.layout.size);
+ // just swap out the layout
+ OpTy { op: src.op, layout: src_field_layout }
+ }
+ };
+ if src_field.layout.ty == dst_field.layout.ty {
+ self.copy_op(src_field, dst_field)?;
+ } else {
+ self.unsize_into(src_field, dst_field)?;
+ }
+ }
+ Ok(())
+ }
+ _ => {
+ bug!(
+ "unsize_into: invalid conversion: {:?} -> {:?}",
+ src.layout,
+ dest.layout
+ )
+ }
+ }
+ }
}
use std::error::Error;
use rustc::hir;
-use rustc::mir::interpret::{ConstEvalErr, ScalarMaybeUndef};
+use rustc::mir::interpret::ConstEvalErr;
use rustc::mir;
-use rustc::ty::{self, TyCtxt, Ty, Instance};
-use rustc::ty::layout::{self, LayoutOf, Primitive, TyLayout};
+use rustc::ty::{self, TyCtxt, Instance};
+use rustc::ty::layout::{LayoutOf, Primitive, TyLayout};
use rustc::ty::subst::Subst;
-use rustc_data_structures::indexed_vec::IndexVec;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
use syntax::ast::Mutability;
use syntax::source_map::Span;
use rustc::mir::interpret::{
EvalResult, EvalError, EvalErrorKind, GlobalId,
- Value, Scalar, AllocId, Allocation, ConstValue,
+ Scalar, AllocId, Allocation, ConstValue,
+};
+use super::{
+ Place, PlaceExtra, PlaceTy, MemPlace, OpTy, Operand, Value,
+ EvalContext, StackPopCleanup, Memory, MemoryKind
};
-use super::{Place, EvalContext, StackPopCleanup, ValTy, Memory, MemoryKind};
pub fn mk_borrowck_eval_cx<'a, 'mir, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
instance,
span,
mir,
- return_place: Place::undef(),
+ return_place: Place::null(tcx),
return_to_block: StackPopCleanup::None,
stmt: 0,
});
instance,
mir.span,
mir,
- Place::undef(),
+ Place::null(tcx),
StackPopCleanup::None,
)?;
Ok(ecx)
cid: GlobalId<'tcx>,
mir: &'mir mir::Mir<'tcx>,
param_env: ty::ParamEnv<'tcx>,
-) -> EvalResult<'tcx, (Value, Scalar, TyLayout<'tcx>)> {
+) -> EvalResult<'tcx, OpTy<'tcx>> {
ecx.with_fresh_body(|ecx| {
eval_body_using_ecx(ecx, cid, Some(mir), param_env)
})
}
-pub fn value_to_const_value<'tcx>(
+pub fn op_to_const<'tcx>(
ecx: &EvalContext<'_, '_, 'tcx, CompileTimeEvaluator>,
- val: Value,
- layout: TyLayout<'tcx>,
+ op: OpTy<'tcx>,
+ normalize: bool,
) -> EvalResult<'tcx, &'tcx ty::Const<'tcx>> {
- match (val, &layout.abi) {
- (Value::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits { size: 0, ..})), _) if layout.is_zst() => {},
- (Value::ByRef(..), _) |
- (Value::Scalar(_), &layout::Abi::Scalar(_)) |
- (Value::ScalarPair(..), &layout::Abi::ScalarPair(..)) => {},
- _ => bug!("bad value/layout combo: {:#?}, {:#?}", val, layout),
- }
- let val = match val {
- Value::Scalar(val) => ConstValue::Scalar(val.unwrap_or_err()?),
- Value::ScalarPair(a, b) => ConstValue::ScalarPair(a.unwrap_or_err()?, b),
- Value::ByRef(ptr, align) => {
- let ptr = ptr.to_ptr().unwrap();
+ let normalized_op = if normalize {
+ ecx.try_read_value(op)?
+ } else {
+ match op.op {
+ Operand::Indirect(mplace) => Err(mplace),
+ Operand::Immediate(val) => Ok(val)
+ }
+ };
+ let val = match normalized_op {
+ Err(MemPlace { ptr, align, extra }) => {
+ // extract alloc-offset pair
+ assert_eq!(extra, PlaceExtra::None);
+ let ptr = ptr.to_ptr()?;
let alloc = ecx.memory.get(ptr.alloc_id)?;
assert!(alloc.align.abi() >= align.abi());
- assert!(alloc.bytes.len() as u64 - ptr.offset.bytes() >= layout.size.bytes());
+ assert!(alloc.bytes.len() as u64 - ptr.offset.bytes() >= op.layout.size.bytes());
let mut alloc = alloc.clone();
alloc.align = align;
let alloc = ecx.tcx.intern_const_alloc(alloc);
ConstValue::ByRef(alloc, ptr.offset)
- }
+ },
+ Ok(Value::Scalar(x)) =>
+ ConstValue::Scalar(x.not_undef()?),
+ Ok(Value::ScalarPair(a, b)) =>
+ ConstValue::ScalarPair(a.not_undef()?, b),
};
- Ok(ty::Const::from_const_value(ecx.tcx.tcx, val, layout.ty))
+ Ok(ty::Const::from_const_value(ecx.tcx.tcx, val, op.layout.ty))
+}
+pub fn const_to_op<'tcx>(
+ ecx: &mut EvalContext<'_, '_, 'tcx, CompileTimeEvaluator>,
+ cnst: &'tcx ty::Const<'tcx>,
+) -> EvalResult<'tcx, OpTy<'tcx>> {
+ let op = ecx.const_value_to_op(cnst.val)?;
+ Ok(OpTy { op, layout: ecx.layout_of(cnst.ty)? })
}
fn eval_body_and_ecx<'a, 'mir, 'tcx>(
cid: GlobalId<'tcx>,
mir: Option<&'mir mir::Mir<'tcx>>,
param_env: ty::ParamEnv<'tcx>,
-) -> (EvalResult<'tcx, (Value, Scalar, TyLayout<'tcx>)>, EvalContext<'a, 'mir, 'tcx, CompileTimeEvaluator>) {
+) -> (EvalResult<'tcx, OpTy<'tcx>>, EvalContext<'a, 'mir, 'tcx, CompileTimeEvaluator>) {
debug!("eval_body_and_ecx: {:?}, {:?}", cid, param_env);
// we start out with the best span we have
// and try improving it down the road when more information is available
(r, ecx)
}
+// Returns a pointer to where the result lives
fn eval_body_using_ecx<'a, 'mir, 'tcx>(
ecx: &mut EvalContext<'a, 'mir, 'tcx, CompileTimeEvaluator>,
cid: GlobalId<'tcx>,
mir: Option<&'mir mir::Mir<'tcx>>,
param_env: ty::ParamEnv<'tcx>,
-) -> EvalResult<'tcx, (Value, Scalar, TyLayout<'tcx>)> {
+) -> EvalResult<'tcx, OpTy<'tcx>> {
debug!("eval_body: {:?}, {:?}", cid, param_env);
let tcx = ecx.tcx.tcx;
let mut mir = match mir {
}
let layout = ecx.layout_of(mir.return_ty().subst(tcx, cid.instance.substs))?;
assert!(!layout.is_unsized());
- let ptr = ecx.memory.allocate(
- layout.size,
- layout.align,
- MemoryKind::Stack,
- )?;
+ let ret = ecx.allocate(layout, MemoryKind::Stack)?;
let internally_mutable = !layout.ty.is_freeze(tcx, param_env, mir.span);
let is_static = tcx.is_static(cid.instance.def_id());
let mutability = if is_static == Some(hir::Mutability::MutMutable) || internally_mutable {
cid.instance,
mir.span,
mir,
- Place::from_ptr(ptr, layout.align),
+ Place::Ptr(*ret),
cleanup,
)?;
+ // The main interpreter loop.
while ecx.step()? {}
- let ptr = ptr.into();
- // always try to read the value and report errors
- let value = match ecx.try_read_value(ptr, layout.align, layout.ty)? {
- Some(val) if is_static.is_none() && cid.promoted.is_none() => val,
- // point at the allocation
- _ => Value::ByRef(ptr, layout.align),
- };
- Ok((value, ptr, layout))
+
+ Ok(ret.into())
}
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
fn eval_fn_call<'a>(
ecx: &mut EvalContext<'a, 'mir, 'tcx, Self>,
instance: ty::Instance<'tcx>,
- destination: Option<(Place, mir::BasicBlock)>,
- args: &[ValTy<'tcx>],
+ destination: Option<(PlaceTy<'tcx>, mir::BasicBlock)>,
+ args: &[OpTy<'tcx>],
span: Span,
- sig: ty::FnSig<'tcx>,
) -> EvalResult<'tcx, bool> {
debug!("eval_fn_call: {:?}", instance);
if !ecx.tcx.is_const_fn(instance.def_id()) {
let def_id = instance.def_id();
+ // Some fn calls are actually BinOp intrinsics
let (op, oflo) = if let Some(op) = ecx.tcx.is_binop_lang_item(def_id) {
op
} else {
);
};
let (dest, bb) = destination.expect("128 lowerings can't diverge");
- let dest_ty = sig.output();
+ let l = ecx.read_value(args[0])?;
+ let r = ecx.read_value(args[1])?;
if oflo {
- ecx.intrinsic_with_overflow(op, args[0], args[1], dest, dest_ty)?;
+ ecx.binop_with_overflow(op, l, r, dest)?;
} else {
- ecx.intrinsic_overflowing(op, args[0], args[1], dest, dest_ty)?;
+ ecx.binop_ignore_overflow(op, l, r, dest)?;
}
ecx.goto_block(bb);
return Ok(true);
}
};
let (return_place, return_to_block) = match destination {
- Some((place, block)) => (place, StackPopCleanup::Goto(block)),
- None => (Place::undef(), StackPopCleanup::None),
+ Some((place, block)) => (*place, StackPopCleanup::Goto(block)),
+ None => (Place::null(&ecx), StackPopCleanup::None),
};
ecx.push_stack_frame(
fn call_intrinsic<'a>(
ecx: &mut EvalContext<'a, 'mir, 'tcx, Self>,
instance: ty::Instance<'tcx>,
- args: &[ValTy<'tcx>],
- dest: Place,
- dest_layout: layout::TyLayout<'tcx>,
+ args: &[OpTy<'tcx>],
+ dest: PlaceTy<'tcx>,
target: mir::BasicBlock,
) -> EvalResult<'tcx> {
let substs = instance.substs;
let elem_align = ecx.layout_of(elem_ty)?.align.abi();
let align_val = Scalar::Bits {
bits: elem_align as u128,
- size: dest_layout.size.bytes() as u8,
+ size: dest.layout.size.bytes() as u8,
};
- ecx.write_scalar(dest, align_val, dest_layout.ty)?;
+ ecx.write_scalar(align_val, dest)?;
}
"size_of" => {
let size = ecx.layout_of(ty)?.size.bytes() as u128;
let size_val = Scalar::Bits {
bits: size,
- size: dest_layout.size.bytes() as u8,
+ size: dest.layout.size.bytes() as u8,
};
- ecx.write_scalar(dest, size_val, dest_layout.ty)?;
+ ecx.write_scalar(size_val, dest)?;
}
"type_id" => {
let type_id = ecx.tcx.type_id_hash(ty) as u128;
let id_val = Scalar::Bits {
bits: type_id,
- size: dest_layout.size.bytes() as u8,
+ size: dest.layout.size.bytes() as u8,
};
- ecx.write_scalar(dest, id_val, dest_layout.ty)?;
+ ecx.write_scalar(id_val, dest)?;
}
"ctpop" | "cttz" | "cttz_nonzero" | "ctlz" | "ctlz_nonzero" | "bswap" => {
let ty = substs.type_at(0);
let layout_of = ecx.layout_of(ty)?;
- let bits = ecx.value_to_scalar(args[0])?.to_bits(layout_of.size)?;
+ let bits = ecx.read_scalar(args[0])?.to_bits(layout_of.size)?;
let kind = match layout_of.abi {
ty::layout::Abi::Scalar(ref scalar) => scalar.value,
_ => Err(::rustc::mir::interpret::EvalErrorKind::TypeNotPrimitive(ty))?,
} else {
numeric_intrinsic(intrinsic_name, bits, kind)?
};
- ecx.write_scalar(dest, out_val, ty)?;
+ ecx.write_scalar(out_val, dest)?;
}
name => return Err(
_ecx: &EvalContext<'a, 'mir, 'tcx, Self>,
_bin_op: mir::BinOp,
left: Scalar,
- _left_ty: Ty<'tcx>,
+ _left_layout: TyLayout<'tcx>,
right: Scalar,
- _right_ty: Ty<'tcx>,
+ _right_layout: TyLayout<'tcx>,
) -> EvalResult<'tcx, Option<(Scalar, bool)>> {
if left.is_bits() && right.is_bits() {
Ok(None)
fn box_alloc<'a>(
_ecx: &mut EvalContext<'a, 'mir, 'tcx, Self>,
- _ty: Ty<'tcx>,
- _dest: Place,
+ _dest: PlaceTy<'tcx>,
) -> EvalResult<'tcx> {
Err(
ConstEvalError::NeedsRfc("heap allocations via `box` keyword".to_string()).into(),
}
}
-pub fn const_val_field<'a, 'tcx>(
+/// Project to a field of a (variant of a) const
+pub fn const_field<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
param_env: ty::ParamEnv<'tcx>,
instance: ty::Instance<'tcx>,
field: mir::Field,
value: &'tcx ty::Const<'tcx>,
) -> ::rustc::mir::interpret::ConstEvalResult<'tcx> {
- trace!("const_val_field: {:?}, {:?}, {:?}", instance, field, value);
+ trace!("const_field: {:?}, {:?}, {:?}", instance, field, value);
let mut ecx = mk_eval_cx(tcx, instance, param_env).unwrap();
let result = (|| {
- let ty = value.ty;
- let value = ecx.const_to_value(value.val)?;
- let layout = ecx.layout_of(ty)?;
- let place = ecx.allocate_place_for_value(value, layout, variant)?;
- let (place, layout) = ecx.place_field(place, field, layout)?;
- let (ptr, align) = place.to_ptr_align();
- let mut new_value = Value::ByRef(ptr.unwrap_or_err()?, align);
- new_value = ecx.try_read_by_ref(new_value, layout.ty)?;
- use rustc_data_structures::indexed_vec::Idx;
- match (value, new_value) {
- (Value::Scalar(_), Value::ByRef(..)) |
- (Value::ScalarPair(..), Value::ByRef(..)) |
- (Value::Scalar(_), Value::ScalarPair(..)) => bug!(
- "field {} of {:?} yielded {:?}",
- field.index(),
- value,
- new_value,
- ),
- _ => {},
- }
- value_to_const_value(&ecx, new_value, layout)
+ // get the operand again
+ let op = const_to_op(&mut ecx, value)?;
+ // downcast
+ let down = match variant {
+ None => op,
+ Some(variant) => ecx.operand_downcast(op, variant)?
+ };
+ // then project
+ let field = ecx.operand_field(down, field.index() as u64)?;
+ // and finally move back to the const world, always normalizing because
+ // this is not called for statics.
+ op_to_const(&ecx, field, true)
})();
result.map_err(|err| {
let (trace, span) = ecx.generate_stacktrace(None);
) -> EvalResult<'tcx, usize> {
trace!("const_variant_index: {:?}, {:?}", instance, val);
let mut ecx = mk_eval_cx(tcx, instance, param_env).unwrap();
- let value = ecx.const_to_value(val.val)?;
- let layout = ecx.layout_of(val.ty)?;
- let (ptr, align) = match value {
- Value::ScalarPair(..) | Value::Scalar(_) => {
- let ptr = ecx.memory.allocate(layout.size, layout.align, MemoryKind::Stack)?.into();
- ecx.write_value_to_ptr(value, ptr, layout.align, val.ty)?;
- (ptr, layout.align)
- },
- Value::ByRef(ptr, align) => (ptr, align),
- };
- let place = Place::from_scalar_ptr(ptr.into(), align);
- ecx.read_discriminant_as_variant_index(place, layout)
+ let op = const_to_op(&mut ecx, val)?;
+ ecx.read_discriminant_as_variant_index(op)
}
-pub fn const_value_to_allocation_provider<'a, 'tcx>(
+pub fn const_to_allocation_provider<'a, 'tcx>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
val: &'tcx ty::Const<'tcx>,
) -> &'tcx Allocation {
ty::ParamEnv::reveal_all(),
CompileTimeEvaluator,
());
- let value = ecx.const_to_value(val.val)?;
- let layout = ecx.layout_of(val.ty)?;
- let ptr = ecx.memory.allocate(layout.size, layout.align, MemoryKind::Stack)?;
- ecx.write_value_to_ptr(value, ptr.into(), layout.align, val.ty)?;
- let alloc = ecx.memory.get(ptr.alloc_id)?;
+ let op = const_to_op(&mut ecx, val)?;
+ // Make a new allocation, copy things there
+ let ptr = ecx.allocate(op.layout, MemoryKind::Stack)?;
+ ecx.copy_op(op, ptr.into())?;
+ let alloc = ecx.memory.get(ptr.to_ptr()?.alloc_id)?;
Ok(tcx.intern_const_alloc(alloc.clone()))
};
result().expect("unable to convert ConstValue to Allocation")
};
let (res, ecx) = eval_body_and_ecx(tcx, cid, None, key.param_env);
- res.and_then(|(mut val, _, layout)| {
- if tcx.is_static(def_id).is_none() && cid.promoted.is_none() {
- val = ecx.try_read_by_ref(val, layout.ty)?;
+ res.and_then(|op| {
+ let normalize = tcx.is_static(def_id).is_none() && cid.promoted.is_none();
+ if !normalize {
+ // Sanity check: These must always be a MemPlace
+ match op.op {
+ Operand::Indirect(_) => { /* all is good */ },
+ Operand::Immediate(_) => bug!("const eval gave us an Immediate"),
+ }
}
- value_to_const_value(&ecx, val, layout)
+ op_to_const(&ecx, op, normalize)
}).map_err(|err| {
let (trace, span) = ecx.generate_stacktrace(None);
let err = ConstEvalErr {
use rustc::hir::def::Def;
use rustc::hir::map::definitions::DefPathData;
use rustc::mir;
-use rustc::ty::layout::{self, Size, Align, HasDataLayout, IntegerExt, LayoutOf, TyLayout, Primitive};
+use rustc::ty::layout::{
+ self, Size, Align, HasDataLayout, LayoutOf, TyLayout
+};
use rustc::ty::subst::{Subst, Substs};
-use rustc::ty::{self, Ty, TyCtxt, TypeAndMut};
+use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
use rustc::ty::query::TyCtxtAt;
use rustc_data_structures::fx::{FxHashSet, FxHasher};
-use rustc_data_structures::indexed_vec::{IndexVec, Idx};
+use rustc_data_structures::indexed_vec::IndexVec;
use rustc::mir::interpret::{
- GlobalId, Value, Scalar, FrameInfo, AllocType,
- EvalResult, EvalErrorKind, Pointer, ConstValue,
+ GlobalId, Scalar, FrameInfo,
+ EvalResult, EvalErrorKind,
ScalarMaybeUndef,
+ truncate, sign_extend,
};
use syntax::source_map::{self, Span};
use syntax::ast::Mutability;
-use super::{Place, PlaceExtra, Memory,
- HasMemory, MemoryKind,
- Machine};
-
-macro_rules! validation_failure{
- ($what:expr, $where:expr, $details:expr) => {{
- let where_ = if $where.is_empty() {
- String::new()
- } else {
- format!(" at {}", $where)
- };
- err!(ValidationFailure(format!(
- "encountered {}{}, but expected {}",
- $what, where_, $details,
- )))
- }};
- ($what:expr, $where:expr) => {{
- let where_ = if $where.is_empty() {
- String::new()
- } else {
- format!(" at {}", $where)
- };
- err!(ValidationFailure(format!(
- "encountered {}{}",
- $what, where_,
- )))
- }};
-}
+use super::{
+ Value, Operand, MemPlace, MPlaceTy, Place, PlaceExtra,
+ Memory, Machine
+};
pub struct EvalContext<'a, 'mir, 'tcx: 'a + 'mir, M: Machine<'mir, 'tcx>> {
/// Stores the `Machine` instance.
pub stmt: usize,
}
-#[derive(Copy, Clone, PartialEq, Eq, Hash)]
-pub enum LocalValue {
- Dead,
- Live(Value),
-}
-
-impl LocalValue {
- pub fn access(self) -> EvalResult<'static, Value> {
- match self {
- LocalValue::Dead => err!(DeadLocal),
- LocalValue::Live(val) => Ok(val),
- }
- }
-}
-
impl<'mir, 'tcx: 'mir> Eq for Frame<'mir, 'tcx> {}
impl<'mir, 'tcx: 'mir> PartialEq for Frame<'mir, 'tcx> {
}
}
+// State of a local variable
+#[derive(Copy, Clone, PartialEq, Eq, Hash)]
+pub enum LocalValue {
+ Dead,
+ // Mostly for convenience, we re-use the `Operand` type here.
+ // This is an optimization over just always having a pointer here;
+ // we can thus avoid doing an allocation when the local just stores
+ // immediate values *and* never has its address taken.
+ Live(Operand),
+}
+
+impl<'tcx> LocalValue {
+ pub fn access(&self) -> EvalResult<'tcx, &Operand> {
+ match self {
+ LocalValue::Dead => err!(DeadLocal),
+ LocalValue::Live(ref val) => Ok(val),
+ }
+ }
+
+ pub fn access_mut(&mut self) -> EvalResult<'tcx, &mut Operand> {
+ match self {
+ LocalValue::Dead => err!(DeadLocal),
+ LocalValue::Live(ref mut val) => Ok(val),
+ }
+ }
+}
+
/// The virtual machine state during const-evaluation at a given point in time.
type EvalSnapshot<'a, 'mir, 'tcx, M>
= (M, Vec<Frame<'mir, 'tcx>>, Memory<'a, 'mir, 'tcx, M>);
None,
}
-#[derive(Copy, Clone, Debug)]
-pub struct TyAndPacked<'tcx> {
- pub ty: Ty<'tcx>,
- pub packed: bool,
-}
-
-#[derive(Copy, Clone, Debug)]
-pub struct ValTy<'tcx> {
- pub value: Value,
- pub ty: Ty<'tcx>,
-}
-
-impl<'tcx> ::std::ops::Deref for ValTy<'tcx> {
- type Target = Value;
- fn deref(&self) -> &Value {
- &self.value
- }
-}
-
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> HasDataLayout for &'a EvalContext<'a, 'mir, 'tcx, M> {
#[inline]
fn data_layout(&self) -> &layout::TargetDataLayout {
type Ty = Ty<'tcx>;
type TyLayout = EvalResult<'tcx, TyLayout<'tcx>>;
+ #[inline]
fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
self.tcx.layout_of(self.param_env.and(ty))
.map_err(|layout| EvalErrorKind::Layout(layout).into())
r
}
- pub fn alloc_ptr(&mut self, layout: TyLayout<'tcx>) -> EvalResult<'tcx, Pointer> {
- assert!(!layout.is_unsized(), "cannot alloc memory for unsized type");
-
- self.memory.allocate(layout.size, layout.align, MemoryKind::Stack)
- }
-
pub fn memory(&self) -> &Memory<'a, 'mir, 'tcx, M> {
&self.memory
}
self.stack.len() - 1
}
- pub fn str_to_value(&mut self, s: &str) -> EvalResult<'tcx, Value> {
- let ptr = self.memory.allocate_bytes(s.as_bytes());
- Ok(Scalar::Ptr(ptr).to_value_with_len(s.len() as u64, self.tcx.tcx))
+ /// Mark a local as live, killing the previous content and returning it.
+ /// Remember to deallocate the returned old value!
+ pub fn storage_live(&mut self, local: mir::Local) -> EvalResult<'tcx, LocalValue> {
+ trace!("{:?} is now live", local);
+
+ let layout = self.layout_of_local(self.cur_frame(), local)?;
+ let init = LocalValue::Live(self.uninit_operand(layout)?);
+ // StorageLive *always* kills the value that's currently stored
+ Ok(mem::replace(&mut self.frame_mut().locals[local], init))
}
- pub fn const_to_value(
- &mut self,
- val: ConstValue<'tcx>,
- ) -> EvalResult<'tcx, Value> {
- match val {
- ConstValue::Unevaluated(def_id, substs) => {
- let instance = self.resolve(def_id, substs)?;
- self.read_global_as_value(GlobalId {
- instance,
- promoted: None,
- })
- }
- ConstValue::ByRef(alloc, offset) => {
- // FIXME: Allocate new AllocId for all constants inside
- let id = self.memory.allocate_value(alloc.clone(), MemoryKind::Stack)?;
- Ok(Value::ByRef(Pointer::new(id, offset).into(), alloc.align))
- },
- ConstValue::ScalarPair(a, b) => Ok(Value::ScalarPair(a.into(), b.into())),
- ConstValue::Scalar(val) => Ok(Value::Scalar(val.into())),
- }
+ /// Returns the old value of the local.
+ /// Remember to deallocate that!
+ pub fn storage_dead(&mut self, local: mir::Local) -> LocalValue {
+ trace!("{:?} is now dead", local);
+
+ mem::replace(&mut self.frame_mut().locals[local], LocalValue::Dead)
+ }
+
+ pub fn str_to_value(&mut self, s: &str) -> EvalResult<'tcx, Value> {
+ let ptr = self.memory.allocate_bytes(s.as_bytes());
+ Ok(Value::new_slice(Scalar::Ptr(ptr), s.len() as u64, self.tcx.tcx))
}
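Both `storage_live` and `storage_dead` lean on `mem::replace` to swap in the new state while handing the old value back to the caller for deallocation. A minimal sketch of that swap, with `Local` as a stand-in for `LocalValue`:

```rust
use std::mem;

#[derive(Debug, PartialEq)]
enum Local {
    Dead,
    Live(i64), // stand-in for the interpreter's `Operand`
}

// Mirrors `storage_dead`: swap `Dead` into the slot and hand the old
// value back so the caller can deallocate it.
fn storage_dead(slot: &mut Local) -> Local {
    mem::replace(slot, Local::Dead)
}

fn main() {
    let mut slot = Local::Live(5);
    let old = storage_dead(&mut slot);
    assert_eq!(old, Local::Live(5));
    assert_eq!(slot, Local::Dead);
    println!("ok");
}
```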
pub(super) fn resolve(&self, def_id: DefId, substs: &'tcx Substs<'tcx>) -> EvalResult<'tcx, ty::Instance<'tcx>> {
}
}
- pub fn monomorphize(&self, ty: Ty<'tcx>, substs: &'tcx Substs<'tcx>) -> Ty<'tcx> {
+ pub fn monomorphize<T: TypeFoldable<'tcx> + Subst<'tcx>>(
+ &self,
+ t: T,
+ substs: &'tcx Substs<'tcx>
+ ) -> T {
// miri doesn't care about lifetimes, and will choke on some crazy ones
// let's simply get rid of them
- let substituted = ty.subst(*self.tcx, substs);
+ let substituted = t.subst(*self.tcx, substs);
self.tcx.normalize_erasing_regions(ty::ParamEnv::reveal_all(), substituted)
}
- /// Return the size and aligment of the value at the given type.
+ pub fn layout_of_local(
+ &self,
+ frame: usize,
+ local: mir::Local
+ ) -> EvalResult<'tcx, TyLayout<'tcx>> {
+ let local_ty = self.stack[frame].mir.local_decls[local].ty;
+ let local_ty = self.monomorphize(
+ local_ty,
+ self.stack[frame].instance.substs
+ );
+ self.layout_of(local_ty)
+ }
+
+ /// Return the actual dynamic size and alignment of the place at the given type.
/// Note that the value does not matter if the type is sized. For unsized types,
/// the value has to be a fat pointer, and we only care about the "extra" data in it.
- pub fn size_and_align_of_dst(
+ pub fn size_and_align_of_mplace(
&self,
- ty: Ty<'tcx>,
- value: Value,
+ mplace: MPlaceTy<'tcx>,
) -> EvalResult<'tcx, (Size, Align)> {
- let layout = self.layout_of(ty)?;
- if !layout.is_unsized() {
- Ok(layout.size_and_align())
+ if let PlaceExtra::None = mplace.extra {
+ assert!(!mplace.layout.is_unsized());
+ Ok(mplace.layout.size_and_align())
} else {
- match ty.sty {
+ let layout = mplace.layout;
+ assert!(layout.is_unsized());
+ match layout.ty.sty {
ty::TyAdt(..) | ty::TyTuple(..) => {
// First get the size of all statically known fields.
// Don't use type_of::sizing_type_of because that expects t to be sized,
// and it also rounds up to alignment, which we want to avoid,
// as the unsized field's alignment could be smaller.
- assert!(!ty.is_simd());
- debug!("DST {} layout: {:?}", ty, layout);
+ assert!(!layout.ty.is_simd());
+ debug!("DST layout: {:?}", layout);
let sized_size = layout.fields.offset(layout.fields.count() - 1);
let sized_align = layout.align;
debug!(
"DST {} statically sized prefix size: {:?} align: {:?}",
- ty,
+ layout.ty,
sized_size,
sized_align
);
// Recurse to get the size of the dynamically sized field (must be
// the last field).
- let field_ty = layout.field(self, layout.fields.count() - 1)?.ty;
- let (unsized_size, unsized_align) =
- self.size_and_align_of_dst(field_ty, value)?;
+ let field = self.mplace_field(mplace, layout.fields.count() as u64 - 1)?;
+ let (unsized_size, unsized_align) = self.size_and_align_of_mplace(field)?;
// FIXME (#26403, #27023): We should be adding padding
// to `sized_size` (to accommodate the `unsized_align`
Ok((size.abi_align(align), align))
}
ty::TyDynamic(..) => {
- let (_, vtable) = self.into_ptr_vtable_pair(value)?;
+ let vtable = match mplace.extra {
+ PlaceExtra::Vtable(vtable) => vtable,
+ _ => bug!("Expected vtable"),
+ };
// the second entry in the vtable is the dynamic size of the object.
self.read_size_and_align_from_vtable(vtable)
}
ty::TySlice(_) | ty::TyStr => {
+ let len = match mplace.extra {
+ PlaceExtra::Length(len) => len,
+ _ => bug!("Expected length"),
+ };
let (elem_size, align) = layout.field(self, 0)?.size_and_align();
- let (_, len) = self.into_slice(value)?;
Ok((elem_size * len, align))
}
- _ => bug!("size_of_val::<{:?}>", ty),
+ _ => bug!("size_of_val::<{:?}> not supported", layout.ty),
}
}
}
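The behavior this function implements is observable from safe Rust through `size_of_val`/`align_of_val`: for sized types the answer is static, while for unsized values it comes from the fat pointer's "extra" data (a length for slices, a vtable for trait objects). A small demonstration:

```rust
use std::mem::{align_of_val, size_of_val};

fn main() {
    // For sized types the layout is static...
    assert_eq!(size_of_val(&1u32), 4);

    // ...but for unsized values the size lives in the fat pointer's
    // "extra" data: a length for slices/str, a vtable for trait objects.
    let slice: &[u32] = &[1, 2, 3];
    assert_eq!(size_of_val(slice), 12); // elem_size * len
    assert_eq!(align_of_val(slice), 4);

    let obj: &dyn std::fmt::Debug = &0u64;
    assert_eq!(size_of_val(obj), 8); // read from the vtable
    println!("ok");
}
```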
// don't allocate at all for trivial constants
if mir.local_decls.len() > 1 {
- let mut locals = IndexVec::from_elem(LocalValue::Dead, &mir.local_decls);
- for (local, decl) in locals.iter_mut().zip(mir.local_decls.iter()) {
- *local = LocalValue::Live(self.init_value(decl.ty)?);
- }
+ // We put some marker value into the locals that we later want to initialize.
+ // This can be anything except for LocalValue::Dead -- because *that* is the
+ // value we use for things that we know are initially dead.
+ let dummy =
+ LocalValue::Live(Operand::Immediate(Value::Scalar(ScalarMaybeUndef::Undef)));
+ let mut locals = IndexVec::from_elem(dummy, &mir.local_decls);
+ // Now mark those locals as dead that we do not want to initialize
match self.tcx.describe_def(instance.def_id()) {
// statics and constants don't have `Storage*` statements, no need to look for them
Some(Def::Static(..)) | Some(Def::Const(..)) | Some(Def::AssociatedConst(..)) => {},
use rustc::mir::StatementKind::{StorageDead, StorageLive};
match stmt.kind {
StorageLive(local) |
- StorageDead(local) => locals[local] = LocalValue::Dead,
+ StorageDead(local) => {
+ locals[local] = LocalValue::Dead;
+ }
_ => {}
}
}
}
},
}
+ // Finally, properly initialize all those that still have the dummy value
+ for (local, decl) in locals.iter_mut().zip(mir.local_decls.iter()) {
+ match *local {
+ LocalValue::Live(_) => {
+ // This needs to be properly initialized.
+ let layout = self.layout_of(self.monomorphize(decl.ty, instance.substs))?;
+ *local = LocalValue::Live(self.uninit_operand(layout)?);
+ }
+ LocalValue::Dead => {
+ // Nothing to do
+ }
+ }
+ }
+ // done
self.frame_mut().locals = locals;
}
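The three-pass shape of the locals setup above (fill with a dummy, mark `Storage*` locals dead, then finish initializing whatever still holds the dummy) can be sketched standalone; `Slot::Live(None)` plays the role of the dummy value, and `init_locals` is a hypothetical helper, not part of the interpreter:

```rust
#[derive(Clone, Debug, PartialEq)]
enum Slot {
    Dead,
    Live(Option<u32>), // `Live(None)` is the dummy marker
}

fn init_locals(n: usize, has_storage_stmt: &[usize]) -> Vec<Slot> {
    // Pass 1: fill every slot with the dummy marker.
    let mut slots = vec![Slot::Live(None); n];
    // Pass 2: locals with Storage* statements start out dead.
    for &i in has_storage_stmt {
        slots[i] = Slot::Dead;
    }
    // Pass 3: properly initialize whatever still holds the dummy.
    for (i, slot) in slots.iter_mut().enumerate() {
        if *slot == Slot::Live(None) {
            *slot = Slot::Live(Some(i as u32));
        }
    }
    slots
}

fn main() {
    let slots = init_locals(3, &[1]);
    assert_eq!(slots[0], Slot::Live(Some(0)));
    assert_eq!(slots[1], Slot::Dead);
    assert_eq!(slots[2], Slot::Live(Some(2)));
    println!("ok");
}
```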
- self.memory.cur_frame = self.cur_frame();
-
if self.stack.len() > self.stack_limit {
err!(StackFrameLimitReached)
} else {
let frame = self.stack.pop().expect(
"tried to pop a stack frame, but there were none",
);
- if !self.stack.is_empty() {
- // TODO: Is this the correct time to start considering these accesses as originating from the returned-to stack frame?
- self.memory.cur_frame = self.cur_frame();
- }
match frame.return_to_block {
StackPopCleanup::MarkStatic(mutable) => {
- if let Place::Ptr { ptr, .. } = frame.return_place {
+ if let Place::Ptr(MemPlace { ptr, .. }) = frame.return_place {
// FIXME: to_ptr()? might be too extreme here, static zsts might reach this under certain conditions
self.memory.mark_static_initialized(
- ptr.unwrap_or_err()?.to_ptr()?.alloc_id,
+ ptr.to_ptr()?.alloc_id,
mutable,
)?
} else {
Ok(())
}
- pub fn deallocate_local(&mut self, local: LocalValue) -> EvalResult<'tcx> {
+ crate fn deallocate_local(&mut self, local: LocalValue) -> EvalResult<'tcx> {
// FIXME: should we tell the user that there was a local which was never written to?
- if let LocalValue::Live(Value::ByRef(ptr, _align)) = local {
+ if let LocalValue::Live(Operand::Indirect(MemPlace { ptr, .. })) = local {
trace!("deallocating local");
let ptr = ptr.to_ptr()?;
self.memory.dump_alloc(ptr.alloc_id);
Ok(())
}
- /// Evaluate an assignment statement.
- ///
- /// There is no separate `eval_rvalue` function. Instead, the code for handling each rvalue
- /// type writes its results directly into the memory specified by the place.
- pub(super) fn eval_rvalue_into_place(
- &mut self,
- rvalue: &mir::Rvalue<'tcx>,
- place: &mir::Place<'tcx>,
- ) -> EvalResult<'tcx> {
- let dest = self.eval_place(place)?;
- let dest_ty = self.place_ty(place);
- let dest_layout = self.layout_of(dest_ty)?;
-
- use rustc::mir::Rvalue::*;
- match *rvalue {
- Use(ref operand) => {
- let value = self.eval_operand(operand)?.value;
- let valty = ValTy {
- value,
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
- }
-
- BinaryOp(bin_op, ref left, ref right) => {
- let left = self.eval_operand(left)?;
- let right = self.eval_operand(right)?;
- self.intrinsic_overflowing(
- bin_op,
- left,
- right,
- dest,
- dest_ty,
- )?;
- }
-
- CheckedBinaryOp(bin_op, ref left, ref right) => {
- let left = self.eval_operand(left)?;
- let right = self.eval_operand(right)?;
- self.intrinsic_with_overflow(
- bin_op,
- left,
- right,
- dest,
- dest_ty,
- )?;
- }
-
- UnaryOp(un_op, ref operand) => {
- let val = self.eval_operand_to_scalar(operand)?;
- let val = self.unary_op(un_op, val, dest_layout)?;
- self.write_scalar(
- dest,
- val,
- dest_ty,
- )?;
- }
-
- Aggregate(ref kind, ref operands) => {
- let (dest, active_field_index) = match **kind {
- mir::AggregateKind::Adt(adt_def, variant_index, _, active_field_index) => {
- self.write_discriminant_value(dest_ty, dest, variant_index)?;
- if adt_def.is_enum() {
- (self.place_downcast(dest, variant_index)?, active_field_index)
- } else {
- (dest, active_field_index)
- }
- }
- _ => (dest, None)
- };
-
- let layout = self.layout_of(dest_ty)?;
- for (i, operand) in operands.iter().enumerate() {
- let value = self.eval_operand(operand)?;
- // Ignore zero-sized fields.
- if !self.layout_of(value.ty)?.is_zst() {
- let field_index = active_field_index.unwrap_or(i);
- let (field_dest, _) = self.place_field(dest, mir::Field::new(field_index), layout)?;
- self.write_value(value, field_dest)?;
- }
- }
- }
-
- Repeat(ref operand, _) => {
- let (elem_ty, length) = match dest_ty.sty {
- ty::TyArray(elem_ty, n) => (elem_ty, n.unwrap_usize(self.tcx.tcx)),
- _ => {
- bug!(
- "tried to assign array-repeat to non-array type {:?}",
- dest_ty
- )
- }
- };
- let elem_size = self.layout_of(elem_ty)?.size;
- let value = self.eval_operand(operand)?.value;
-
- let (dest, dest_align) = self.force_allocation(dest)?.to_ptr_align();
-
- if length > 0 {
- let dest = dest.unwrap_or_err()?;
- //write the first value
- self.write_value_to_ptr(value, dest, dest_align, elem_ty)?;
-
- if length > 1 {
- let rest = dest.ptr_offset(elem_size * 1 as u64, &self)?;
- self.memory.copy_repeatedly(dest, dest_align, rest, dest_align, elem_size, length - 1, false)?;
- }
- }
- }
-
- Len(ref place) => {
- // FIXME(CTFE): don't allow computing the length of arrays in const eval
- let src = self.eval_place(place)?;
- let ty = self.place_ty(place);
- let (_, len) = src.elem_ty_and_len(ty, self.tcx.tcx);
- let size = self.memory.pointer_size().bytes() as u8;
- self.write_scalar(
- dest,
- Scalar::Bits {
- bits: len as u128,
- size,
- },
- dest_ty,
- )?;
- }
-
- Ref(_, _, ref place) => {
- let src = self.eval_place(place)?;
- // We ignore the alignment of the place here -- special handling for packed structs ends
- // at the `&` operator.
- let (ptr, _align, extra) = self.force_allocation(src)?.to_ptr_align_extra();
-
- let val = match extra {
- PlaceExtra::None => Value::Scalar(ptr),
- PlaceExtra::Length(len) => ptr.to_value_with_len(len, self.tcx.tcx),
- PlaceExtra::Vtable(vtable) => ptr.to_value_with_vtable(vtable),
- PlaceExtra::DowncastVariant(..) => {
- bug!("attempted to take a reference to an enum downcast place")
- }
- };
- let valty = ValTy {
- value: val,
- ty: dest_ty,
- };
- self.write_value(valty, dest)?;
- }
-
- NullaryOp(mir::NullOp::Box, ty) => {
- let ty = self.monomorphize(ty, self.substs());
- M::box_alloc(self, ty, dest)?;
- }
-
- NullaryOp(mir::NullOp::SizeOf, ty) => {
- let ty = self.monomorphize(ty, self.substs());
- let layout = self.layout_of(ty)?;
- assert!(!layout.is_unsized(),
- "SizeOf nullary MIR operator called for unsized type");
- let size = self.memory.pointer_size().bytes() as u8;
- self.write_scalar(
- dest,
- Scalar::Bits {
- bits: layout.size.bytes() as u128,
- size,
- },
- dest_ty,
- )?;
- }
-
- Cast(kind, ref operand, cast_ty) => {
- debug_assert_eq!(self.monomorphize(cast_ty, self.substs()), dest_ty);
- let src = self.eval_operand(operand)?;
- self.cast(src, kind, dest_ty, dest)?;
- }
-
- Discriminant(ref place) => {
- let ty = self.place_ty(place);
- let layout = self.layout_of(ty)?;
- let place = self.eval_place(place)?;
- let discr_val = self.read_discriminant_value(place, layout)?;
- let size = self.layout_of(dest_ty).unwrap().size.bytes() as u8;
- self.write_scalar(dest, Scalar::Bits {
- bits: discr_val,
- size,
- }, dest_ty)?;
- }
- }
-
- self.dump_local(dest);
-
- Ok(())
- }
-
- pub(super) fn type_is_fat_ptr(&self, ty: Ty<'tcx>) -> bool {
- match ty.sty {
- ty::TyRawPtr(ty::TypeAndMut { ty, .. }) |
- ty::TyRef(_, ty, _) => !self.type_is_sized(ty),
- ty::TyAdt(def, _) if def.is_box() => !self.type_is_sized(ty.boxed_ty()),
- _ => false,
- }
- }
-
- pub(super) fn eval_operand_to_scalar(
- &mut self,
- op: &mir::Operand<'tcx>,
- ) -> EvalResult<'tcx, Scalar> {
- let valty = self.eval_operand(op)?;
- self.value_to_scalar(valty)
- }
-
- pub(crate) fn operands_to_args(
- &mut self,
- ops: &[mir::Operand<'tcx>],
- ) -> EvalResult<'tcx, Vec<ValTy<'tcx>>> {
- ops.into_iter()
- .map(|op| self.eval_operand(op))
- .collect()
- }
-
- pub fn eval_operand(&mut self, op: &mir::Operand<'tcx>) -> EvalResult<'tcx, ValTy<'tcx>> {
- use rustc::mir::Operand::*;
- let ty = self.monomorphize(op.ty(self.mir(), *self.tcx), self.substs());
- match *op {
- // FIXME: do some more logic on `move` to invalidate the old location
- Copy(ref place) |
- Move(ref place) => {
- Ok(ValTy {
- value: self.eval_and_read_place(place)?,
- ty
- })
- },
-
- Constant(ref constant) => {
- let value = self.const_to_value(constant.literal.val)?;
-
- Ok(ValTy {
- value,
- ty,
- })
- }
- }
- }
-
- /// reads a tag and produces the corresponding variant index
- pub fn read_discriminant_as_variant_index(
- &self,
- place: Place,
- layout: TyLayout<'tcx>,
- ) -> EvalResult<'tcx, usize> {
- match layout.variants {
- ty::layout::Variants::Single { index } => Ok(index),
- ty::layout::Variants::Tagged { .. } => {
- let discr_val = self.read_discriminant_value(place, layout)?;
- layout
- .ty
- .ty_adt_def()
- .expect("tagged layout for non adt")
- .discriminants(self.tcx.tcx)
- .position(|var| var.val == discr_val)
- .ok_or_else(|| EvalErrorKind::InvalidDiscriminant.into())
- }
- ty::layout::Variants::NicheFilling { .. } => {
- let discr_val = self.read_discriminant_value(place, layout)?;
- assert_eq!(discr_val as usize as u128, discr_val);
- Ok(discr_val as usize)
- },
- }
- }
-
- pub fn read_discriminant_value(
- &self,
- place: Place,
- layout: TyLayout<'tcx>,
- ) -> EvalResult<'tcx, u128> {
- trace!("read_discriminant_value {:#?}", layout);
- if layout.abi == layout::Abi::Uninhabited {
- return Ok(0);
- }
-
- match layout.variants {
- layout::Variants::Single { index } => {
- let discr_val = layout.ty.ty_adt_def().map_or(
- index as u128,
- |def| def.discriminant_for_variant(*self.tcx, index).val);
- return Ok(discr_val);
- }
- layout::Variants::Tagged { .. } |
- layout::Variants::NicheFilling { .. } => {},
- }
- let discr_place_val = self.read_place(place)?;
- let (discr_val, discr) = self.read_field(discr_place_val, None, mir::Field::new(0), layout)?;
- trace!("discr value: {:?}, {:?}", discr_val, discr);
- let raw_discr = self.value_to_scalar(ValTy {
- value: discr_val,
- ty: discr.ty
- })?;
- let discr_val = match layout.variants {
- layout::Variants::Single { .. } => bug!(),
- // FIXME: should we catch invalid discriminants here?
- layout::Variants::Tagged { .. } => {
- if discr.ty.is_signed() {
- let i = raw_discr.to_bits(discr.size)? as i128;
- // going from layout tag type to typeck discriminant type
- // requires first sign extending with the layout discriminant
- let shift = 128 - discr.size.bits();
- let sexted = (i << shift) >> shift;
- // and then zeroing with the typeck discriminant type
- let discr_ty = layout
- .ty
- .ty_adt_def().expect("tagged layout corresponds to adt")
- .repr
- .discr_type();
- let discr_ty = layout::Integer::from_attr(self.tcx.tcx, discr_ty);
- let shift = 128 - discr_ty.size().bits();
- let truncatee = sexted as u128;
- (truncatee << shift) >> shift
- } else {
- raw_discr.to_bits(discr.size)?
- }
- },
- layout::Variants::NicheFilling {
- dataful_variant,
- ref niche_variants,
- niche_start,
- ..
- } => {
- let variants_start = *niche_variants.start() as u128;
- let variants_end = *niche_variants.end() as u128;
- match raw_discr {
- Scalar::Ptr(_) => {
- assert!(niche_start == 0);
- assert!(variants_start == variants_end);
- dataful_variant as u128
- },
- Scalar::Bits { bits: raw_discr, size } => {
- assert_eq!(size as u64, discr.size.bytes());
- let discr = raw_discr.wrapping_sub(niche_start)
- .wrapping_add(variants_start);
- if variants_start <= discr && discr <= variants_end {
- discr
- } else {
- dataful_variant as u128
- }
- },
- }
- }
- };
-
- Ok(discr_val)
- }
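For readers unfamiliar with the `NicheFilling` branch being removed here, the decode arithmetic can be sketched in isolation. The helper below is illustrative only (parameter names are assumptions, not the interpreter's API), and the `Option<&u8>` assertion shows niche-filling as it is observable from stable Rust:

```rust
use std::mem::size_of;

// Sketch of the niche decode step: map the raw tag into the niche
// variant range, falling back to the dataful variant otherwise.
fn decode_niche(raw: u128, niche_start: u128, start: u128, end: u128, dataful: u128) -> u128 {
    let v = raw.wrapping_sub(niche_start).wrapping_add(start);
    if start <= v && v <= end { v } else { dataful }
}

fn main() {
    // Niche-filling in practice: `None` reuses the reference's invalid
    // value 0, so `Option<&u8>` needs no separate tag byte.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // niche_start = 1, niche variants 1..=2, dataful variant 0:
    assert_eq!(decode_niche(1, 1, 1, 2, 0), 1); // lands in the niche range
    assert_eq!(decode_niche(0, 1, 1, 2, 0), 0); // wraps out -> dataful
    println!("ok");
}
```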
-
-
- pub fn write_discriminant_value(
- &mut self,
- dest_ty: Ty<'tcx>,
- dest: Place,
- variant_index: usize,
- ) -> EvalResult<'tcx> {
- let layout = self.layout_of(dest_ty)?;
-
- match layout.variants {
- layout::Variants::Single { index } => {
- if index != variant_index {
- // If the layout of an enum is `Single`, all
- // other variants are necessarily uninhabited.
- assert_eq!(layout.for_variant(&self, variant_index).abi,
- layout::Abi::Uninhabited);
- }
- }
- layout::Variants::Tagged { ref tag, .. } => {
- let discr_val = dest_ty.ty_adt_def().unwrap()
- .discriminant_for_variant(*self.tcx, variant_index)
- .val;
-
- // raw discriminants for enums are isize or bigger during
- // their computation, but the in-memory tag is the smallest possible
- // representation
- let size = tag.value.size(self.tcx.tcx);
- let shift = 128 - size.bits();
- let discr_val = (discr_val << shift) >> shift;
-
- let (discr_dest, tag) = self.place_field(dest, mir::Field::new(0), layout)?;
- self.write_scalar(discr_dest, Scalar::Bits {
- bits: discr_val,
- size: size.bytes() as u8,
- }, tag.ty)?;
- }
- layout::Variants::NicheFilling {
- dataful_variant,
- ref niche_variants,
- niche_start,
- ..
- } => {
- if variant_index != dataful_variant {
- let (niche_dest, niche) =
- self.place_field(dest, mir::Field::new(0), layout)?;
- let niche_value = ((variant_index - niche_variants.start()) as u128)
- .wrapping_add(niche_start);
- self.write_scalar(niche_dest, Scalar::Bits {
- bits: niche_value,
- size: niche.size.bytes() as u8,
- }, niche.ty)?;
- }
- }
- }
-
- Ok(())
- }
-
- pub fn read_global_as_value(&mut self, gid: GlobalId<'tcx>) -> EvalResult<'tcx, Value> {
- let cv = self.const_eval(gid)?;
- self.const_to_value(cv.val)
- }
-
pub fn const_eval(&self, gid: GlobalId<'tcx>) -> EvalResult<'tcx, &'tcx ty::Const<'tcx>> {
let param_env = if self.tcx.is_static(gid.instance.def_id()).is_some() {
ty::ParamEnv::reveal_all()
self.tcx.const_eval(param_env.and(gid)).map_err(|err| EvalErrorKind::ReferencedConstant(err).into())
}
- pub fn allocate_place_for_value(
- &mut self,
- value: Value,
- layout: TyLayout<'tcx>,
- variant: Option<usize>,
- ) -> EvalResult<'tcx, Place> {
- let (ptr, align) = match value {
- Value::ByRef(ptr, align) => (ptr, align),
- Value::ScalarPair(..) | Value::Scalar(_) => {
- let ptr = self.alloc_ptr(layout)?.into();
- self.write_value_to_ptr(value, ptr, layout.align, layout.ty)?;
- (ptr, layout.align)
- },
- };
- Ok(Place::Ptr {
- ptr: ptr.into(),
- align,
- extra: variant.map_or(PlaceExtra::None, PlaceExtra::DowncastVariant),
- })
- }
-
- pub fn force_allocation(&mut self, place: Place) -> EvalResult<'tcx, Place> {
- let new_place = match place {
- Place::Local { frame, local } => {
- match self.stack[frame].locals[local].access()? {
- Value::ByRef(ptr, align) => {
- Place::Ptr {
- ptr: ptr.into(),
- align,
- extra: PlaceExtra::None,
- }
- }
- val => {
- let ty = self.stack[frame].mir.local_decls[local].ty;
- let ty = self.monomorphize(ty, self.stack[frame].instance.substs);
- let layout = self.layout_of(ty)?;
- let ptr = self.alloc_ptr(layout)?;
- self.stack[frame].locals[local] =
- LocalValue::Live(Value::ByRef(ptr.into(), layout.align)); // it stays live
-
- let place = Place::from_ptr(ptr, layout.align);
- self.write_value(ValTy { value: val, ty }, place)?;
- place
- }
- }
- }
- Place::Ptr { .. } => place,
- };
- Ok(new_place)
- }
-
- /// ensures this Value is not a ByRef
- pub fn follow_by_ref_value(
- &self,
- value: Value,
- ty: Ty<'tcx>,
- ) -> EvalResult<'tcx, Value> {
- match value {
- Value::ByRef(ptr, align) => {
- self.read_value(ptr, align, ty)
- }
- other => Ok(other),
- }
- }
-
- pub fn value_to_scalar(
- &self,
- ValTy { value, ty } : ValTy<'tcx>,
- ) -> EvalResult<'tcx, Scalar> {
- match self.follow_by_ref_value(value, ty)? {
- Value::ByRef { .. } => bug!("follow_by_ref_value can't result in `ByRef`"),
-
- Value::Scalar(scalar) => scalar.unwrap_or_err(),
-
- Value::ScalarPair(..) => bug!("value_to_scalar can't work with fat pointers"),
- }
- }
-
- pub fn write_ptr(&mut self, dest: Place, val: Scalar, dest_ty: Ty<'tcx>) -> EvalResult<'tcx> {
- let valty = ValTy {
- value: val.to_value(),
- ty: dest_ty,
- };
- self.write_value(valty, dest)
- }
-
- pub fn write_scalar(
- &mut self,
- dest: Place,
- val: impl Into<ScalarMaybeUndef>,
- dest_ty: Ty<'tcx>,
- ) -> EvalResult<'tcx> {
- let valty = ValTy {
- value: Value::Scalar(val.into()),
- ty: dest_ty,
- };
- self.write_value(valty, dest)
- }
-
- pub fn write_value(
- &mut self,
- ValTy { value: src_val, ty: dest_ty } : ValTy<'tcx>,
- dest: Place,
- ) -> EvalResult<'tcx> {
- //trace!("Writing {:?} to {:?} at type {:?}", src_val, dest, dest_ty);
- // Note that it is really important that the type here is the right one, and matches the type things are read at.
- // In case `src_val` is a `ScalarPair`, we don't do any magic here to handle padding properly, which is only
- // correct if we never look at this data with the wrong type.
-
- match dest {
- Place::Ptr { ptr, align, extra } => {
- assert_eq!(extra, PlaceExtra::None);
- self.write_value_to_ptr(src_val, ptr.unwrap_or_err()?, align, dest_ty)
- }
-
- Place::Local { frame, local } => {
- let old_val = self.stack[frame].locals[local].access()?;
- self.write_value_possibly_by_val(
- src_val,
- |this, val| this.stack[frame].set_local(local, val),
- old_val,
- dest_ty,
- )
- }
- }
- }
-
- // The cases here can be a bit subtle. Read carefully!
- fn write_value_possibly_by_val<F: FnOnce(&mut Self, Value) -> EvalResult<'tcx>>(
- &mut self,
- src_val: Value,
- write_dest: F,
- old_dest_val: Value,
- dest_ty: Ty<'tcx>,
- ) -> EvalResult<'tcx> {
- // FIXME: this should be a layout check, not underlying value
- if let Value::ByRef(dest_ptr, align) = old_dest_val {
- // If the value is already `ByRef` (that is, backed by an `Allocation`),
- // then we must write the new value into this allocation, because there may be
- // other pointers into the allocation. These other pointers are logically
- // pointers into the local variable, and must be able to observe the change.
- //
- // Thus, it would be an error to replace the `ByRef` with a `ByVal`, unless we
- // knew for certain that there were no outstanding pointers to this allocation.
- self.write_value_to_ptr(src_val, dest_ptr, align, dest_ty)?;
- } else if let Value::ByRef(src_ptr, align) = src_val {
- // If the value is not `ByRef`, then we know there are no pointers to it
- // and we can simply overwrite the `Value` in the locals array directly.
- //
- // In this specific case, where the source value is `ByRef`, we must duplicate
- // the allocation, because this is a by-value operation. It would be incorrect
- // if they referred to the same allocation, since then a change to one would
- // implicitly change the other.
- //
- // It is a valid optimization to attempt reading a primitive value out of the
- // source and write that into the destination without making an allocation, so
- // we do so here.
- if let Ok(Some(src_val)) = self.try_read_value(src_ptr, align, dest_ty) {
- write_dest(self, src_val)?;
- } else {
- let layout = self.layout_of(dest_ty)?;
- let dest_ptr = self.alloc_ptr(layout)?.into();
- self.memory.copy(src_ptr, align.min(layout.align), dest_ptr, layout.align, layout.size, false)?;
- write_dest(self, Value::ByRef(dest_ptr, layout.align))?;
- }
- } else {
- // Finally, we have the simple case where neither source nor destination are
- // `ByRef`. We may simply copy the source value over the the destintion.
- write_dest(self, src_val)?;
- }
- Ok(())
- }
-
- pub fn write_value_to_ptr(
- &mut self,
- value: Value,
- dest: Scalar,
- dest_align: Align,
- dest_ty: Ty<'tcx>,
- ) -> EvalResult<'tcx> {
- let layout = self.layout_of(dest_ty)?;
- trace!("write_value_to_ptr: {:#?}, {}, {:#?}", value, dest_ty, layout);
- match value {
- Value::ByRef(ptr, align) => {
- self.memory.copy(ptr, align.min(layout.align), dest, dest_align.min(layout.align), layout.size, false)
- }
- Value::Scalar(scalar) => {
- let signed = match layout.abi {
- layout::Abi::Scalar(ref scal) => match scal.value {
- layout::Primitive::Int(_, signed) => signed,
- _ => false,
- },
- _ => false,
- };
- self.memory.write_scalar(dest, dest_align, scalar, layout.size, layout.align, signed)
- }
- Value::ScalarPair(a_val, b_val) => {
- trace!("write_value_to_ptr valpair: {:#?}", layout);
- let (a, b) = match layout.abi {
- layout::Abi::ScalarPair(ref a, ref b) => (&a.value, &b.value),
- _ => bug!("write_value_to_ptr: invalid ScalarPair layout: {:#?}", layout)
- };
- let (a_size, b_size) = (a.size(&self), b.size(&self));
- let (a_align, b_align) = (a.align(&self), b.align(&self));
- let a_ptr = dest;
- let b_offset = a_size.abi_align(b_align);
- let b_ptr = dest.ptr_offset(b_offset, &self)?.into();
- // TODO: What about signedess?
- self.memory.write_scalar(a_ptr, dest_align, a_val, a_size, a_align, false)?;
- self.memory.write_scalar(b_ptr, dest_align, b_val, b_size, b_align, false)
- }
- }
- }
-
- pub fn read_value(&self, ptr: Scalar, align: Align, ty: Ty<'tcx>) -> EvalResult<'tcx, Value> {
- if let Some(val) = self.try_read_value(ptr, align, ty)? {
- Ok(val)
- } else {
- bug!("primitive read failed for type: {:?}", ty);
- }
- }
-
- fn validate_scalar(
- &self,
- value: ScalarMaybeUndef,
- size: Size,
- scalar: &layout::Scalar,
- path: &str,
- ty: Ty,
- ) -> EvalResult<'tcx> {
- trace!("validate scalar: {:#?}, {:#?}, {:#?}, {}", value, size, scalar, ty);
- let (lo, hi) = scalar.valid_range.clone().into_inner();
-
- let value = match value {
- ScalarMaybeUndef::Scalar(scalar) => scalar,
- ScalarMaybeUndef::Undef => return validation_failure!("undefined bytes", path),
- };
-
- let bits = match value {
- Scalar::Bits { bits, size: value_size } => {
- assert_eq!(value_size as u64, size.bytes());
- bits
- },
- Scalar::Ptr(_) => {
- let ptr_size = self.memory.pointer_size();
- let ptr_max = u128::max_value() >> (128 - ptr_size.bits());
- return if lo > hi {
- if lo - hi == 1 {
- // no gap, all values are ok
- Ok(())
- } else if hi < ptr_max || lo > 1 {
- let max = u128::max_value() >> (128 - size.bits());
- validation_failure!(
- "pointer",
- path,
- format!("something in the range {:?} or {:?}", 0..=lo, hi..=max)
- )
- } else {
- Ok(())
- }
- } else if hi < ptr_max || lo > 1 {
- validation_failure!(
- "pointer",
- path,
- format!("something in the range {:?}", scalar.valid_range)
- )
- } else {
- Ok(())
- };
- },
- };
-
- // char gets a special treatment, because its number space is not contiguous so `TyLayout`
- // has no special checks for chars
- match ty.sty {
- ty::TyChar => {
- debug_assert_eq!(size.bytes(), 4);
- if ::std::char::from_u32(bits as u32).is_none() {
- return err!(InvalidChar(bits));
- }
- }
- _ => {},
- }
-
- use std::ops::RangeInclusive;
- let in_range = |bound: RangeInclusive<u128>| bound.contains(&bits);
- if lo > hi {
- if in_range(0..=hi) || in_range(lo..=u128::max_value()) {
- Ok(())
- } else {
- validation_failure!(
- bits,
- path,
- format!("something in the range {:?} or {:?}", ..=hi, lo..)
- )
- }
- } else {
- if in_range(scalar.valid_range.clone()) {
- Ok(())
- } else {
- validation_failure!(
- bits,
- path,
- format!("something in the range {:?}", scalar.valid_range)
- )
- }
- }
- }
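The `lo > hi` cases in the removed `validate_scalar` encode a wrapping valid range (valid values are `..=hi` plus `lo..`, wrapping past the type's maximum). A hypothetical helper capturing just that membership check:

```rust
// Hypothetical helper: when `lo > hi`, the valid range wraps around,
// so a value is valid iff it is <= hi or >= lo.
fn in_valid_range(lo: u128, hi: u128, bits: u128) -> bool {
    if lo > hi {
        bits <= hi || bits >= lo
    } else {
        lo <= bits && bits <= hi
    }
}

fn main() {
    // Contiguous range 1..=10:
    assert!(in_valid_range(1, 10, 5));
    assert!(!in_valid_range(1, 10, 0));
    // Wrapping range: everything except 3..=9 is valid.
    assert!(in_valid_range(10, 2, 1));
    assert!(!in_valid_range(10, 2, 5));
    println!("ok");
}
```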
-
- /// This function checks the memory where `ptr` points to.
- /// It will error if the bits at the destination do not match the ones described by the layout.
- pub fn validate_ptr_target(
- &self,
- ptr: Pointer,
- ptr_align: Align,
- mut layout: TyLayout<'tcx>,
- path: String,
- seen: &mut FxHashSet<(Pointer, Ty<'tcx>)>,
- todo: &mut Vec<(Pointer, Ty<'tcx>, String)>,
- ) -> EvalResult<'tcx> {
- self.memory.dump_alloc(ptr.alloc_id);
- trace!("validate_ptr_target: {:?}, {:#?}", ptr, layout);
-
- let variant;
- match layout.variants {
- layout::Variants::NicheFilling { niche: ref tag, .. } |
- layout::Variants::Tagged { ref tag, .. } => {
- let size = tag.value.size(self);
- let (tag_value, tag_layout) = self.read_field(
- Value::ByRef(ptr.into(), ptr_align),
- None,
- mir::Field::new(0),
- layout,
- )?;
- let tag_value = match self.follow_by_ref_value(tag_value, tag_layout.ty)? {
- Value::Scalar(val) => val,
- _ => bug!("tag must be scalar"),
- };
- let path = format!("{}.TAG", path);
- self.validate_scalar(tag_value, size, tag, &path, tag_layout.ty)?;
- let variant_index = self.read_discriminant_as_variant_index(
- Place::from_ptr(ptr, ptr_align),
- layout,
- )?;
- variant = variant_index;
- layout = layout.for_variant(self, variant_index);
- trace!("variant layout: {:#?}", layout);
- },
- layout::Variants::Single { index } => variant = index,
- }
- match layout.fields {
- // primitives are unions with zero fields
- layout::FieldPlacement::Union(0) => {
- match layout.abi {
- // nothing to do, whatever the pointer points to, it is never going to be read
- layout::Abi::Uninhabited => validation_failure!("a value of an uninhabited type", path),
- // check that the scalar is a valid pointer or that its bit range matches the
- // expectation.
- layout::Abi::Scalar(ref scalar) => {
- let size = scalar.value.size(self);
- let value = self.memory.read_scalar(ptr, ptr_align, size)?;
- self.validate_scalar(value, size, scalar, &path, layout.ty)?;
- if scalar.value == Primitive::Pointer {
- // ignore integer pointers, we can't reason about the final hardware
- if let Scalar::Ptr(ptr) = value.unwrap_or_err()? {
- let alloc_kind = self.tcx.alloc_map.lock().get(ptr.alloc_id);
- if let Some(AllocType::Static(did)) = alloc_kind {
- // statics from other crates are already checked
- // extern statics should not be validated as they have no body
- if !did.is_local() || self.tcx.is_foreign_item(did) {
- return Ok(());
- }
- }
- if let Some(tam) = layout.ty.builtin_deref(false) {
- // we have not encountered this pointer+layout combination before
- if seen.insert((ptr, tam.ty)) {
- todo.push((ptr, tam.ty, format!("(*{})", path)))
- }
- }
- }
- }
- Ok(())
- },
- _ => bug!("bad abi for FieldPlacement::Union(0): {:#?}", layout.abi),
- }
- }
- layout::FieldPlacement::Union(_) => {
- // We can't check unions, their bits are allowed to be anything.
- // The fields don't need to correspond to any bit pattern of the union's fields.
- // See https://github.com/rust-lang/rust/issues/32836#issuecomment-406875389
- Ok(())
- },
- layout::FieldPlacement::Array { stride, count } => {
- let elem_layout = layout.field(self, 0)?;
- for i in 0..count {
- let mut path = path.clone();
- self.write_field_name(&mut path, layout.ty, i as usize, variant).unwrap();
- self.validate_ptr_target(ptr.offset(stride * i, self)?, ptr_align, elem_layout, path, seen, todo)?;
- }
- Ok(())
- },
- layout::FieldPlacement::Arbitrary { ref offsets, .. } => {
-
- // check length field and vtable field
- match layout.ty.builtin_deref(false).map(|tam| &tam.ty.sty) {
- | Some(ty::TyStr)
- | Some(ty::TySlice(_)) => {
- let (len, len_layout) = self.read_field(
- Value::ByRef(ptr.into(), ptr_align),
- None,
- mir::Field::new(1),
- layout,
- )?;
- let len = self.value_to_scalar(ValTy { value: len, ty: len_layout.ty })?;
- if len.to_bits(len_layout.size).is_err() {
- return validation_failure!("length is not a valid integer", path);
- }
- },
- Some(ty::TyDynamic(..)) => {
- let (vtable, vtable_layout) = self.read_field(
- Value::ByRef(ptr.into(), ptr_align),
- None,
- mir::Field::new(1),
- layout,
- )?;
- let vtable = self.value_to_scalar(ValTy { value: vtable, ty: vtable_layout.ty })?;
- if vtable.to_ptr().is_err() {
- return validation_failure!("vtable address is not a pointer", path);
- }
- }
- _ => {},
- }
- for (i, &offset) in offsets.iter().enumerate() {
- let field_layout = layout.field(self, i)?;
- let mut path = path.clone();
- self.write_field_name(&mut path, layout.ty, i, variant).unwrap();
- self.validate_ptr_target(ptr.offset(offset, self)?, ptr_align, field_layout, path, seen, todo)?;
- }
- Ok(())
- }
- }
- }
-
- pub fn try_read_by_ref(&self, mut val: Value, ty: Ty<'tcx>) -> EvalResult<'tcx, Value> {
- // Convert to ByVal or ScalarPair if possible
- if let Value::ByRef(ptr, align) = val {
- if let Some(read_val) = self.try_read_value(ptr, align, ty)? {
- val = read_val;
- }
- }
- Ok(val)
- }
-
- pub fn try_read_value(&self, ptr: Scalar, ptr_align: Align, ty: Ty<'tcx>) -> EvalResult<'tcx, Option<Value>> {
- let layout = self.layout_of(ty)?;
- self.memory.check_align(ptr, ptr_align)?;
-
- if layout.size.bytes() == 0 {
- return Ok(Some(Value::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits { bits: 0, size: 0 }))));
- }
-
- let ptr = ptr.to_ptr()?;
-
- match layout.abi {
- layout::Abi::Scalar(..) => {
- let scalar = self.memory.read_scalar(ptr, ptr_align, layout.size)?;
- Ok(Some(Value::Scalar(scalar)))
- }
- layout::Abi::ScalarPair(ref a, ref b) => {
- let (a, b) = (&a.value, &b.value);
- let (a_size, b_size) = (a.size(self), b.size(self));
- let a_ptr = ptr;
- let b_offset = a_size.abi_align(b.align(self));
- let b_ptr = ptr.offset(b_offset, self)?.into();
- let a_val = self.memory.read_scalar(a_ptr, ptr_align, a_size)?;
- let b_val = self.memory.read_scalar(b_ptr, ptr_align, b_size)?;
- Ok(Some(Value::ScalarPair(a_val, b_val)))
- }
- _ => Ok(None),
- }
- }
-
+ #[inline(always)]
pub fn frame(&self) -> &Frame<'mir, 'tcx> {
self.stack.last().expect("no call frames exist")
}
+ #[inline(always)]
pub fn frame_mut(&mut self) -> &mut Frame<'mir, 'tcx> {
self.stack.last_mut().expect("no call frames exist")
}
}
}
- fn unsize_into_ptr(
- &mut self,
- src: Value,
- src_ty: Ty<'tcx>,
- dest: Place,
- dest_ty: Ty<'tcx>,
- sty: Ty<'tcx>,
- dty: Ty<'tcx>,
- ) -> EvalResult<'tcx> {
- // A<Struct> -> A<Trait> conversion
- let (src_pointee_ty, dest_pointee_ty) = self.tcx.struct_lockstep_tails(sty, dty);
-
- match (&src_pointee_ty.sty, &dest_pointee_ty.sty) {
- (&ty::TyArray(_, length), &ty::TySlice(_)) => {
- let ptr = self.into_ptr(src)?;
- // u64 cast is from usize to u64, which is always good
- let valty = ValTy {
- value: ptr.to_value_with_len(length.unwrap_usize(self.tcx.tcx), self.tcx.tcx),
- ty: dest_ty,
- };
- self.write_value(valty, dest)
- }
- (&ty::TyDynamic(..), &ty::TyDynamic(..)) => {
- // For now, upcasts are limited to changes in marker
- // traits, and hence never actually require an actual
- // change to the vtable.
- let valty = ValTy {
- value: src,
- ty: dest_ty,
- };
- self.write_value(valty, dest)
- }
- (_, &ty::TyDynamic(ref data, _)) => {
- let trait_ref = data.principal().unwrap().with_self_ty(
- *self.tcx,
- src_pointee_ty,
- );
- let trait_ref = self.tcx.erase_regions(&trait_ref);
- let vtable = self.get_vtable(src_pointee_ty, trait_ref)?;
- let ptr = self.into_ptr(src)?;
- let valty = ValTy {
- value: ptr.to_value_with_vtable(vtable),
- ty: dest_ty,
- };
- self.write_value(valty, dest)
- }
-
- _ => bug!("invalid unsizing {:?} -> {:?}", src_ty, dest_ty),
- }
- }
-
- crate fn unsize_into(
- &mut self,
- src: Value,
- src_layout: TyLayout<'tcx>,
- dst: Place,
- dst_layout: TyLayout<'tcx>,
- ) -> EvalResult<'tcx> {
- match (&src_layout.ty.sty, &dst_layout.ty.sty) {
- (&ty::TyRef(_, s, _), &ty::TyRef(_, d, _)) |
- (&ty::TyRef(_, s, _), &ty::TyRawPtr(TypeAndMut { ty: d, .. })) |
- (&ty::TyRawPtr(TypeAndMut { ty: s, .. }),
- &ty::TyRawPtr(TypeAndMut { ty: d, .. })) => {
- self.unsize_into_ptr(src, src_layout.ty, dst, dst_layout.ty, s, d)
- }
- (&ty::TyAdt(def_a, _), &ty::TyAdt(def_b, _)) => {
- assert_eq!(def_a, def_b);
- if def_a.is_box() || def_b.is_box() {
- if !def_a.is_box() || !def_b.is_box() {
- bug!("invalid unsizing between {:?} -> {:?}", src_layout, dst_layout);
- }
- return self.unsize_into_ptr(
- src,
- src_layout.ty,
- dst,
- dst_layout.ty,
- src_layout.ty.boxed_ty(),
- dst_layout.ty.boxed_ty(),
- );
- }
-
- // unsizing of generic struct with pointer fields
- // Example: `Arc<T>` -> `Arc<Trait>`
- // here we need to increase the size of every &T thin ptr field to a fat ptr
- for i in 0..src_layout.fields.count() {
- let (dst_f_place, dst_field) =
- self.place_field(dst, mir::Field::new(i), dst_layout)?;
- if dst_field.is_zst() {
- continue;
- }
- let (src_f_value, src_field) = match src {
- Value::ByRef(ptr, align) => {
- let src_place = Place::from_scalar_ptr(ptr.into(), align);
- let (src_f_place, src_field) =
- self.place_field(src_place, mir::Field::new(i), src_layout)?;
- (self.read_place(src_f_place)?, src_field)
- }
- Value::Scalar(_) | Value::ScalarPair(..) => {
- let src_field = src_layout.field(&self, i)?;
- assert_eq!(src_layout.fields.offset(i).bytes(), 0);
- assert_eq!(src_field.size, src_layout.size);
- (src, src_field)
- }
- };
- if src_field.ty == dst_field.ty {
- self.write_value(ValTy {
- value: src_f_value,
- ty: src_field.ty,
- }, dst_f_place)?;
- } else {
- self.unsize_into(src_f_value, src_field, dst_f_place, dst_field)?;
- }
- }
- Ok(())
- }
- _ => {
- bug!(
- "unsize_into: invalid conversion: {:?} -> {:?}",
- src_layout,
- dst_layout
- )
- }
- }
- }
-
- pub fn dump_local(&self, place: Place) {
+ pub fn dump_place(&self, place: Place) {
// Debug output
if !log_enabled!(::log::Level::Trace) {
return;
panic!("Failed to access local: {:?}", err);
}
}
- Ok(Value::ByRef(ptr, align)) => {
+ Ok(Operand::Indirect(mplace)) => {
+ let (ptr, align) = mplace.to_scalar_ptr_align();
match ptr {
Scalar::Ptr(ptr) => {
write!(msg, " by align({}) ref:", align.abi()).unwrap();
allocs.push(ptr.alloc_id);
}
- ptr => write!(msg, " integral by ref: {:?}", ptr).unwrap(),
+ ptr => write!(msg, " by integral ref: {:?}", ptr).unwrap(),
}
}
- Ok(Value::Scalar(val)) => {
+ Ok(Operand::Immediate(Value::Scalar(val))) => {
write!(msg, " {:?}", val).unwrap();
if let ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) = val {
allocs.push(ptr.alloc_id);
}
}
- Ok(Value::ScalarPair(val1, val2)) => {
+ Ok(Operand::Immediate(Value::ScalarPair(val1, val2))) => {
write!(msg, " ({:?}, {:?})", val1, val2).unwrap();
if let ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) = val1 {
allocs.push(ptr.alloc_id);
trace!("{}", msg);
self.memory.dump_allocs(allocs);
}
- Place::Ptr { ptr, align, .. } => {
- match ptr {
- ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) => {
- trace!("by align({}) ref:", align.abi());
+ Place::Ptr(mplace) => {
+ match mplace.ptr {
+ Scalar::Ptr(ptr) => {
+ trace!("by align({}) ref:", mplace.align.abi());
self.memory.dump_alloc(ptr.alloc_id);
}
ptr => trace!(" integral by ref: {:?}", ptr),
(frames, self.tcx.span)
}
+ #[inline(always)]
pub fn sign_extend(&self, value: u128, ty: TyLayout<'_>) -> u128 {
- super::sign_extend(value, ty)
+ assert!(ty.abi.is_signed());
+ sign_extend(value, ty.size)
}
+ #[inline(always)]
pub fn truncate(&self, value: u128, ty: TyLayout<'_>) -> u128 {
- super::truncate(value, ty)
- }
-
- fn write_field_name(&self, s: &mut String, ty: Ty<'tcx>, i: usize, variant: usize) -> ::std::fmt::Result {
- match ty.sty {
- ty::TyBool |
- ty::TyChar |
- ty::TyInt(_) |
- ty::TyUint(_) |
- ty::TyFloat(_) |
- ty::TyFnPtr(_) |
- ty::TyNever |
- ty::TyFnDef(..) |
- ty::TyGeneratorWitness(..) |
- ty::TyForeign(..) |
- ty::TyDynamic(..) => {
- bug!("field_name({:?}): not applicable", ty)
- }
-
- // Potentially-fat pointers.
- ty::TyRef(_, pointee, _) |
- ty::TyRawPtr(ty::TypeAndMut { ty: pointee, .. }) => {
- assert!(i < 2);
-
- // Reuse the fat *T type as its own thin pointer data field.
- // This provides information about e.g. DST struct pointees
- // (which may have no non-DST form), and will work as long
- // as the `Abi` or `FieldPlacement` is checked by users.
- if i == 0 {
- return write!(s, ".data_ptr");
- }
-
- match self.tcx.struct_tail(pointee).sty {
- ty::TySlice(_) |
- ty::TyStr => write!(s, ".len"),
- ty::TyDynamic(..) => write!(s, ".vtable_ptr"),
- _ => bug!("field_name({:?}): not applicable", ty)
- }
- }
-
- // Arrays and slices.
- ty::TyArray(_, _) |
- ty::TySlice(_) |
- ty::TyStr => write!(s, "[{}]", i),
-
- // generators and closures.
- ty::TyClosure(def_id, _) | ty::TyGenerator(def_id, _, _) => {
- let node_id = self.tcx.hir.as_local_node_id(def_id).unwrap();
- let freevar = self.tcx.with_freevars(node_id, |fv| fv[i]);
- write!(s, ".upvar({})", self.tcx.hir.name(freevar.var_id()))
- }
-
- ty::TyTuple(_) => write!(s, ".{}", i),
-
- // enums
- ty::TyAdt(def, ..) if def.is_enum() => {
- let variant = &def.variants[variant];
- write!(s, ".{}::{}", variant.name, variant.fields[i].ident)
- }
-
- // other ADTs.
- ty::TyAdt(def, _) => write!(s, ".{}", def.non_enum_variant().fields[i].ident),
-
- ty::TyProjection(_) | ty::TyAnon(..) | ty::TyParam(_) |
- ty::TyInfer(_) | ty::TyError => {
- bug!("write_field_name: unexpected type `{}`", ty)
- }
- }
- }
-
- pub fn storage_live(&mut self, local: mir::Local) -> EvalResult<'tcx, LocalValue> {
- trace!("{:?} is now live", local);
-
- let ty = self.frame().mir.local_decls[local].ty;
- let init = self.init_value(ty)?;
- // StorageLive *always* kills the value that's currently stored
- Ok(mem::replace(&mut self.frame_mut().locals[local], LocalValue::Live(init)))
- }
-
- fn init_value(&mut self, ty: Ty<'tcx>) -> EvalResult<'tcx, Value> {
- let ty = self.monomorphize(ty, self.substs());
- let layout = self.layout_of(ty)?;
- Ok(match layout.abi {
- layout::Abi::Scalar(..) => Value::Scalar(ScalarMaybeUndef::Undef),
- layout::Abi::ScalarPair(..) => Value::ScalarPair(
- ScalarMaybeUndef::Undef,
- ScalarMaybeUndef::Undef,
- ),
- _ => Value::ByRef(self.alloc_ptr(layout)?.into(), layout.align),
- })
+ truncate(value, ty.size)
}
}
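The new `sign_extend`/`truncate` methods above delegate the bit manipulation to free functions operating on `u128` bit patterns. The shift-based trick can be sketched in standalone Rust (a simplified version of the interpreter's helpers, taking the width in bits rather than a `Size`):

```rust
// Simplified sketch of the interpreter's sign_extend/truncate helpers.
// `size` is the width of the value in bits (e.g. 8 for i8/u8).

fn sign_extend(value: u128, size: u64) -> u128 {
    let shift = 128 - size;
    // Shift left so the value's sign bit becomes bit 127, then shift back
    // as i128: the arithmetic right shift fills the high bits with copies
    // of the sign bit.
    (((value << shift) as i128) >> shift) as u128
}

fn truncate(value: u128, size: u64) -> u128 {
    let shift = 128 - size;
    // Shift the high bits out, then shift back, filling with zeroes.
    (value << shift) >> shift
}

fn main() {
    // 0xFF as an 8-bit signed value is -1; sign-extending to 128 bits
    // yields all-ones, i.e. u128::MAX.
    assert_eq!(sign_extend(0xFF, 8), u128::MAX);
    // 0x7F is positive, so sign extension leaves it unchanged.
    assert_eq!(sign_extend(0x7F, 8), 0x7F);
    // Truncating 0x1FF to 8 bits drops the ninth bit.
    assert_eq!(truncate(0x1FF, 8), 0xFF);
    println!("ok");
}
```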
-impl<'mir, 'tcx> Frame<'mir, 'tcx> {
- fn set_local(&mut self, local: mir::Local, value: Value) -> EvalResult<'tcx> {
- match self.locals[local] {
- LocalValue::Dead => err!(DeadLocal),
- LocalValue::Live(ref mut local) => {
- *local = value;
- Ok(())
- }
- }
- }
-
- /// Returns the old value of the local
- pub fn storage_dead(&mut self, local: mir::Local) -> LocalValue {
- trace!("{:?} is now dead", local);
-
- mem::replace(&mut self.locals[local], LocalValue::Dead)
- }
-}
use std::hash::Hash;
use rustc::mir::interpret::{AllocId, EvalResult, Scalar, Pointer, AccessKind, GlobalId};
-use super::{EvalContext, Place, ValTy, Memory};
+use super::{EvalContext, PlaceTy, OpTy, Memory};
use rustc::mir;
-use rustc::ty::{self, Ty};
+use rustc::ty::{self, layout::TyLayout};
use rustc::ty::layout::Size;
use syntax::source_map::Span;
use syntax::ast::Mutability;
fn eval_fn_call<'a>(
ecx: &mut EvalContext<'a, 'mir, 'tcx, Self>,
instance: ty::Instance<'tcx>,
- destination: Option<(Place, mir::BasicBlock)>,
- args: &[ValTy<'tcx>],
+ destination: Option<(PlaceTy<'tcx>, mir::BasicBlock)>,
+ args: &[OpTy<'tcx>],
span: Span,
- sig: ty::FnSig<'tcx>,
) -> EvalResult<'tcx, bool>;
/// directly process an intrinsic without pushing a stack frame.
fn call_intrinsic<'a>(
ecx: &mut EvalContext<'a, 'mir, 'tcx, Self>,
instance: ty::Instance<'tcx>,
- args: &[ValTy<'tcx>],
- dest: Place,
- dest_layout: ty::layout::TyLayout<'tcx>,
+ args: &[OpTy<'tcx>],
+ dest: PlaceTy<'tcx>,
target: mir::BasicBlock,
) -> EvalResult<'tcx>;
ecx: &EvalContext<'a, 'mir, 'tcx, Self>,
bin_op: mir::BinOp,
left: Scalar,
- left_ty: Ty<'tcx>,
+ left_layout: TyLayout<'tcx>,
right: Scalar,
- right_ty: Ty<'tcx>,
+ right_layout: TyLayout<'tcx>,
) -> EvalResult<'tcx, Option<(Scalar, bool)>>;
/// Called when trying to mark machine defined `MemoryKinds` as static
/// Returns a pointer to the allocated memory
fn box_alloc<'a>(
ecx: &mut EvalContext<'a, 'mir, 'tcx, Self>,
- ty: Ty<'tcx>,
- dest: Place,
+ dest: PlaceTy<'tcx>,
) -> EvalResult<'tcx>;
/// Called when trying to access a global declared with a `linkage` attribute
+//! The memory subsystem.
+//!
+//! Generally, we use `Pointer` to denote memory addresses. However, some operations
+//! have a "size"-like parameter, and they take `Scalar` for the address because
+//! if the size is 0, then the pointer can also be a (properly aligned, non-NULL)
+//! integer. It is crucial that these operations call `check_align` *before*
+//! short-circuiting the empty case!
+
use std::collections::VecDeque;
use std::hash::{Hash, Hasher};
use std::ptr;
use rustc::ty::ParamEnv;
use rustc::ty::query::TyCtxtAt;
use rustc::ty::layout::{self, Align, TargetDataLayout, Size};
-use rustc::mir::interpret::{Pointer, AllocId, Allocation, AccessKind, Value, ScalarMaybeUndef,
- EvalResult, Scalar, EvalErrorKind, GlobalId, AllocType};
-pub use rustc::mir::interpret::{write_target_uint, write_target_int, read_target_uint};
+use rustc::mir::interpret::{Pointer, AllocId, Allocation, AccessKind, ScalarMaybeUndef,
+ EvalResult, Scalar, EvalErrorKind, GlobalId, AllocType, truncate};
+pub use rustc::mir::interpret::{write_target_uint, read_target_uint};
use rustc_data_structures::fx::{FxHashSet, FxHashMap, FxHasher};
use syntax::ast::Mutability;
use super::{EvalContext, Machine};
+
////////////////////////////////////////////////////////////////////////////////
// Allocations and pointers
////////////////////////////////////////////////////////////////////////////////
/// Actual memory allocations (arbitrary bytes, may contain pointers into other allocations).
alloc_map: FxHashMap<AllocId, Allocation>,
- /// The current stack frame. Used to check accesses against locks.
- pub cur_frame: usize,
-
pub tcx: TyCtxtAt<'a, 'tcx, 'tcx>,
}
data,
alloc_kind,
alloc_map,
- cur_frame,
tcx: _,
} = self;
*data == other.data
&& *alloc_kind == other.alloc_kind
&& *alloc_map == other.alloc_map
- && *cur_frame == other.cur_frame
}
}
data,
alloc_kind: _,
alloc_map: _,
- cur_frame,
tcx: _,
} = self;
data.hash(state);
- cur_frame.hash(state);
// We ignore some fields which don't change between evaluation steps.
alloc_kind: FxHashMap::default(),
alloc_map: FxHashMap::default(),
tcx,
- cur_frame: usize::max_value(),
}
}
self.tcx.data_layout.endian
}
- /// Check that the pointer is aligned AND non-NULL.
+ /// Check that the pointer is aligned AND non-NULL. This supports scalars
+    /// for the benefit of other parts of miri that need to check alignment even for ZSTs.
pub fn check_align(&self, ptr: Scalar, required_align: Align) -> EvalResult<'tcx> {
// Check non-NULL/Undef, extract offset
let (offset, alloc_align) = match ptr {
}
}
+ /// Check if the pointer is "in-bounds". Notice that a pointer pointing at the end
+ /// of an allocation (i.e., at the first *inaccessible* location) *is* considered
+ /// in-bounds! This follows C's/LLVM's rules.
pub fn check_bounds(&self, ptr: Pointer, access: bool) -> EvalResult<'tcx> {
let alloc = self.get(ptr.alloc_id)?;
let allocation_size = alloc.bytes.len() as u64;
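The "one past the end is still in-bounds" rule noted in the new doc comment mirrors C/LLVM pointer semantics. As a hedged sketch (simplified offsets, not the actual `Memory` API), a bounds check admitting the end offset looks like:

```rust
/// Simplified sketch of an in-bounds check following C's/LLVM's rule:
/// an offset equal to the allocation size (one past the last byte) is
/// still considered in-bounds, even though it must never be dereferenced.
fn check_bounds(offset: u64, allocation_size: u64) -> Result<(), String> {
    if offset > allocation_size {
        return Err(format!(
            "pointer offset {} outside allocation of size {}",
            offset, allocation_size
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_bounds(0, 4).is_ok());
    assert!(check_bounds(4, 4).is_ok()); // one past the end: in-bounds
    assert!(check_bounds(5, 4).is_err());
    println!("ok");
}
```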
assert!(self.tcx.is_static(def_id).is_some());
EvalErrorKind::ReferencedConstant(err).into()
}).map(|val| {
- self.tcx.const_value_to_allocation(val)
+ self.tcx.const_to_allocation(val)
})
}
/// Byte accessors
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
+ /// This checks alignment!
fn get_bytes_unchecked(
&self,
ptr: Pointer,
Ok(&alloc.bytes[offset..offset + size.bytes() as usize])
}
+ /// This checks alignment!
fn get_bytes_unchecked_mut(
&mut self,
ptr: Pointer,
) -> EvalResult<'tcx, &mut [u8]> {
assert_ne!(size.bytes(), 0);
self.clear_relocations(ptr, size)?;
- self.mark_definedness(ptr.into(), size, true)?;
+ self.mark_definedness(ptr, size, true)?;
self.get_bytes_unchecked_mut(ptr, size, align)
}
}
Some(MemoryKind::Stack) => {},
}
if let Some(mut alloc) = alloc {
- // ensure llvm knows not to put this into immutable memroy
+ // ensure llvm knows not to put this into immutable memory
alloc.runtime_mutability = mutability;
let alloc = self.tcx.intern_const_alloc(alloc);
self.tcx.alloc_map.lock().set_id_memory(alloc_id, alloc);
length: u64,
nonoverlapping: bool,
) -> EvalResult<'tcx> {
- // Empty accesses don't need to be valid pointers, but they should still be aligned
- self.check_align(src, src_align)?;
- self.check_align(dest, dest_align)?;
if size.bytes() == 0 {
+ // Nothing to do for ZST, other than checking alignment and non-NULLness.
+ self.check_align(src, src_align)?;
+ self.check_align(dest, dest_align)?;
return Ok(());
}
let src = src.to_ptr()?;
new_relocations
};
+ // This also checks alignment.
let src_bytes = self.get_bytes_unchecked(src, size, src_align)?.as_ptr();
let dest_bytes = self.get_bytes_mut(dest, size * length, dest_align)?.as_mut_ptr();
pub fn read_bytes(&self, ptr: Scalar, size: Size) -> EvalResult<'tcx, &[u8]> {
// Empty accesses don't need to be valid pointers, but they should still be non-NULL
let align = Align::from_bytes(1, 1).unwrap();
- self.check_align(ptr, align)?;
if size.bytes() == 0 {
+ self.check_align(ptr, align)?;
return Ok(&[]);
}
self.get_bytes(ptr.to_ptr()?, size, align)
pub fn write_bytes(&mut self, ptr: Scalar, src: &[u8]) -> EvalResult<'tcx> {
// Empty accesses don't need to be valid pointers, but they should still be non-NULL
let align = Align::from_bytes(1, 1).unwrap();
- self.check_align(ptr, align)?;
if src.is_empty() {
+ self.check_align(ptr, align)?;
return Ok(());
}
let bytes = self.get_bytes_mut(ptr.to_ptr()?, Size::from_bytes(src.len() as u64), align)?;
pub fn write_repeat(&mut self, ptr: Scalar, val: u8, count: Size) -> EvalResult<'tcx> {
// Empty accesses don't need to be valid pointers, but they should still be non-NULL
let align = Align::from_bytes(1, 1).unwrap();
- self.check_align(ptr, align)?;
if count.bytes() == 0 {
+ self.check_align(ptr, align)?;
return Ok(());
}
let bytes = self.get_bytes_mut(ptr.to_ptr()?, count, align)?;
Ok(())
}
+ /// Read a *non-ZST* scalar
pub fn read_scalar(&self, ptr: Pointer, ptr_align: Align, size: Size) -> EvalResult<'tcx, ScalarMaybeUndef> {
self.check_relocation_edges(ptr, size)?; // Make sure we don't read part of a pointer as a pointer
let endianness = self.endianness();
+ // get_bytes_unchecked tests alignment
let bytes = self.get_bytes_unchecked(ptr, size, ptr_align.min(self.int_align(size)))?;
// Undef check happens *after* we established that the alignment is correct.
// We must not return Ok() for unaligned pointers!
self.read_scalar(ptr, ptr_align, self.pointer_size())
}
+ /// Write a *non-ZST* scalar
pub fn write_scalar(
&mut self,
- ptr: Scalar,
+ ptr: Pointer,
ptr_align: Align,
val: ScalarMaybeUndef,
type_size: Size,
- type_align: Align,
- signed: bool,
) -> EvalResult<'tcx> {
let endianness = self.endianness();
- self.check_align(ptr, ptr_align)?;
let val = match val {
ScalarMaybeUndef::Scalar(scalar) => scalar,
val.offset.bytes() as u128
}
- Scalar::Bits { size: 0, .. } => {
- // nothing to do for ZSTs
- assert_eq!(type_size.bytes(), 0);
- return Ok(());
- }
-
Scalar::Bits { bits, size } => {
assert_eq!(size as u64, type_size.bytes());
+ assert_eq!(truncate(bits, Size::from_bytes(size.into())), bits,
+ "Unexpected value of size {} when writing to memory", size);
bits
},
};
- let ptr = ptr.to_ptr()?;
-
{
- let dst = self.get_bytes_mut(ptr, type_size, ptr_align.min(type_align))?;
- if signed {
- write_target_int(endianness, dst, bytes as i128).unwrap();
- } else {
- write_target_uint(endianness, dst, bytes).unwrap();
- }
+ // get_bytes_mut checks alignment
+ let dst = self.get_bytes_mut(ptr, type_size, ptr_align)?;
+ write_target_uint(endianness, dst, bytes).unwrap();
}
// See if we have to also write a relocation
Ok(())
}
- pub fn write_ptr_sized_unsigned(&mut self, ptr: Pointer, ptr_align: Align, val: ScalarMaybeUndef) -> EvalResult<'tcx> {
+ pub fn write_ptr_sized(&mut self, ptr: Pointer, ptr_align: Align, val: ScalarMaybeUndef) -> EvalResult<'tcx> {
let ptr_size = self.pointer_size();
- self.write_scalar(ptr.into(), ptr_align, val, ptr_size, ptr_align, false)
+ self.write_scalar(ptr.into(), ptr_align, val, ptr_size)
}
fn int_align(&self, size: Size) -> Align {
pub fn mark_definedness(
&mut self,
- ptr: Scalar,
+ ptr: Pointer,
size: Size,
new_state: bool,
) -> EvalResult<'tcx> {
if size.bytes() == 0 {
return Ok(());
}
- let ptr = ptr.to_ptr()?;
let alloc = self.get_mut(ptr.alloc_id)?;
alloc.undef_mask.set_range(
ptr.offset,
pub trait HasMemory<'a, 'mir, 'tcx: 'a + 'mir, M: Machine<'mir, 'tcx>> {
fn memory_mut(&mut self) -> &mut Memory<'a, 'mir, 'tcx, M>;
fn memory(&self) -> &Memory<'a, 'mir, 'tcx, M>;
-
- /// Convert the value into a pointer (or a pointer-sized integer). If the value is a ByRef,
- /// this may have to perform a load.
- fn into_ptr(
- &self,
- value: Value,
- ) -> EvalResult<'tcx, ScalarMaybeUndef> {
- Ok(match value {
- Value::ByRef(ptr, align) => {
- self.memory().read_ptr_sized(ptr.to_ptr()?, align)?
- }
- Value::Scalar(ptr) |
- Value::ScalarPair(ptr, _) => ptr,
- }.into())
- }
-
- fn into_ptr_vtable_pair(
- &self,
- value: Value,
- ) -> EvalResult<'tcx, (ScalarMaybeUndef, Pointer)> {
- match value {
- Value::ByRef(ref_ptr, align) => {
- let mem = self.memory();
- let ptr = mem.read_ptr_sized(ref_ptr.to_ptr()?, align)?.into();
- let vtable = mem.read_ptr_sized(
- ref_ptr.ptr_offset(mem.pointer_size(), &mem.tcx.data_layout)?.to_ptr()?,
- align
- )?.unwrap_or_err()?.to_ptr()?;
- Ok((ptr, vtable))
- }
-
- Value::ScalarPair(ptr, vtable) => Ok((ptr, vtable.unwrap_or_err()?.to_ptr()?)),
- _ => bug!("expected ptr and vtable, got {:?}", value),
- }
- }
-
- fn into_slice(
- &self,
- value: Value,
- ) -> EvalResult<'tcx, (ScalarMaybeUndef, u64)> {
- match value {
- Value::ByRef(ref_ptr, align) => {
- let mem = self.memory();
- let ptr = mem.read_ptr_sized(ref_ptr.to_ptr()?, align)?.into();
- let len = mem.read_ptr_sized(
- ref_ptr.ptr_offset(mem.pointer_size(), &mem.tcx.data_layout)?.to_ptr()?,
- align
- )?.unwrap_or_err()?.to_bits(mem.pointer_size())? as u64;
- Ok((ptr, len))
- }
- Value::ScalarPair(ptr, val) => {
- let len = val.unwrap_or_err()?.to_bits(self.memory().pointer_size())?;
- Ok((ptr, len as u64))
- }
- Value::Scalar(_) => bug!("expected ptr and length, got {:?}", value),
- }
- }
}
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> HasMemory<'a, 'mir, 'tcx, M> for Memory<'a, 'mir, 'tcx, M> {
//! An interpreter for MIR used in CTFE and by miri
mod cast;
-mod const_eval;
mod eval_context;
mod place;
+mod operand;
mod machine;
mod memory;
mod operator;
mod step;
mod terminator;
mod traits;
+mod const_eval;
+mod validity;
pub use self::eval_context::{
- EvalContext, Frame, StackPopCleanup,
- TyAndPacked, ValTy,
+ EvalContext, Frame, StackPopCleanup, LocalValue,
};
-pub use self::place::{Place, PlaceExtra};
+pub use self::place::{Place, PlaceExtra, PlaceTy, MemPlace, MPlaceTy};
pub use self::memory::{Memory, MemoryKind, HasMemory};
mk_borrowck_eval_cx,
mk_eval_cx,
CompileTimeEvaluator,
- const_value_to_allocation_provider,
+ const_to_allocation_provider,
const_eval_provider,
- const_val_field,
+ const_field,
const_variant_index,
- value_to_const_value,
+ op_to_const,
};
pub use self::machine::Machine;
-pub use self::memory::{write_target_uint, write_target_int, read_target_uint};
-
-use rustc::ty::layout::TyLayout;
-
-pub fn sign_extend(value: u128, layout: TyLayout<'_>) -> u128 {
- let size = layout.size.bits();
- assert!(layout.abi.is_signed());
- // sign extend
- let shift = 128 - size;
- // shift the unsigned value to the left
- // and back to the right as signed (essentially fills with FF on the left)
- (((value << shift) as i128) >> shift) as u128
-}
-
-pub fn truncate(value: u128, layout: TyLayout<'_>) -> u128 {
- let size = layout.size.bits();
- let shift = 128 - size;
- // truncate (shift left to drop out leftover values, shift right to fill with zeroes)
- (value << shift) >> shift
-}
+pub use self::operand::{Value, ValTy, Operand, OpTy};
--- /dev/null
+//! Functions concerning immediate values and operands, and reading from operands.
+//! All high-level functions to read from memory work on operands as sources.
+
+use std::convert::TryInto;
+
+use rustc::mir;
+use rustc::ty::layout::{self, Align, LayoutOf, TyLayout, HasDataLayout, IntegerExt};
+use rustc_data_structures::indexed_vec::Idx;
+
+use rustc::mir::interpret::{
+ GlobalId, ConstValue, Scalar, EvalResult, Pointer, ScalarMaybeUndef, EvalErrorKind
+};
+use super::{EvalContext, Machine, MemPlace, MPlaceTy, PlaceExtra, MemoryKind};
+
+/// A `Value` represents a single immediate self-contained Rust value.
+///
+/// For optimization of a few very common cases, there is also a representation for a pair of
+/// primitive values (`ScalarPair`). It allows Miri to avoid making allocations for checked binary
+/// operations and fat pointers. This idea was taken from rustc's codegen.
+/// In particular, thanks to `ScalarPair`, arithmetic operations and casts can be entirely
+/// defined on `Value`, and do not have to work with a `Place`.
+#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
+pub enum Value {
+ Scalar(ScalarMaybeUndef),
+ ScalarPair(ScalarMaybeUndef, ScalarMaybeUndef),
+}
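The `ScalarPair` optimization described above keeps both halves of a fat pointer (data pointer plus length or vtable) as immediates rather than in memory. A simplified standalone model (hypothetical, integers standing in for `Scalar`) shows the shape:

```rust
// Simplified model of the Scalar/ScalarPair split: a slice reference is
// represented as a pair (data "pointer", length) without touching memory.
#[derive(Copy, Clone, Debug, PartialEq)]
enum Value {
    Scalar(u128),
    ScalarPair(u128, u128),
}

impl Value {
    // Analogous to `Value::new_slice`: pack pointer and length into a pair.
    fn new_slice(ptr: u128, len: u64) -> Self {
        Value::ScalarPair(ptr, len as u128)
    }

    // Analogous to `to_scalar_slice`: unpack pointer and length again.
    fn to_slice(self) -> Option<(u128, u64)> {
        match self {
            Value::ScalarPair(ptr, len) => Some((ptr, len as u64)),
            Value::Scalar(_) => None,
        }
    }
}

fn main() {
    let v = Value::new_slice(0x1000, 3);
    assert_eq!(v.to_slice(), Some((0x1000, 3)));
    assert_eq!(Value::Scalar(7).to_slice(), None);
    println!("ok");
}
```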
+
+impl<'tcx> Value {
+ pub fn new_slice(
+ val: Scalar,
+ len: u64,
+ cx: impl HasDataLayout
+ ) -> Self {
+ Value::ScalarPair(val.into(), Scalar::Bits {
+ bits: len as u128,
+ size: cx.data_layout().pointer_size.bytes() as u8,
+ }.into())
+ }
+
+ pub fn new_dyn_trait(val: Scalar, vtable: Pointer) -> Self {
+ Value::ScalarPair(val.into(), Scalar::Ptr(vtable).into())
+ }
+
+ #[inline]
+ pub fn to_scalar_or_undef(self) -> ScalarMaybeUndef {
+ match self {
+ Value::Scalar(val) => val,
+ Value::ScalarPair(..) => bug!("Got a fat pointer where a scalar was expected"),
+ }
+ }
+
+ #[inline]
+ pub fn to_scalar(self) -> EvalResult<'tcx, Scalar> {
+ self.to_scalar_or_undef().not_undef()
+ }
+
+ /// Convert the value into a pointer (or a pointer-sized integer).
+ /// Throws away the second half of a ScalarPair!
+ #[inline]
+ pub fn to_scalar_ptr(self) -> EvalResult<'tcx, Scalar> {
+ match self {
+ Value::Scalar(ptr) |
+ Value::ScalarPair(ptr, _) => ptr.not_undef(),
+ }
+ }
+
+ pub fn to_scalar_dyn_trait(self) -> EvalResult<'tcx, (Scalar, Pointer)> {
+ match self {
+ Value::ScalarPair(ptr, vtable) =>
+ Ok((ptr.not_undef()?, vtable.to_ptr()?)),
+ _ => bug!("expected ptr and vtable, got {:?}", self),
+ }
+ }
+
+ pub fn to_scalar_slice(self, cx: impl HasDataLayout) -> EvalResult<'tcx, (Scalar, u64)> {
+ match self {
+ Value::ScalarPair(ptr, val) => {
+ let len = val.to_bits(cx.data_layout().pointer_size)?;
+ Ok((ptr.not_undef()?, len as u64))
+ }
+ _ => bug!("expected ptr and length, got {:?}", self),
+ }
+ }
+}
+
+// ScalarPair needs a type to interpret, so we often have a value and a type together
+// as input for binary and cast operations.
+#[derive(Copy, Clone, Debug)]
+pub struct ValTy<'tcx> {
+ pub value: Value,
+ pub layout: TyLayout<'tcx>,
+}
+
+impl<'tcx> ::std::ops::Deref for ValTy<'tcx> {
+ type Target = Value;
+ #[inline(always)]
+ fn deref(&self) -> &Value {
+ &self.value
+ }
+}
+
+/// An `Operand` is the result of computing a `mir::Operand`. It can be immediate,
+/// or still in memory. The latter is an optimization, to delay reading that chunk of
+/// memory and to avoid having to store arbitrary-sized data here.
+#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
+pub enum Operand {
+ Immediate(Value),
+ Indirect(MemPlace),
+}
+
+impl Operand {

+ #[inline]
+ pub fn from_ptr(ptr: Pointer, align: Align) -> Self {
+ Operand::Indirect(MemPlace::from_ptr(ptr, align))
+ }
+
+ #[inline]
+ pub fn from_scalar_value(val: Scalar) -> Self {
+ Operand::Immediate(Value::Scalar(val.into()))
+ }
+
+ #[inline]
+ pub fn to_mem_place(self) -> MemPlace {
+ match self {
+ Operand::Indirect(mplace) => mplace,
+            _ => bug!("to_mem_place: expected Operand::Indirect, got {:?}", self),
+        }
+ }
+
+ #[inline]
+ pub fn to_immediate(self) -> Value {
+ match self {
+ Operand::Immediate(val) => val,
+            _ => bug!("to_immediate: expected Operand::Immediate, got {:?}", self),
+        }
+ }
+}
+
+#[derive(Copy, Clone, Debug)]
+pub struct OpTy<'tcx> {
+ pub op: Operand,
+ pub layout: TyLayout<'tcx>,
+}
+
+impl<'tcx> ::std::ops::Deref for OpTy<'tcx> {
+ type Target = Operand;
+ #[inline(always)]
+ fn deref(&self) -> &Operand {
+ &self.op
+ }
+}
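Both `ValTy` and `OpTy` use the same pattern: a value paired with its layout, with `Deref` so the wrapper can be used wherever the bare value is expected. A minimal standalone sketch of this newtype-plus-`Deref` pattern (hypothetical types, not the interpreter's):

```rust
use std::ops::Deref;

// A bare value and a wrapper that pairs it with extra metadata,
// mirroring how `ValTy` pairs a `Value` with its `TyLayout`.
#[derive(Copy, Clone, Debug, PartialEq)]
struct Value(u64);

#[derive(Copy, Clone, Debug)]
struct ValTy {
    value: Value,
    size_in_bytes: u64, // stands in for the layout metadata
}

impl Deref for ValTy {
    type Target = Value;
    #[inline(always)]
    fn deref(&self) -> &Value {
        &self.value
    }
}

fn main() {
    let vt = ValTy { value: Value(5), size_in_bytes: 8 };
    // Thanks to Deref, the fields of Value are reachable through ValTy.
    assert_eq!(vt.0, 5);
    // The metadata is still available on the wrapper itself.
    assert_eq!(vt.size_in_bytes, 8);
    println!("ok");
}
```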
+
+impl<'tcx> From<MPlaceTy<'tcx>> for OpTy<'tcx> {
+ #[inline(always)]
+ fn from(mplace: MPlaceTy<'tcx>) -> Self {
+ OpTy {
+ op: Operand::Indirect(*mplace),
+ layout: mplace.layout
+ }
+ }
+}
+
+impl<'tcx> From<ValTy<'tcx>> for OpTy<'tcx> {
+ #[inline(always)]
+ fn from(val: ValTy<'tcx>) -> Self {
+ OpTy {
+ op: Operand::Immediate(val.value),
+ layout: val.layout
+ }
+ }
+}
+
+impl<'tcx> OpTy<'tcx> {
+ #[inline]
+ pub fn from_ptr(ptr: Pointer, align: Align, layout: TyLayout<'tcx>) -> Self {
+ OpTy { op: Operand::from_ptr(ptr, align), layout }
+ }
+
+ #[inline]
+ pub fn from_aligned_ptr(ptr: Pointer, layout: TyLayout<'tcx>) -> Self {
+ OpTy { op: Operand::from_ptr(ptr, layout.align), layout }
+ }
+
+ #[inline]
+ pub fn from_scalar_value(val: Scalar, layout: TyLayout<'tcx>) -> Self {
+ OpTy { op: Operand::Immediate(Value::Scalar(val.into())), layout }
+ }
+}
+
+// Use the existing layout if given (but sanity check in debug mode),
+// or compute the layout.
+#[inline(always)]
+fn from_known_layout<'tcx>(
+ layout: Option<TyLayout<'tcx>>,
+ compute: impl FnOnce() -> EvalResult<'tcx, TyLayout<'tcx>>
+) -> EvalResult<'tcx, TyLayout<'tcx>> {
+ match layout {
+ None => compute(),
+ Some(layout) => {
+ if cfg!(debug_assertions) {
+ let layout2 = compute()?;
+ assert_eq!(layout.details, layout2.details,
+ "Mismatch in layout of supposedly equal-layout types {:?} and {:?}",
+ layout.ty, layout2.ty);
+ }
+ Ok(layout)
+ }
+ }
+}
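`from_known_layout` trades a recomputation for a debug-mode consistency check. The underlying pattern, accepting an optional cached value and cross-checking it against the expensive computation in debug builds, can be sketched generically (hypothetical helper, not part of the interpreter):

```rust
// Generic sketch of the "use cached value, but verify it in debug builds"
// pattern used by `from_known_layout`. `compute` runs on a cache miss, or
// additionally in debug builds to cross-check the cached value.
fn from_cached<T: PartialEq + std::fmt::Debug>(
    cached: Option<T>,
    compute: impl FnOnce() -> T,
) -> T {
    match cached {
        None => compute(),
        Some(value) => {
            if cfg!(debug_assertions) {
                let recomputed = compute();
                assert_eq!(value, recomputed, "cache out of sync");
            }
            value
        }
    }
}

fn main() {
    // Cache miss: compute is invoked.
    assert_eq!(from_cached(None, || 42), 42);
    // Cache hit: the cached value is returned (and verified in debug builds).
    assert_eq!(from_cached(Some(7), || 7), 7);
    println!("ok");
}
```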
+
+impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
+    /// Try reading a value in memory; this is particularly interesting for ScalarPair.
+ /// Return None if the layout does not permit loading this as a value.
+ fn try_read_value_from_mplace(
+ &self,
+ mplace: MPlaceTy<'tcx>,
+ ) -> EvalResult<'tcx, Option<Value>> {
+ if mplace.extra != PlaceExtra::None {
+ return Ok(None);
+ }
+ let (ptr, ptr_align) = mplace.to_scalar_ptr_align();
+
+ if mplace.layout.size.bytes() == 0 {
+ // Not all ZSTs have a layout we would handle below, so just short-circuit them
+ // all here.
+ self.memory.check_align(ptr, ptr_align)?;
+ return Ok(Some(Value::Scalar(Scalar::zst().into())));
+ }
+
+ let ptr = ptr.to_ptr()?;
+ match mplace.layout.abi {
+ layout::Abi::Scalar(..) => {
+ let scalar = self.memory.read_scalar(ptr, ptr_align, mplace.layout.size)?;
+ Ok(Some(Value::Scalar(scalar)))
+ }
+ layout::Abi::ScalarPair(ref a, ref b) => {
+ let (a, b) = (&a.value, &b.value);
+ let (a_size, b_size) = (a.size(self), b.size(self));
+ let a_ptr = ptr;
+ let b_offset = a_size.abi_align(b.align(self));
+ assert!(b_offset.bytes() > 0); // we later use the offset to test which field to use
+ let b_ptr = ptr.offset(b_offset, self)?.into();
+ let a_val = self.memory.read_scalar(a_ptr, ptr_align, a_size)?;
+ let b_val = self.memory.read_scalar(b_ptr, ptr_align, b_size)?;
+ Ok(Some(Value::ScalarPair(a_val, b_val)))
+ }
+ _ => Ok(None),
+ }
+ }
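The `b_offset` in the `ScalarPair` arm above is the first component's size rounded up to the second component's alignment (what `Size::abi_align` does). The arithmetic can be sketched on plain integers, assuming power-of-two alignments as ABI alignments always are:

```rust
// Round `size` up to the next multiple of `align`, mirroring what
// `Size::abi_align` computes for the second half of a ScalarPair.
// Assumes `align` is a power of two, as ABI alignments always are.
fn abi_align(size: u64, align: u64) -> u64 {
    debug_assert!(align.is_power_of_two());
    (size + align - 1) & !(align - 1)
}
```

For a pair laid out like `(u8, u32)` this yields offset 4 for the second component, which is also why the code can assert `b_offset.bytes() > 0` and later tell the two fields apart by checking `offset == 0`.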
+
+ /// Try returning an immediate value for the operand.
+ /// If the layout does not permit loading this as a value, return where in memory
+ /// we can find the data.
+ /// Note that for a given layout, this operation will either always fail or always
+ /// succeed! Whether it succeeds depends on whether the layout can be represented
+ /// in a `Value`, not on which data is stored there currently.
+ pub(super) fn try_read_value(
+ &self,
+ src: OpTy<'tcx>,
+ ) -> EvalResult<'tcx, Result<Value, MemPlace>> {
+ Ok(match src.try_as_mplace() {
+ Ok(mplace) => {
+ if let Some(val) = self.try_read_value_from_mplace(mplace)? {
+ Ok(val)
+ } else {
+ Err(*mplace)
+ }
+ },
+ Err(val) => Ok(val),
+ })
+ }
+
+ /// Read a value from a place, asserting that this is possible with the given layout.
+ #[inline(always)]
+ pub fn read_value(&self, op: OpTy<'tcx>) -> EvalResult<'tcx, ValTy<'tcx>> {
+ if let Ok(value) = self.try_read_value(op)? {
+ Ok(ValTy { value, layout: op.layout })
+ } else {
+ bug!("primitive read failed for type: {:?}", op.layout.ty);
+ }
+ }
+
+ /// Read a scalar from a place
+ pub fn read_scalar(&self, op: OpTy<'tcx>) -> EvalResult<'tcx, ScalarMaybeUndef> {
+ match *self.read_value(op)? {
+ Value::ScalarPair(..) => bug!("got ScalarPair for type: {:?}", op.layout.ty),
+ Value::Scalar(val) => Ok(val),
+ }
+ }
+
+ pub fn uninit_operand(&mut self, layout: TyLayout<'tcx>) -> EvalResult<'tcx, Operand> {
+ // This decides which types we will use the Immediate optimization for, and hence should
+ // match what `try_read_value` and `eval_place_to_op` support.
+ if layout.is_zst() {
+ return Ok(Operand::Immediate(Value::Scalar(Scalar::zst().into())));
+ }
+
+ Ok(match layout.abi {
+ layout::Abi::Scalar(..) =>
+ Operand::Immediate(Value::Scalar(ScalarMaybeUndef::Undef)),
+ layout::Abi::ScalarPair(..) =>
+ Operand::Immediate(Value::ScalarPair(
+ ScalarMaybeUndef::Undef,
+ ScalarMaybeUndef::Undef,
+ )),
+ _ => {
+ trace!("Forcing allocation for local of type {:?}", layout.ty);
+ Operand::Indirect(
+ *self.allocate(layout, MemoryKind::Stack)?
+ )
+ }
+ })
+ }
+
+ /// Projection functions
+ pub fn operand_field(
+ &self,
+ op: OpTy<'tcx>,
+ field: u64,
+ ) -> EvalResult<'tcx, OpTy<'tcx>> {
+ let base = match op.try_as_mplace() {
+ Ok(mplace) => {
+ // The easy case
+ let field = self.mplace_field(mplace, field)?;
+ return Ok(field.into());
+ },
+ Err(value) => value
+ };
+
+ let field = field.try_into().unwrap();
+ let field_layout = op.layout.field(self, field)?;
+ if field_layout.size.bytes() == 0 {
+ let val = Value::Scalar(Scalar::zst().into());
+ return Ok(OpTy { op: Operand::Immediate(val), layout: field_layout });
+ }
+ let offset = op.layout.fields.offset(field);
+ let value = match base {
+ // the field covers the entire type
+ _ if offset.bytes() == 0 && field_layout.size == op.layout.size => base,
+ // extract fields from types with `ScalarPair` ABI
+ Value::ScalarPair(a, b) => {
+ let val = if offset.bytes() == 0 { a } else { b };
+ Value::Scalar(val)
+ },
+ Value::Scalar(val) =>
+ bug!("field access on non aggregate {:#?}, {:#?}", val, op.layout),
+ };
+ Ok(OpTy { op: Operand::Immediate(value), layout: field_layout })
+ }
+
+ pub(super) fn operand_downcast(
+ &self,
+ op: OpTy<'tcx>,
+ variant: usize,
+ ) -> EvalResult<'tcx, OpTy<'tcx>> {
+ // Downcasts only change the layout
+ Ok(match op.try_as_mplace() {
+ Ok(mplace) => {
+ self.mplace_downcast(mplace, variant)?.into()
+ },
+ Err(..) => {
+ let layout = op.layout.for_variant(self, variant);
+ OpTy { layout, ..op }
+ }
+ })
+ }
+
+ // Take an operand, representing a pointer, and dereference it -- that
+ // will always be a MemPlace.
+ pub(super) fn deref_operand(
+ &self,
+ src: OpTy<'tcx>,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ let val = self.read_value(src)?;
+ trace!("deref to {} on {:?}", val.layout.ty, val);
+ Ok(self.ref_to_mplace(val)?)
+ }
+
+ pub fn operand_projection(
+ &self,
+ base: OpTy<'tcx>,
+ proj_elem: &mir::PlaceElem<'tcx>,
+ ) -> EvalResult<'tcx, OpTy<'tcx>> {
+ use rustc::mir::ProjectionElem::*;
+ Ok(match *proj_elem {
+ Field(field, _) => self.operand_field(base, field.index() as u64)?,
+ Downcast(_, variant) => self.operand_downcast(base, variant)?,
+ Deref => self.deref_operand(base)?.into(),
+ // The rest should only occur as mplace, we do not use Immediates for types
+ // allowing such operations. This matches place_projection forcing an allocation.
+ Subslice { .. } | ConstantIndex { .. } | Index(_) => {
+ let mplace = base.to_mem_place();
+ self.mplace_projection(mplace, proj_elem)?.into()
+ }
+ })
+ }
+
+ // Evaluate a place with the goal of reading from it. This lets us sometimes
+ // avoid allocations. If you already know the layout, you can pass it in
+ // to avoid looking it up again.
+ fn eval_place_to_op(
+ &mut self,
+ mir_place: &mir::Place<'tcx>,
+ layout: Option<TyLayout<'tcx>>,
+ ) -> EvalResult<'tcx, OpTy<'tcx>> {
+ use rustc::mir::Place::*;
+ Ok(match *mir_place {
+ Local(mir::RETURN_PLACE) => return err!(ReadFromReturnPointer),
+ Local(local) => {
+ let op = *self.frame().locals[local].access()?;
+ let layout = from_known_layout(layout,
+ || self.layout_of_local(self.cur_frame(), local))?;
+ OpTy { op, layout }
+ },
+
+ Projection(ref proj) => {
+ let op = self.eval_place_to_op(&proj.base, None)?;
+ self.operand_projection(op, &proj.elem)?
+ }
+
+ // Everything else is an mplace, so we just call `eval_place`.
+ // Note that getting an mplace for a static always requires `&mut`,
+ // so this does not "cost" us anything in terms of mutability.
+ Promoted(_) | Static(_) => {
+ let place = self.eval_place(mir_place)?;
+ place.to_mem_place().into()
+ }
+ })
+ }
+
+ /// Evaluate the operand, returning a place where you can then find the data.
+ /// If you already know the layout, you can save some table lookups
+ /// by passing it in here.
+ pub fn eval_operand(
+ &mut self,
+ mir_op: &mir::Operand<'tcx>,
+ layout: Option<TyLayout<'tcx>>,
+ ) -> EvalResult<'tcx, OpTy<'tcx>> {
+ use rustc::mir::Operand::*;
+ let op = match *mir_op {
+ // FIXME: do some more logic on `move` to invalidate the old location
+ Copy(ref place) |
+ Move(ref place) =>
+ self.eval_place_to_op(place, layout)?,
+
+ Constant(ref constant) => {
+ let layout = from_known_layout(layout, || {
+ let ty = self.monomorphize(mir_op.ty(self.mir(), *self.tcx), self.substs());
+ self.layout_of(ty)
+ })?;
+ let op = self.const_value_to_op(constant.literal.val)?;
+ OpTy { op, layout }
+ }
+ };
+ trace!("{:?}: {:?}", mir_op, *op);
+ Ok(op)
+ }
+
+ /// Evaluate a bunch of operands at once
+ pub(crate) fn eval_operands(
+ &mut self,
+ ops: &[mir::Operand<'tcx>],
+ ) -> EvalResult<'tcx, Vec<OpTy<'tcx>>> {
+ ops.into_iter()
+ .map(|op| self.eval_operand(op, None))
+ .collect()
+ }
+
+ // Also used e.g. when miri runs into a constant.
+ // Unfortunately, this needs an `&mut` to be able to allocate a copy of a `ByRef`
+ // constant. This bleeds up to `eval_operand` needing `&mut`.
+ pub fn const_value_to_op(
+ &mut self,
+ val: ConstValue<'tcx>,
+ ) -> EvalResult<'tcx, Operand> {
+ match val {
+ ConstValue::Unevaluated(def_id, substs) => {
+ let instance = self.resolve(def_id, substs)?;
+ self.global_to_op(GlobalId {
+ instance,
+ promoted: None,
+ })
+ }
+ ConstValue::ByRef(alloc, offset) => {
+ // FIXME: Allocate new AllocId for all constants inside
+ let id = self.memory.allocate_value(alloc.clone(), MemoryKind::Stack)?;
+ Ok(Operand::from_ptr(Pointer::new(id, offset), alloc.align))
+ },
+ ConstValue::ScalarPair(a, b) =>
+ Ok(Operand::Immediate(Value::ScalarPair(a.into(), b))),
+ ConstValue::Scalar(x) =>
+ Ok(Operand::Immediate(Value::Scalar(x.into()))),
+ }
+ }
+
+ pub(super) fn global_to_op(&mut self, gid: GlobalId<'tcx>) -> EvalResult<'tcx, Operand> {
+ let cv = self.const_eval(gid)?;
+ self.const_value_to_op(cv.val)
+ }
+
+ /// We cannot do self.read_value(self.eval_operand(...)) because eval_operand takes &mut self,
+ /// so this helper avoids an unnecessary intermediate let.
+ #[inline]
+ pub fn eval_operand_and_read_value(
+ &mut self,
+ op: &mir::Operand<'tcx>,
+ layout: Option<TyLayout<'tcx>>,
+ ) -> EvalResult<'tcx, ValTy<'tcx>> {
+ let op = self.eval_operand(op, layout)?;
+ self.read_value(op)
+ }
+
+ /// Reads a tag and produces the corresponding variant index.
+ pub fn read_discriminant_as_variant_index(
+ &self,
+ rval: OpTy<'tcx>,
+ ) -> EvalResult<'tcx, usize> {
+ match rval.layout.variants {
+ layout::Variants::Single { index } => Ok(index),
+ layout::Variants::Tagged { .. } => {
+ let discr_val = self.read_discriminant_value(rval)?;
+ rval.layout.ty
+ .ty_adt_def()
+ .expect("tagged layout for non adt")
+ .discriminants(self.tcx.tcx)
+ .position(|var| var.val == discr_val)
+ .ok_or_else(|| EvalErrorKind::InvalidDiscriminant.into())
+ }
+ layout::Variants::NicheFilling { .. } => {
+ let discr_val = self.read_discriminant_value(rval)?;
+ assert_eq!(discr_val as usize as u128, discr_val);
+ Ok(discr_val as usize)
+ },
+ }
+ }
+
+ pub fn read_discriminant_value(
+ &self,
+ rval: OpTy<'tcx>,
+ ) -> EvalResult<'tcx, u128> {
+ trace!("read_discriminant_value {:#?}", rval.layout);
+ if rval.layout.abi == layout::Abi::Uninhabited {
+ return err!(Unreachable);
+ }
+
+ match rval.layout.variants {
+ layout::Variants::Single { index } => {
+ let discr_val = rval.layout.ty.ty_adt_def().map_or(
+ index as u128,
+ |def| def.discriminant_for_variant(*self.tcx, index).val);
+ return Ok(discr_val);
+ }
+ layout::Variants::Tagged { .. } |
+ layout::Variants::NicheFilling { .. } => {},
+ }
+ let discr_op = self.operand_field(rval, 0)?;
+ let discr_val = self.read_value(discr_op)?;
+ trace!("discr value: {:?}", discr_val);
+ let raw_discr = discr_val.to_scalar()?;
+ Ok(match rval.layout.variants {
+ layout::Variants::Single { .. } => bug!(),
+ // FIXME: We should catch invalid discriminants here!
+ layout::Variants::Tagged { .. } => {
+ if discr_val.layout.ty.is_signed() {
+ let i = raw_discr.to_bits(discr_val.layout.size)? as i128;
+ // going from layout tag type to typeck discriminant type
+ // requires first sign extending with the layout discriminant
+ let shift = 128 - discr_val.layout.size.bits();
+ let sexted = (i << shift) >> shift;
+ // and then zeroing with the typeck discriminant type
+ let discr_ty = rval.layout.ty
+ .ty_adt_def().expect("tagged layout corresponds to adt")
+ .repr
+ .discr_type();
+ let discr_ty = layout::Integer::from_attr(self.tcx.tcx, discr_ty);
+ let shift = 128 - discr_ty.size().bits();
+ let truncatee = sexted as u128;
+ (truncatee << shift) >> shift
+ } else {
+ raw_discr.to_bits(discr_val.layout.size)?
+ }
+ },
+ layout::Variants::NicheFilling {
+ dataful_variant,
+ ref niche_variants,
+ niche_start,
+ ..
+ } => {
+ let variants_start = *niche_variants.start() as u128;
+ let variants_end = *niche_variants.end() as u128;
+ match raw_discr {
+ Scalar::Ptr(_) => {
+ assert!(niche_start == 0);
+ assert!(variants_start == variants_end);
+ dataful_variant as u128
+ },
+ Scalar::Bits { bits: raw_discr, size } => {
+ assert_eq!(size as u64, discr_val.layout.size.bytes());
+ let discr = raw_discr.wrapping_sub(niche_start)
+ .wrapping_add(variants_start);
+ if variants_start <= discr && discr <= variants_end {
+ discr
+ } else {
+ dataful_variant as u128
+ }
+ },
+ }
+ }
+ })
+ }
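The shift dance in the `Tagged` arm above first sign-extends the raw tag from the layout's tag size up to 128 bits, then truncates back down to the typeck discriminant type's size. In isolation, with sizes in bits (an illustrative helper, not the interpreter's API):

```rust
// Sign-extend a `bits`-bit value stored in the low bits of a u128, then
// truncate it to `target_bits`, mirroring the Tagged-discriminant decoding.
fn sext_then_trunc(raw: u128, bits: u32, target_bits: u32) -> u128 {
    // Sign extension: shift the value up to the top bits, then use an
    // arithmetic (signed) shift to bring it back down.
    let shift = 128 - bits;
    let sexted = ((raw as i128) << shift) >> shift;
    // Truncation to the target width: shift the high bits out with a
    // logical (unsigned) shift.
    let shift = 128 - target_bits;
    ((sexted as u128) << shift) >> shift
}
```

For example, a tag stored as `0xFF` in one byte (i.e. `-1`) decoded into a 16-bit discriminant type becomes `0xFFFF`, while a non-negative tag passes through unchanged.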
+
+}
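The `NicheFilling` arm decodes a variant index from the niche value with wrapping arithmetic: values that land in `[variants_start, variants_end]` after the shift name a tagged variant, and everything else means the dataful variant. A self-contained sketch (parameter names follow the layout fields, but this is not the interpreter's code):

```rust
// Decode a niche-filled discriminant: map the raw bits to a variant index,
// falling back to the dataful variant when the value is out of range.
fn decode_niche(
    raw: u128,
    niche_start: u128,
    variants_start: u128,
    variants_end: u128,
    dataful_variant: u128,
) -> u128 {
    let discr = raw.wrapping_sub(niche_start).wrapping_add(variants_start);
    if variants_start <= discr && discr <= variants_end {
        discr
    } else {
        dataful_variant
    }
}
```

For an `Option<&T>`-like layout (niche variant 0, dataful variant 1, `niche_start` 0), a raw value of 0 decodes to variant 0 and any non-null bit pattern falls through to the dataful variant.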
use rustc::mir;
-use rustc::ty::{self, Ty, layout};
+use rustc::ty::{self, layout::{self, TyLayout}};
use syntax::ast::FloatTy;
-use rustc::ty::layout::{LayoutOf, TyLayout};
use rustc_apfloat::ieee::{Double, Single};
use rustc_apfloat::Float;
+use rustc::mir::interpret::{EvalResult, Scalar};
-use super::{EvalContext, Place, Machine, ValTy};
+use super::{EvalContext, PlaceTy, Value, Machine, ValTy};
-use rustc::mir::interpret::{EvalResult, Scalar, Value};
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
- fn binop_with_overflow(
- &self,
- op: mir::BinOp,
- left: ValTy<'tcx>,
- right: ValTy<'tcx>,
- ) -> EvalResult<'tcx, (Scalar, bool)> {
- let left_val = self.value_to_scalar(left)?;
- let right_val = self.value_to_scalar(right)?;
- self.binary_op(op, left_val, left.ty, right_val, right.ty)
- }
-
/// Applies the binary operation `op` to the two operands and writes a tuple of the result
/// and a boolean signifying the potential overflow to the destination.
- pub fn intrinsic_with_overflow(
+ pub fn binop_with_overflow(
&mut self,
op: mir::BinOp,
left: ValTy<'tcx>,
right: ValTy<'tcx>,
- dest: Place,
- dest_ty: Ty<'tcx>,
+ dest: PlaceTy<'tcx>,
) -> EvalResult<'tcx> {
- let (val, overflowed) = self.binop_with_overflow(op, left, right)?;
+ let (val, overflowed) = self.binary_op(op, left, right)?;
let val = Value::ScalarPair(val.into(), Scalar::from_bool(overflowed).into());
- let valty = ValTy {
- value: val,
- ty: dest_ty,
- };
- self.write_value(valty, dest)
+ self.write_value(val, dest)
}
/// Applies the binary operation `op` to the arguments and writes the result to the
- /// destination. Returns `true` if the operation overflowed.
- pub fn intrinsic_overflowing(
+ /// destination.
+ pub fn binop_ignore_overflow(
&mut self,
op: mir::BinOp,
left: ValTy<'tcx>,
right: ValTy<'tcx>,
- dest: Place,
- dest_ty: Ty<'tcx>,
- ) -> EvalResult<'tcx, bool> {
- let (val, overflowed) = self.binop_with_overflow(op, left, right)?;
- self.write_scalar(dest, val, dest_ty)?;
- Ok(overflowed)
+ dest: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ let (val, _overflowed) = self.binary_op(op, left, right)?;
+ self.write_scalar(val, dest)
}
}
pub fn binary_op(
&self,
bin_op: mir::BinOp,
- left: Scalar,
- left_ty: Ty<'tcx>,
- right: Scalar,
- right_ty: Ty<'tcx>,
+ ValTy { value: left, layout: left_layout }: ValTy<'tcx>,
+ ValTy { value: right, layout: right_layout }: ValTy<'tcx>,
) -> EvalResult<'tcx, (Scalar, bool)> {
use rustc::mir::BinOp::*;
- let left_layout = self.layout_of(left_ty)?;
- let right_layout = self.layout_of(right_ty)?;
+ let left = left.to_scalar()?;
+ let right = right.to_scalar()?;
let left_kind = match left_layout.abi {
layout::Abi::Scalar(ref scalar) => scalar.value,
- _ => return err!(TypeNotPrimitive(left_ty)),
+ _ => return err!(TypeNotPrimitive(left_layout.ty)),
};
let right_kind = match right_layout.abi {
layout::Abi::Scalar(ref scalar) => scalar.value,
- _ => return err!(TypeNotPrimitive(right_ty)),
+ _ => return err!(TypeNotPrimitive(right_layout.ty)),
};
trace!("Running binary op {:?}: {:?} ({:?}), {:?} ({:?})", bin_op, left, left_kind, right, right_kind);
// I: Handle operations that support pointers
if !left_kind.is_float() && !right_kind.is_float() {
- if let Some(handled) = M::try_ptr_op(self, bin_op, left, left_ty, right, right_ty)? {
+ if let Some(handled) =
+ M::try_ptr_op(self, bin_op, left, left_layout, right, right_layout)?
+ {
return Ok(handled);
}
}
}
}
- if let ty::TyFloat(fty) = left_ty.sty {
+ if let ty::TyFloat(fty) = left_layout.ty.sty {
macro_rules! float_math {
($ty:path, $size:expr) => {{
let l = <$ty>::from_bits(l);
}
}
- let size = self.layout_of(left_ty).unwrap().size.bytes() as u8;
+ let size = left_layout.size.bytes() as u8;
// only ints left
let val = match bin_op {
"unimplemented binary op {:?}: {:?} ({:?}), {:?} ({:?})",
bin_op,
left,
- left_ty,
+ left_layout.ty,
right,
- right_ty,
+ right_layout.ty,
);
return err!(Unimplemented(msg));
}
+//! Computations on places -- field projections, going from mir::Place, and writing
+//! into a place.
+//! All high-level functions to write to memory work on places as destinations.
+
+use std::hash::{Hash, Hasher};
+use std::convert::TryFrom;
+
use rustc::mir;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc::ty::layout::{self, Align, LayoutOf, TyLayout};
+use rustc::ty::{self, Ty};
+use rustc::ty::layout::{self, Size, Align, LayoutOf, TyLayout, HasDataLayout};
use rustc_data_structures::indexed_vec::Idx;
-use rustc::mir::interpret::{GlobalId, Value, Scalar, EvalResult, Pointer, ScalarMaybeUndef};
-use super::{EvalContext, Machine, ValTy};
-use interpret::memory::HasMemory;
+use rustc::mir::interpret::{
+ GlobalId, Scalar, EvalResult, Pointer, ScalarMaybeUndef
+};
+use super::{EvalContext, Machine, Value, ValTy, Operand, OpTy, MemoryKind};
+
+#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
+pub struct MemPlace {
+ /// A place may have an integral pointer for ZSTs, since it might
+ /// be turned back into a reference before ever being dereferenced.
+ /// However, it may never be undef.
+ pub ptr: Scalar,
+ pub align: Align,
+ pub extra: PlaceExtra,
+}
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub enum Place {
/// A place referring to a value allocated in the `Memory` system.
- Ptr {
- /// A place may have an invalid (integral or undef) pointer,
- /// since it might be turned back into a reference
- /// before ever being dereferenced.
- ptr: ScalarMaybeUndef,
- align: Align,
- extra: PlaceExtra,
- },
+ Ptr(MemPlace),
- /// A place referring to a value on the stack. Represented by a stack frame index paired with
- /// a Mir local index.
- Local { frame: usize, local: mir::Local },
+ /// To support alloc-free locals, we are able to write directly to a local.
+ /// (Without that optimization, we'd just always be a `MemPlace`.)
+ Local {
+ frame: usize,
+ local: mir::Local,
+ },
}
+// Extra information for fat pointers / places
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub enum PlaceExtra {
None,
Length(u64),
Vtable(Pointer),
- DowncastVariant(usize),
}
-impl<'tcx> Place {
- /// Produces a Place that will error if attempted to be read from
- pub fn undef() -> Self {
- Self::from_scalar_ptr(ScalarMaybeUndef::Undef, Align::from_bytes(1, 1).unwrap())
+#[derive(Copy, Clone, Debug)]
+pub struct PlaceTy<'tcx> {
+ place: Place,
+ pub layout: TyLayout<'tcx>,
+}
+
+impl<'tcx> ::std::ops::Deref for PlaceTy<'tcx> {
+ type Target = Place;
+ #[inline(always)]
+ fn deref(&self) -> &Place {
+ &self.place
+ }
+}
+
+/// A MemPlace with its layout. Constructing it is only possible in this module.
+#[derive(Copy, Clone, Debug)]
+pub struct MPlaceTy<'tcx> {
+ mplace: MemPlace,
+ pub layout: TyLayout<'tcx>,
+}
+
+impl<'tcx> ::std::ops::Deref for MPlaceTy<'tcx> {
+ type Target = MemPlace;
+ #[inline(always)]
+ fn deref(&self) -> &MemPlace {
+ &self.mplace
}
+}
+
+impl<'tcx> From<MPlaceTy<'tcx>> for PlaceTy<'tcx> {
+ #[inline(always)]
+ fn from(mplace: MPlaceTy<'tcx>) -> Self {
+ PlaceTy {
+ place: Place::Ptr(mplace.mplace),
+ layout: mplace.layout
+ }
+ }
+}
- pub fn from_scalar_ptr(ptr: ScalarMaybeUndef, align: Align) -> Self {
- Place::Ptr {
+impl MemPlace {
+ #[inline(always)]
+ pub fn from_scalar_ptr(ptr: Scalar, align: Align) -> Self {
+ MemPlace {
ptr,
align,
extra: PlaceExtra::None,
}
}
+ #[inline(always)]
+ pub fn from_ptr(ptr: Pointer, align: Align) -> Self {
+ Self::from_scalar_ptr(ptr.into(), align)
+ }
+
+ #[inline(always)]
+ pub fn to_scalar_ptr_align(self) -> (Scalar, Align) {
+ assert_eq!(self.extra, PlaceExtra::None);
+ (self.ptr, self.align)
+ }
+
+ /// Extract the ptr part of the mplace
+ #[inline(always)]
+ pub fn to_ptr(self) -> EvalResult<'tcx, Pointer> {
+ // At this point, we forget about the alignment information -- the place has been turned into a reference,
+ // and no matter where it came from, it now must be aligned.
+ self.to_scalar_ptr_align().0.to_ptr()
+ }
+
+ /// Turn a mplace into a (thin or fat) pointer, as a reference, pointing to the same space.
+ /// This is the inverse of `ref_to_mplace`.
+ pub fn to_ref(self, cx: impl HasDataLayout) -> Value {
+ // We ignore the alignment of the place here -- special handling for packed structs ends
+ // at the `&` operator.
+ match self.extra {
+ PlaceExtra::None => Value::Scalar(self.ptr.into()),
+ PlaceExtra::Length(len) => Value::new_slice(self.ptr.into(), len, cx),
+ PlaceExtra::Vtable(vtable) => Value::new_dyn_trait(self.ptr.into(), vtable),
+ }
+ }
+}
+
+impl<'tcx> MPlaceTy<'tcx> {
+ #[inline]
+ fn from_aligned_ptr(ptr: Pointer, layout: TyLayout<'tcx>) -> Self {
+ MPlaceTy { mplace: MemPlace::from_ptr(ptr, layout.align), layout }
+ }
+
+ #[inline]
+ pub(super) fn len(self) -> u64 {
+ // Sanity check
+ let ty_len = match self.layout.fields {
+ layout::FieldPlacement::Array { count, .. } => count,
+ _ => bug!("Length for non-array layout {:?} requested", self.layout),
+ };
+ if let PlaceExtra::Length(len) = self.extra {
+ len
+ } else {
+ ty_len
+ }
+ }
+}
+
+// Validation needs to hash MPlaceTy, but we cannot hash Layout -- so we just hash the type
+impl<'tcx> Hash for MPlaceTy<'tcx> {
+ fn hash<H: Hasher>(&self, state: &mut H) {
+ self.mplace.hash(state);
+ self.layout.ty.hash(state);
+ }
+}
+impl<'tcx> PartialEq for MPlaceTy<'tcx> {
+ fn eq(&self, other: &Self) -> bool {
+ self.mplace == other.mplace && self.layout.ty == other.layout.ty
+ }
+}
+impl<'tcx> Eq for MPlaceTy<'tcx> {}
+
+impl<'tcx> OpTy<'tcx> {
+ #[inline(always)]
+ pub fn try_as_mplace(self) -> Result<MPlaceTy<'tcx>, Value> {
+ match *self {
+ Operand::Indirect(mplace) => Ok(MPlaceTy { mplace, layout: self.layout }),
+ Operand::Immediate(value) => Err(value),
+ }
+ }
+
+ #[inline(always)]
+ pub fn to_mem_place(self) -> MPlaceTy<'tcx> {
+ self.try_as_mplace().unwrap()
+ }
+}
+
+impl<'tcx> Place {
+ /// Produces a Place that will error if attempted to be read from or written to
+ #[inline]
+ pub fn null(cx: impl HasDataLayout) -> Self {
+ Self::from_scalar_ptr(Scalar::ptr_null(cx), Align::from_bytes(1, 1).unwrap())
+ }
+
+ #[inline]
+ pub fn from_scalar_ptr(ptr: Scalar, align: Align) -> Self {
+ Place::Ptr(MemPlace::from_scalar_ptr(ptr, align))
+ }
+
+ #[inline]
pub fn from_ptr(ptr: Pointer, align: Align) -> Self {
- Self::from_scalar_ptr(ScalarMaybeUndef::Scalar(ptr.into()), align)
+ Place::Ptr(MemPlace::from_ptr(ptr, align))
}
- pub fn to_ptr_align_extra(self) -> (ScalarMaybeUndef, Align, PlaceExtra) {
+ #[inline]
+ pub fn to_mem_place(self) -> MemPlace {
match self {
- Place::Ptr { ptr, align, extra } => (ptr, align, extra),
- _ => bug!("to_ptr_and_extra: expected Place::Ptr, got {:?}", self),
+ Place::Ptr(mplace) => mplace,
+ _ => bug!("to_mem_place: expected Place::Ptr, got {:?}", self),
}
}
- pub fn to_ptr_align(self) -> (ScalarMaybeUndef, Align) {
- let (ptr, align, _extra) = self.to_ptr_align_extra();
- (ptr, align)
+ #[inline]
+ pub fn to_scalar_ptr_align(self) -> (Scalar, Align) {
+ self.to_mem_place().to_scalar_ptr_align()
}
+ #[inline]
pub fn to_ptr(self) -> EvalResult<'tcx, Pointer> {
- // At this point, we forget about the alignment information -- the place has been turned into a reference,
- // and no matter where it came from, it now must be aligned.
- self.to_ptr_align().0.unwrap_or_err()?.to_ptr()
- }
-
- pub(super) fn elem_ty_and_len(
- self,
- ty: Ty<'tcx>,
- tcx: TyCtxt<'_, 'tcx, '_>
- ) -> (Ty<'tcx>, u64) {
- match ty.sty {
- ty::TyArray(elem, n) => (elem, n.unwrap_usize(tcx)),
-
- ty::TySlice(elem) => {
- match self {
- Place::Ptr { extra: PlaceExtra::Length(len), .. } => (elem, len),
- _ => {
- bug!(
- "elem_ty_and_len of a TySlice given non-slice place: {:?}",
- self
- )
- }
- }
- }
+ self.to_mem_place().to_ptr()
+ }
+}
- _ => bug!("elem_ty_and_len expected array or slice, got {:?}", ty),
- }
+impl<'tcx> PlaceTy<'tcx> {
+ /// Produces a Place that will error if attempted to be read from or written to
+ #[inline]
+ pub fn null(cx: impl HasDataLayout, layout: TyLayout<'tcx>) -> Self {
+ PlaceTy { place: Place::from_scalar_ptr(Scalar::ptr_null(cx), layout.align), layout }
+ }
+
+ #[inline]
+ pub fn to_mem_place(self) -> MPlaceTy<'tcx> {
+ MPlaceTy { mplace: self.place.to_mem_place(), layout: self.layout }
}
}
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
- /// Reads a value from the place without going through the intermediate step of obtaining
- /// a `miri::Place`
- pub fn try_read_place(
+ /// Take a value, which represents a (thin or fat) reference, and make it a place.
+ /// Alignment is just based on the type. This is the inverse of `MemPlace::to_ref`.
+ pub fn ref_to_mplace(
+ &self, val: ValTy<'tcx>
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ let pointee_type = val.layout.ty.builtin_deref(true).unwrap().ty;
+ let layout = self.layout_of(pointee_type)?;
+ let mplace = match self.tcx.struct_tail(pointee_type).sty {
+ ty::TyDynamic(..) => {
+ let (ptr, vtable) = val.to_scalar_dyn_trait()?;
+ MemPlace {
+ ptr,
+ align: layout.align,
+ extra: PlaceExtra::Vtable(vtable),
+ }
+ }
+ ty::TyStr | ty::TySlice(_) => {
+ let (ptr, len) = val.to_scalar_slice(self)?;
+ MemPlace {
+ ptr,
+ align: layout.align,
+ extra: PlaceExtra::Length(len),
+ }
+ }
+ _ => MemPlace {
+ ptr: val.to_scalar()?,
+ align: layout.align,
+ extra: PlaceExtra::None,
+ },
+ };
+ Ok(MPlaceTy { mplace, layout })
+ }
+
+ /// Offset a pointer to project to a field. Unlike place_field, this is always
+ /// possible without allocating, so it can take &self. It also returns the field's layout.
+ /// This supports both struct and array fields.
+ #[inline(always)]
+ pub fn mplace_field(
&self,
- place: &mir::Place<'tcx>,
- ) -> EvalResult<'tcx, Option<Value>> {
- use rustc::mir::Place::*;
- match *place {
- // Might allow this in the future, right now there's no way to do this from Rust code anyway
- Local(mir::RETURN_PLACE) => err!(ReadFromReturnPointer),
- // Directly reading a local will always succeed
- Local(local) => self.frame().locals[local].access().map(Some),
- // No fast path for statics. Reading from statics is rare and would require another
- // Machine function to handle differently in miri.
- Promoted(_) |
- Static(_) => Ok(None),
- Projection(ref proj) => self.try_read_place_projection(proj),
- }
+ base: MPlaceTy<'tcx>,
+ field: u64,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ // Not using the layout method because we want to compute on u64
+ let offset = match base.layout.fields {
+ layout::FieldPlacement::Arbitrary { ref offsets, .. } =>
+ offsets[usize::try_from(field).unwrap()],
+ layout::FieldPlacement::Array { stride, .. } => {
+ let len = base.len();
+ assert!(field < len, "Tried to access element {} of array/slice with length {}", field, len);
+ stride * field
+ }
+ layout::FieldPlacement::Union(count) => {
+ assert!(field < count as u64, "Tried to access field {} of union with {} fields", field, count);
+ // Offset is always 0
+ Size::from_bytes(0)
+ }
+ };
+ // The only way the conversion can fail is if this is an array (otherwise we already
+ // panicked above). In that case, all fields have the same layout.
+ let field_layout = base.layout.field(self, usize::try_from(field).unwrap_or(0))?;
+
+ // Adjust offset
+ let offset = match base.extra {
+ PlaceExtra::Vtable(vtable) => {
+ let (_, align) = self.read_size_and_align_from_vtable(vtable)?;
+ // FIXME: Is this right? Should we always do this, or only when actually
+ // accessing the field to which the vtable applies?
+ offset.abi_align(align)
+ }
+ _ => {
+ // No adjustment needed
+ offset
+ }
+ };
+
+ let ptr = base.ptr.ptr_offset(offset, self)?;
+ let align = base.align.min(field_layout.align);
+ let extra = if !field_layout.is_unsized() {
+ PlaceExtra::None
+ } else {
+ assert!(base.extra != PlaceExtra::None, "Expected fat ptr");
+ base.extra
+ };
+
+ Ok(MPlaceTy { mplace: MemPlace { ptr, align, extra }, layout: field_layout })
}
- pub fn read_field(
+ // Iterates over all fields of an array. Much more efficient than doing the
+ // same by repeatedly calling `mplace_field`.
+ pub fn mplace_array_fields(
&self,
- base: Value,
- variant: Option<usize>,
- field: mir::Field,
- mut base_layout: TyLayout<'tcx>,
- ) -> EvalResult<'tcx, (Value, TyLayout<'tcx>)> {
- if let Some(variant_index) = variant {
- base_layout = base_layout.for_variant(self, variant_index);
- }
- let field_index = field.index();
- let field = base_layout.field(self, field_index)?;
- if field.size.bytes() == 0 {
- return Ok((
- Value::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits { bits: 0, size: 0 })),
- field,
- ));
- }
- let offset = base_layout.fields.offset(field_index);
- let value = match base {
- // the field covers the entire type
- Value::ScalarPair(..) |
- Value::Scalar(_) if offset.bytes() == 0 && field.size == base_layout.size => base,
- // extract fields from types with `ScalarPair` ABI
- Value::ScalarPair(a, b) => {
- let val = if offset.bytes() == 0 { a } else { b };
- Value::Scalar(val)
- },
- Value::ByRef(base_ptr, align) => {
- let offset = base_layout.fields.offset(field_index);
- let ptr = base_ptr.ptr_offset(offset, self)?;
- let align = align.min(base_layout.align).min(field.align);
- assert!(!field.is_unsized());
- Value::ByRef(ptr, align)
- },
- Value::Scalar(val) => bug!("field access on non aggregate {:#?}, {:#?}", val, base_layout),
+ base: MPlaceTy<'tcx>,
+ ) -> EvalResult<'tcx, impl Iterator<Item=EvalResult<'tcx, MPlaceTy<'tcx>>> + 'a> {
+ let len = base.len();
+ let stride = match base.layout.fields {
+ layout::FieldPlacement::Array { stride, .. } => stride,
+ _ => bug!("mplace_array_fields: expected an array layout"),
};
- Ok((value, field))
+ let layout = base.layout.field(self, 0)?;
+ let dl = &self.tcx.data_layout;
+ Ok((0..len).map(move |i| {
+ let ptr = base.ptr.ptr_offset(i * stride, dl)?;
+ Ok(MPlaceTy {
+ mplace: MemPlace { ptr, align: base.align, extra: PlaceExtra::None },
+ layout
+ })
+ }))
}
- fn try_read_place_projection(
+ pub fn mplace_subslice(
&self,
- proj: &mir::PlaceProjection<'tcx>,
- ) -> EvalResult<'tcx, Option<Value>> {
- use rustc::mir::ProjectionElem::*;
- let base = match self.try_read_place(&proj.base)? {
- Some(base) => base,
- None => return Ok(None),
+ base: MPlaceTy<'tcx>,
+ from: u64,
+ to: u64,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ let len = base.len();
+ assert!(from <= len - to);
+
+ // Not using the layout method because that one works with usize, and it does not
+ // work with slices (which have count 0 in their layout).
+ let from_offset = match base.layout.fields {
+ layout::FieldPlacement::Array { stride, .. } =>
+ stride * from,
+ _ => bug!("Unexpected layout of index access: {:#?}", base.layout),
};
- let base_ty = self.place_ty(&proj.base);
- let base_layout = self.layout_of(base_ty)?;
- match proj.elem {
- Field(field, _) => Ok(Some(self.read_field(base, None, field, base_layout)?.0)),
- // The NullablePointer cases should work fine, need to take care for normal enums
- Downcast(..) |
- Subslice { .. } |
- // reading index 0 or index 1 from a ByVal or ByVal pair could be optimized
- ConstantIndex { .. } | Index(_) |
- // No way to optimize this projection any better than the normal place path
- Deref => Ok(None),
- }
+ let ptr = base.ptr.ptr_offset(from_offset, self)?;
+
+ // Compute extra and new layout
+ let inner_len = len - to - from;
+ let (extra, ty) = match base.layout.ty.sty {
+ ty::TyArray(inner, _) =>
+ (PlaceExtra::None, self.tcx.mk_array(inner, inner_len)),
+ ty::TySlice(..) =>
+ (PlaceExtra::Length(inner_len), base.layout.ty),
+ _ =>
+ bug!("cannot subslice non-array type: `{:?}`", base.layout.ty),
+ };
+ let layout = self.layout_of(ty)?;
+
+ Ok(MPlaceTy {
+ mplace: MemPlace { ptr, align: base.align, extra },
+ layout
+ })
+ }
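`mplace_subslice` drops `from` elements at the front and `to` at the back: the base pointer moves by `from * stride` bytes and the new length is `len - from - to`. As plain arithmetic (illustrative types, not the interpreter's):

```rust
// Compute the byte offset and element count of a subslice that drops
// `from` elements at the front and `to` elements at the back.
fn subslice(len: u64, stride: u64, from: u64, to: u64) -> (u64, u64) {
    assert!(from <= len - to, "subslice out of bounds");
    let from_offset = stride * from; // where the subslice starts, in bytes
    let inner_len = len - to - from; // how many elements remain
    (from_offset, inner_len)
}
```

E.g. for a slice of 10 elements with stride 4, dropping 2 at the front and 3 at the back yields a base offset of 8 bytes and 5 remaining elements.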
+
+ pub fn mplace_downcast(
+ &self,
+ base: MPlaceTy<'tcx>,
+ variant: usize,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ // Downcasts only change the layout
+ assert_eq!(base.extra, PlaceExtra::None);
+ Ok(MPlaceTy { layout: base.layout.for_variant(self, variant), ..base })
+ }
+
+ /// Project into an mplace
+ pub fn mplace_projection(
+ &self,
+ base: MPlaceTy<'tcx>,
+ proj_elem: &mir::PlaceElem<'tcx>,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ use rustc::mir::ProjectionElem::*;
+ Ok(match *proj_elem {
+ Field(field, _) => self.mplace_field(base, field.index() as u64)?,
+ Downcast(_, variant) => self.mplace_downcast(base, variant)?,
+ Deref => self.deref_operand(base.into())?,
+
+ Index(local) => {
+ let n = *self.frame().locals[local].access()?;
+ let n_layout = self.layout_of(self.tcx.types.usize)?;
+ let n = self.read_scalar(OpTy { op: n, layout: n_layout })?;
+ let n = n.to_bits(self.tcx.data_layout.pointer_size)?;
+ self.mplace_field(base, u64::try_from(n).unwrap())?
+ }
+
+ ConstantIndex {
+ offset,
+ min_length,
+ from_end,
+ } => {
+ let n = base.len();
+ assert!(n >= min_length as u64);
+
+ let index = if from_end {
+ n - u64::from(offset)
+ } else {
+ u64::from(offset)
+ };
+
+ self.mplace_field(base, index)?
+ }
+
+ Subslice { from, to } =>
+ self.mplace_subslice(base, u64::from(from), u64::from(to))?,
+ })
}
- /// Returns a value and (in case of a ByRef) if we are supposed to use aligned accesses.
- pub(super) fn eval_and_read_place(
+ /// Get the place of a field inside the place, and also the field's type.
+ /// Just a convenience function, but used quite a bit.
+ pub fn place_field(
&mut self,
- place: &mir::Place<'tcx>,
- ) -> EvalResult<'tcx, Value> {
- // Shortcut for things like accessing a fat pointer's field,
- // which would otherwise (in the `eval_place` path) require moving a `ScalarPair` to memory
- // and returning an `Place::Ptr` to it
- if let Some(val) = self.try_read_place(place)? {
- return Ok(val);
- }
- let place = self.eval_place(place)?;
- self.read_place(place)
+ base: PlaceTy<'tcx>,
+ field: u64,
+ ) -> EvalResult<'tcx, PlaceTy<'tcx>> {
+ // FIXME: We could try to be smarter and avoid allocation for fields that span the
+ // entire place.
+ let mplace = self.force_allocation(base)?;
+ Ok(self.mplace_field(mplace, field)?.into())
}
- pub fn read_place(&self, place: Place) -> EvalResult<'tcx, Value> {
- match place {
- Place::Ptr { ptr, align, extra } => {
- assert_eq!(extra, PlaceExtra::None);
- Ok(Value::ByRef(ptr.unwrap_or_err()?, align))
+ pub fn place_downcast(
+ &mut self,
+ base: PlaceTy<'tcx>,
+ variant: usize,
+ ) -> EvalResult<'tcx, PlaceTy<'tcx>> {
+ // Downcast just changes the layout
+ Ok(match base.place {
+ Place::Ptr(mplace) =>
+ self.mplace_downcast(MPlaceTy { mplace, layout: base.layout }, variant)?.into(),
+ Place::Local { .. } => {
+ let layout = base.layout.for_variant(&self, variant);
+ PlaceTy { layout, ..base }
}
- Place::Local { frame, local } => self.stack[frame].locals[local].access(),
- }
+ })
}
- pub fn eval_place(&mut self, mir_place: &mir::Place<'tcx>) -> EvalResult<'tcx, Place> {
+ /// Project into a place
+ pub fn place_projection(
+ &mut self,
+ base: PlaceTy<'tcx>,
+ proj_elem: &mir::ProjectionElem<'tcx, mir::Local, Ty<'tcx>>,
+ ) -> EvalResult<'tcx, PlaceTy<'tcx>> {
+ use rustc::mir::ProjectionElem::*;
+ Ok(match *proj_elem {
+ Field(field, _) => self.place_field(base, field.index() as u64)?,
+ Downcast(_, variant) => self.place_downcast(base, variant)?,
+ Deref => self.deref_operand(self.place_to_op(base)?)?.into(),
+ // For the other variants, we have to force an allocation.
+ // This matches `operand_projection`.
+ Subslice { .. } | ConstantIndex { .. } | Index(_) => {
+ let mplace = self.force_allocation(base)?;
+ self.mplace_projection(mplace, proj_elem)?.into()
+ }
+ })
+ }
+
+ /// Compute a place. You should only use this if you intend to write into this
+ /// place; for reading, a more efficient alternative is `eval_place_for_read`.
+ pub fn eval_place(&mut self, mir_place: &mir::Place<'tcx>) -> EvalResult<'tcx, PlaceTy<'tcx>> {
use rustc::mir::Place::*;
let place = match *mir_place {
- Local(mir::RETURN_PLACE) => self.frame().return_place,
- Local(local) => Place::Local {
- frame: self.cur_frame(),
- local,
+ Local(mir::RETURN_PLACE) => PlaceTy {
+ place: self.frame().return_place,
+ layout: self.layout_of_local(self.cur_frame(), mir::RETURN_PLACE)?,
+ },
+ Local(local) => PlaceTy {
+ place: Place::Local {
+ frame: self.cur_frame(),
+ local,
+ },
+ layout: self.layout_of_local(self.cur_frame(), local)?,
},
Promoted(ref promoted) => {
let instance = self.frame().instance;
- let val = self.read_global_as_value(GlobalId {
+ let op = self.global_to_op(GlobalId {
instance,
promoted: Some(promoted.0),
})?;
- if let Value::ByRef(ptr, align) = val {
- Place::Ptr {
- ptr: ptr.into(),
- align,
- extra: PlaceExtra::None,
- }
- } else {
- bug!("evaluated promoted and got {:#?}", val);
+ let mplace = op.to_mem_place();
+ let ty = self.monomorphize(promoted.1, self.substs());
+ PlaceTy {
+ place: Place::Ptr(mplace),
+ layout: self.layout_of(ty)?,
}
}
Static(ref static_) => {
- let layout = self.layout_of(self.place_ty(mir_place))?;
+ let ty = self.monomorphize(static_.ty, self.substs());
+ let layout = self.layout_of(ty)?;
let instance = ty::Instance::mono(*self.tcx, static_.def_id);
let cid = GlobalId {
instance,
promoted: None
};
let alloc = Machine::init_static(self, cid)?;
- Place::Ptr {
- ptr: ScalarMaybeUndef::Scalar(Scalar::Ptr(alloc.into())),
- align: layout.align,
- extra: PlaceExtra::None,
- }
+ MPlaceTy::from_aligned_ptr(alloc.into(), layout).into()
}
Projection(ref proj) => {
- let ty = self.place_ty(&proj.base);
let place = self.eval_place(&proj.base)?;
- return self.eval_place_projection(place, ty, &proj.elem);
+ self.place_projection(place, &proj.elem)?
}
};
- self.dump_local(place);
+ self.dump_place(place.place);
Ok(place)
}
- pub fn place_field(
+ /// Write a scalar to a place
+ pub fn write_scalar(
&mut self,
- base: Place,
- field: mir::Field,
- mut base_layout: TyLayout<'tcx>,
- ) -> EvalResult<'tcx, (Place, TyLayout<'tcx>)> {
- match base {
- Place::Ptr { extra: PlaceExtra::DowncastVariant(variant_index), .. } => {
- base_layout = base_layout.for_variant(&self, variant_index);
- }
- _ => {}
- }
- let field_index = field.index();
- let field = base_layout.field(&self, field_index)?;
- let offset = base_layout.fields.offset(field_index);
+ val: impl Into<ScalarMaybeUndef>,
+ dest: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ self.write_value(Value::Scalar(val.into()), dest)
+ }
- // Do not allocate in trivial cases
- let (base_ptr, base_align, base_extra) = match base {
- Place::Ptr { ptr, align, extra } => (ptr, align, extra),
+ /// Write a value to a place
+ pub fn write_value(
+ &mut self,
+ src_val: Value,
+ dest: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ trace!("write_value: {:?} <- {:?}", *dest, src_val);
+ // See if we can avoid an allocation. This is the counterpart to `try_read_value`,
+ // but not factored as a separate function.
+ let mplace = match dest.place {
Place::Local { frame, local } => {
- match (self.stack[frame].locals[local].access()?, &base_layout.abi) {
- // in case the field covers the entire type, just return the value
- (Value::Scalar(_), &layout::Abi::Scalar(_)) |
- (Value::ScalarPair(..), &layout::Abi::ScalarPair(..))
- if offset.bytes() == 0 && field.size == base_layout.size => {
- return Ok((base, field))
+ match *self.stack[frame].locals[local].access_mut()? {
+ Operand::Immediate(ref mut dest_val) => {
+ // Yay, we can just change the local directly.
+ *dest_val = src_val;
+ return Ok(());
},
- _ => self.force_allocation(base)?.to_ptr_align_extra(),
+ Operand::Indirect(mplace) => mplace, // already in memory
}
- }
+ },
+ Place::Ptr(mplace) => mplace, // already in memory
};
- let offset = match base_extra {
- PlaceExtra::Vtable(tab) => {
- let (_, align) = self.size_and_align_of_dst(
- base_layout.ty,
- base_ptr.to_value_with_vtable(tab),
- )?;
- offset.abi_align(align)
+ // This is already in memory, write there.
+ let dest = MPlaceTy { mplace, layout: dest.layout };
+ self.write_value_to_mplace(src_val, dest)
+ }
+
+ /// Write a value to memory
+ fn write_value_to_mplace(
+ &mut self,
+ value: Value,
+ dest: MPlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ let (ptr, ptr_align) = dest.to_scalar_ptr_align();
+ // Note that it is really important that the type here is the right one, and matches the type things are read at.
+ // In case `src_val` is a `ScalarPair`, we don't do any magic here to handle padding properly, which is only
+ // correct if we never look at this data with the wrong type.
+
+ // Nothing to do for ZSTs, other than checking alignment
+ if dest.layout.size.bytes() == 0 {
+ self.memory.check_align(ptr, ptr_align)?;
+ return Ok(());
+ }
+
+ let ptr = ptr.to_ptr()?;
+ match value {
+ Value::Scalar(scalar) => {
+ self.memory.write_scalar(
+ ptr, ptr_align.min(dest.layout.align), scalar, dest.layout.size
+ )
}
- _ => offset,
- };
+ Value::ScalarPair(a_val, b_val) => {
+ let (a, b) = match dest.layout.abi {
+ layout::Abi::ScalarPair(ref a, ref b) => (&a.value, &b.value),
+ _ => bug!("write_value_to_mplace: invalid ScalarPair layout: {:#?}", dest.layout)
+ };
+ let (a_size, b_size) = (a.size(&self), b.size(&self));
+ let (a_align, b_align) = (a.align(&self), b.align(&self));
+ let b_offset = a_size.abi_align(b_align);
+ let b_ptr = ptr.offset(b_offset, &self)?.into();
- let ptr = base_ptr.ptr_offset(offset, &self)?;
- let align = base_align.min(base_layout.align).min(field.align);
- let extra = if !field.is_unsized() {
- PlaceExtra::None
- } else {
- match base_extra {
- PlaceExtra::None => bug!("expected fat pointer"),
- PlaceExtra::DowncastVariant(..) => {
- bug!("Rust doesn't support unsized fields in enum variants")
- }
- PlaceExtra::Vtable(_) |
- PlaceExtra::Length(_) => {}
+ self.memory.write_scalar(ptr, ptr_align.min(a_align), a_val, a_size)?;
+ self.memory.write_scalar(b_ptr, ptr_align.min(b_align), b_val, b_size)
}
- base_extra
- };
+ }
+ }
- Ok((Place::Ptr { ptr, align, extra }, field))
+ /// Copy the data from an operand to a place
+ pub fn copy_op(
+ &mut self,
+ src: OpTy<'tcx>,
+ dest: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ assert_eq!(src.layout.size, dest.layout.size,
+ "Size mismatch when copying!\nsrc: {:#?}\ndest: {:#?}", src, dest);
+
+ // Let us see if the layout is simple, so we can take a shortcut and avoid force_allocation.
+ let (src_ptr, src_align) = match self.try_read_value(src)? {
+ Ok(src_val) =>
+ // Yay, we got a value that we can write directly. We write with the
+ // *source layout*, because that was used to load, and if they do not match
+ // this is a transmute we want to support.
+ return self.write_value(src_val, PlaceTy { place: *dest, layout: src.layout }),
+ Err(mplace) => mplace.to_scalar_ptr_align(),
+ };
+ // Slow path, this does not fit into an immediate. Just memcpy.
+ trace!("copy_op: {:?} <- {:?}", *dest, *src);
+ let (dest_ptr, dest_align) = self.force_allocation(dest)?.to_scalar_ptr_align();
+ self.memory.copy(
+ src_ptr, src_align,
+ dest_ptr, dest_align,
+ src.layout.size, false
+ )
}
- pub fn val_to_place(&self, val: Value, ty: Ty<'tcx>) -> EvalResult<'tcx, Place> {
- let layout = self.layout_of(ty)?;
- Ok(match self.tcx.struct_tail(ty).sty {
- ty::TyDynamic(..) => {
- let (ptr, vtable) = self.into_ptr_vtable_pair(val)?;
- Place::Ptr {
- ptr,
- align: layout.align,
- extra: PlaceExtra::Vtable(vtable),
- }
- }
- ty::TyStr | ty::TySlice(_) => {
- let (ptr, len) = self.into_slice(val)?;
- Place::Ptr {
- ptr,
- align: layout.align,
- extra: PlaceExtra::Length(len),
- }
+ /// Make sure that a place is in memory, and return where it is.
+ /// This is essentially `force_to_memplace`.
+ pub fn force_allocation(
+ &mut self,
+ place: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ let mplace = match place.place {
+ Place::Local { frame, local } => {
+ // FIXME: Consider not doing anything for a ZST, and just returning
+ // a fake pointer?
+
+ // We need the layout of the local. We can NOT use the layout we got,
+ // that might e.g. be a downcast variant!
+ let local_layout = self.layout_of_local(frame, local)?;
+ // Make sure it has a place
+ let rval = *self.stack[frame].locals[local].access()?;
+ let mplace = self.allocate_op(OpTy { op: rval, layout: local_layout })?.mplace;
+ // This might have allocated the flag
+ *self.stack[frame].locals[local].access_mut()? =
+ Operand::Indirect(mplace);
+ // done
+ mplace
}
- _ => Place::from_scalar_ptr(self.into_ptr(val)?, layout.align),
- })
+ Place::Ptr(mplace) => mplace
+ };
+ // Return with the original layout, so that the caller can go on
+ Ok(MPlaceTy { mplace, layout: place.layout })
}
- pub fn place_index(
+ pub fn allocate(
&mut self,
- base: Place,
- outer_ty: Ty<'tcx>,
- n: u64,
- ) -> EvalResult<'tcx, Place> {
- // Taking the outer type here may seem odd; it's needed because for array types, the outer type gives away the length.
- let base = self.force_allocation(base)?;
- let (base_ptr, align) = base.to_ptr_align();
-
- let (elem_ty, len) = base.elem_ty_and_len(outer_ty, self.tcx.tcx);
- let elem_size = self.layout_of(elem_ty)?.size;
- assert!(
- n < len,
- "Tried to access element {} of array/slice with length {}",
- n,
- len
- );
- let ptr = base_ptr.ptr_offset(elem_size * n, &*self)?;
- Ok(Place::Ptr {
- ptr,
- align,
- extra: PlaceExtra::None,
- })
+ layout: TyLayout<'tcx>,
+ kind: MemoryKind<M::MemoryKinds>,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ assert!(!layout.is_unsized(), "cannot alloc memory for unsized type");
+ let ptr = self.memory.allocate(layout.size, layout.align, kind)?;
+ Ok(MPlaceTy::from_aligned_ptr(ptr, layout))
}
- pub(super) fn place_downcast(
+ /// Make a place for an operand, allocating if needed
+ pub fn allocate_op(
&mut self,
- base: Place,
- variant: usize,
- ) -> EvalResult<'tcx, Place> {
- // FIXME(solson)
- let base = self.force_allocation(base)?;
- let (ptr, align) = base.to_ptr_align();
- let extra = PlaceExtra::DowncastVariant(variant);
- Ok(Place::Ptr { ptr, align, extra })
+ OpTy { op, layout }: OpTy<'tcx>,
+ ) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ Ok(match op {
+ Operand::Indirect(mplace) => MPlaceTy { mplace, layout },
+ Operand::Immediate(value) => {
+ // FIXME: Is stack always right here?
+ let ptr = self.allocate(layout, MemoryKind::Stack)?;
+ self.write_value_to_mplace(value, ptr)?;
+ ptr
+ },
+ })
}
- pub fn eval_place_projection(
+ pub fn write_discriminant_value(
&mut self,
- base: Place,
- base_ty: Ty<'tcx>,
- proj_elem: &mir::ProjectionElem<'tcx, mir::Local, Ty<'tcx>>,
- ) -> EvalResult<'tcx, Place> {
- use rustc::mir::ProjectionElem::*;
- match *proj_elem {
- Field(field, _) => {
- let layout = self.layout_of(base_ty)?;
- Ok(self.place_field(base, field, layout)?.0)
- }
-
- Downcast(_, variant) => {
- self.place_downcast(base, variant)
- }
-
- Deref => {
- let val = self.read_place(base)?;
-
- let pointee_type = match base_ty.sty {
- ty::TyRawPtr(ref tam) => tam.ty,
- ty::TyRef(_, ty, _) => ty,
- ty::TyAdt(def, _) if def.is_box() => base_ty.boxed_ty(),
- _ => bug!("can only deref pointer types"),
- };
-
- trace!("deref to {} on {:?}", pointee_type, val);
-
- self.val_to_place(val, pointee_type)
+ variant_index: usize,
+ dest: PlaceTy<'tcx>,
+ ) -> EvalResult<'tcx> {
+ match dest.layout.variants {
+ layout::Variants::Single { index } => {
+ if index != variant_index {
+ // If the layout of an enum is `Single`, all
+ // other variants are necessarily uninhabited.
+ assert_eq!(dest.layout.for_variant(&self, variant_index).abi,
+ layout::Abi::Uninhabited);
+ }
}
-
- Index(local) => {
- let value = self.frame().locals[local].access()?;
- let ty = self.tcx.types.usize;
- let n = self
- .value_to_scalar(ValTy { value, ty })?
- .to_bits(self.tcx.data_layout.pointer_size)?;
- self.place_index(base, base_ty, n as u64)
+ layout::Variants::Tagged { ref tag, .. } => {
+ let discr_val = dest.layout.ty.ty_adt_def().unwrap()
+ .discriminant_for_variant(*self.tcx, variant_index)
+ .val;
+
+ // raw discriminants for enums are isize or bigger during
+ // their computation, but the in-memory tag is the smallest possible
+ // representation
+ let size = tag.value.size(self.tcx.tcx);
+ let shift = 128 - size.bits();
+ let discr_val = (discr_val << shift) >> shift;
+
+ let discr_dest = self.place_field(dest, 0)?;
+ self.write_scalar(Scalar::Bits {
+ bits: discr_val,
+ size: size.bytes() as u8,
+ }, discr_dest)?;
}
-
- ConstantIndex {
- offset,
- min_length,
- from_end,
+ layout::Variants::NicheFilling {
+ dataful_variant,
+ ref niche_variants,
+ niche_start,
+ ..
} => {
- // FIXME(solson)
- let base = self.force_allocation(base)?;
- let (base_ptr, align) = base.to_ptr_align();
-
- let (elem_ty, n) = base.elem_ty_and_len(base_ty, self.tcx.tcx);
- let elem_size = self.layout_of(elem_ty)?.size;
- assert!(n >= min_length as u64);
-
- let index = if from_end {
- n - u64::from(offset)
- } else {
- u64::from(offset)
- };
-
- let ptr = base_ptr.ptr_offset(elem_size * index, &self)?;
- Ok(Place::Ptr { ptr, align, extra: PlaceExtra::None })
+ if variant_index != dataful_variant {
+ let niche_dest =
+ self.place_field(dest, 0)?;
+ let niche_value = ((variant_index - niche_variants.start()) as u128)
+ .wrapping_add(niche_start);
+ self.write_scalar(Scalar::Bits {
+ bits: niche_value,
+ size: niche_dest.layout.size.bytes() as u8,
+ }, niche_dest)?;
+ }
}
+ }
- Subslice { from, to } => {
- // FIXME(solson)
- let base = self.force_allocation(base)?;
- let (base_ptr, align) = base.to_ptr_align();
-
- let (elem_ty, n) = base.elem_ty_and_len(base_ty, self.tcx.tcx);
- let elem_size = self.layout_of(elem_ty)?.size;
- assert!(u64::from(from) <= n - u64::from(to));
- let ptr = base_ptr.ptr_offset(elem_size * u64::from(from), &self)?;
- // sublicing arrays produces arrays
- let extra = if self.type_is_sized(base_ty) {
- PlaceExtra::None
- } else {
- PlaceExtra::Length(n - u64::from(to) - u64::from(from))
- };
- Ok(Place::Ptr { ptr, align, extra })
+ Ok(())
+ }
+
+ /// Every place can be read from, so we can turn it into an operand
+ #[inline(always)]
+ pub fn place_to_op(&self, place: PlaceTy<'tcx>) -> EvalResult<'tcx, OpTy<'tcx>> {
+ let op = match place.place {
+ Place::Ptr(mplace) => {
+ Operand::Indirect(mplace)
}
- }
+ Place::Local { frame, local } =>
+ *self.stack[frame].locals[local].access()?
+ };
+ Ok(OpTy { op, layout: place.layout })
}
- pub fn place_ty(&self, place: &mir::Place<'tcx>) -> Ty<'tcx> {
- self.monomorphize(
- place.ty(self.mir(), *self.tcx).to_ty(*self.tcx),
- self.substs(),
- )
+ /// Turn a place that is a dyn trait (i.e., PlaceExtra::Vtable and the appropriate layout)
+ /// or a slice into the specific fixed-size place and layout that is given by the vtable/len.
+ /// This "unpacks" the existential quantifier, so to speak.
+ pub fn unpack_unsized_mplace(&self, mplace: MPlaceTy<'tcx>) -> EvalResult<'tcx, MPlaceTy<'tcx>> {
+ trace!("Unpacking {:?} ({:?})", *mplace, mplace.layout.ty);
+ let layout = match mplace.extra {
+ PlaceExtra::Vtable(vtable) => {
+ // the drop function signature
+ let drop_instance = self.read_drop_type_from_vtable(vtable)?;
+ trace!("Found drop fn: {:?}", drop_instance);
+ let fn_sig = drop_instance.ty(*self.tcx).fn_sig(*self.tcx);
+ let fn_sig = self.tcx.normalize_erasing_late_bound_regions(self.param_env, &fn_sig);
+ // the drop function takes *mut T where T is the type being dropped, so get that
+ let ty = fn_sig.inputs()[0].builtin_deref(true).unwrap().ty;
+ let layout = self.layout_of(ty)?;
+ // Sanity checks
+ let (size, align) = self.read_size_and_align_from_vtable(vtable)?;
+ assert_eq!(size, layout.size);
+ assert_eq!(align.abi(), layout.align.abi()); // only ABI alignment is preserved
+ // FIXME: More checks for the vtable? We could make sure it is exactly
+ // the one we would expect for this type.
+ // Done!
+ layout
+ },
+ PlaceExtra::Length(len) => {
+ let ty = self.tcx.mk_array(mplace.layout.field(self, 0)?.ty, len);
+ self.layout_of(ty)?
+ }
+ PlaceExtra::None => bug!("Expected a fat pointer"),
+ };
+ trace!("Unpacked type: {:?}", layout.ty);
+ Ok(MPlaceTy {
+ mplace: MemPlace { extra: PlaceExtra::None, ..*mplace },
+ layout
+ })
}
}
//! The main entry point is the `step` method.
use rustc::mir;
+use rustc::ty::layout::LayoutOf;
+use rustc::mir::interpret::{EvalResult, Scalar};
-use rustc::mir::interpret::EvalResult;
use super::{EvalContext, Machine};
+/// Classify whether an operator is "left-homogeneous", i.e. the LHS has the
+/// same type as the result.
+#[inline]
+fn binop_left_homogeneous(op: mir::BinOp) -> bool {
+ use rustc::mir::BinOp::*;
+ match op {
+ Add | Sub | Mul | Div | Rem | BitXor | BitAnd | BitOr |
+ Offset | Shl | Shr =>
+ true,
+ Eq | Ne | Lt | Le | Gt | Ge =>
+ false,
+ }
+}
+/// Classify whether an operator is "right-homogeneous", i.e. the RHS has the
+/// same type as the LHS.
+#[inline]
+fn binop_right_homogeneous(op: mir::BinOp) -> bool {
+ use rustc::mir::BinOp::*;
+ match op {
+ Add | Sub | Mul | Div | Rem | BitXor | BitAnd | BitOr |
+ Eq | Ne | Lt | Le | Gt | Ge =>
+ true,
+ Offset | Shl | Shr =>
+ false,
+ }
+}
+
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
pub fn inc_step_counter_and_detect_loops(&mut self) -> EvalResult<'tcx, ()> {
/// The number of steps between loop detector snapshots.
}
fn statement(&mut self, stmt: &mir::Statement<'tcx>) -> EvalResult<'tcx> {
- trace!("{:?}", stmt);
+ debug!("{:?}", stmt);
use rustc::mir::StatementKind::*;
variant_index,
} => {
let dest = self.eval_place(place)?;
- let dest_ty = self.place_ty(place);
- self.write_discriminant_value(dest_ty, dest, variant_index)?;
+ self.write_discriminant_value(variant_index, dest)?;
}
// Mark locals as alive
// Mark locals as dead
StorageDead(local) => {
- let old_val = self.frame_mut().storage_dead(local);
+ let old_val = self.storage_dead(local);
self.deallocate_local(old_val)?;
}
Ok(())
}
+ /// Evaluate an assignment statement.
+ ///
+ /// There is no separate `eval_rvalue` function. Instead, the code for handling each rvalue
+ /// type writes its results directly into the memory specified by the place.
+ fn eval_rvalue_into_place(
+ &mut self,
+ rvalue: &mir::Rvalue<'tcx>,
+ place: &mir::Place<'tcx>,
+ ) -> EvalResult<'tcx> {
+ let dest = self.eval_place(place)?;
+
+ use rustc::mir::Rvalue::*;
+ match *rvalue {
+ Use(ref operand) => {
+ // Avoid recomputing the layout
+ let op = self.eval_operand(operand, Some(dest.layout))?;
+ self.copy_op(op, dest)?;
+ }
+
+ BinaryOp(bin_op, ref left, ref right) => {
+ let layout = if binop_left_homogeneous(bin_op) { Some(dest.layout) } else { None };
+ let left = self.eval_operand_and_read_value(left, layout)?;
+ let layout = if binop_right_homogeneous(bin_op) { Some(left.layout) } else { None };
+ let right = self.eval_operand_and_read_value(right, layout)?;
+ self.binop_ignore_overflow(
+ bin_op,
+ left,
+ right,
+ dest,
+ )?;
+ }
+
+ CheckedBinaryOp(bin_op, ref left, ref right) => {
+ // Due to the extra boolean in the result, we can never reuse the `dest.layout`.
+ let left = self.eval_operand_and_read_value(left, None)?;
+ let layout = if binop_right_homogeneous(bin_op) { Some(left.layout) } else { None };
+ let right = self.eval_operand_and_read_value(right, layout)?;
+ self.binop_with_overflow(
+ bin_op,
+ left,
+ right,
+ dest,
+ )?;
+ }
+
+ UnaryOp(un_op, ref operand) => {
+ // The operand always has the same type as the result.
+ let val = self.eval_operand_and_read_value(operand, Some(dest.layout))?;
+ let val = self.unary_op(un_op, val.to_scalar()?, dest.layout)?;
+ self.write_scalar(val, dest)?;
+ }
+
+ Aggregate(ref kind, ref operands) => {
+ let (dest, active_field_index) = match **kind {
+ mir::AggregateKind::Adt(adt_def, variant_index, _, active_field_index) => {
+ self.write_discriminant_value(variant_index, dest)?;
+ if adt_def.is_enum() {
+ (self.place_downcast(dest, variant_index)?, active_field_index)
+ } else {
+ (dest, active_field_index)
+ }
+ }
+ _ => (dest, None)
+ };
+
+ for (i, operand) in operands.iter().enumerate() {
+ let op = self.eval_operand(operand, None)?;
+ // Ignore zero-sized fields.
+ if !op.layout.is_zst() {
+ let field_index = active_field_index.unwrap_or(i);
+ let field_dest = self.place_field(dest, field_index as u64)?;
+ self.copy_op(op, field_dest)?;
+ }
+ }
+ }
+
+ Repeat(ref operand, _) => {
+ let op = self.eval_operand(operand, None)?;
+ let dest = self.force_allocation(dest)?;
+ let length = dest.len();
+
+ if length > 0 {
+ // write the first element
+ let first = self.mplace_field(dest, 0)?;
+ self.copy_op(op, first.into())?;
+
+ if length > 1 {
+ // copy the rest
+ let (dest, dest_align) = first.to_scalar_ptr_align();
+ let rest = dest.ptr_offset(first.layout.size, &self)?;
+ self.memory.copy_repeatedly(
+ dest, dest_align, rest, dest_align, first.layout.size, length - 1, true
+ )?;
+ }
+ }
+ }
+
+ Len(ref place) => {
+ // FIXME(CTFE): don't allow computing the length of arrays in const eval
+ let src = self.eval_place(place)?;
+ let mplace = self.force_allocation(src)?;
+ let len = mplace.len();
+ let size = self.memory.pointer_size().bytes() as u8;
+ self.write_scalar(
+ Scalar::Bits {
+ bits: len as u128,
+ size,
+ },
+ dest,
+ )?;
+ }
+
+ Ref(_, _, ref place) => {
+ let src = self.eval_place(place)?;
+ let val = self.force_allocation(src)?.to_ref(&self);
+ self.write_value(val, dest)?;
+ }
+
+ NullaryOp(mir::NullOp::Box, _) => {
+ M::box_alloc(self, dest)?;
+ }
+
+ NullaryOp(mir::NullOp::SizeOf, ty) => {
+ let ty = self.monomorphize(ty, self.substs());
+ let layout = self.layout_of(ty)?;
+ assert!(!layout.is_unsized(),
+ "SizeOf nullary MIR operator called for unsized type");
+ let size = self.memory.pointer_size().bytes() as u8;
+ self.write_scalar(
+ Scalar::Bits {
+ bits: layout.size.bytes() as u128,
+ size,
+ },
+ dest,
+ )?;
+ }
+
+ Cast(kind, ref operand, cast_ty) => {
+ debug_assert_eq!(self.monomorphize(cast_ty, self.substs()), dest.layout.ty);
+ let src = self.eval_operand(operand, None)?;
+ self.cast(src, kind, dest)?;
+ }
+
+ Discriminant(ref place) => {
+ let place = self.eval_place(place)?;
+ let discr_val = self.read_discriminant_value(self.place_to_op(place)?)?;
+ let size = dest.layout.size.bytes() as u8;
+ self.write_scalar(Scalar::Bits {
+ bits: discr_val,
+ size,
+ }, dest)?;
+ }
+ }
+
+ self.dump_place(*dest);
+
+ Ok(())
+ }
+
fn terminator(&mut self, terminator: &mir::Terminator<'tcx>) -> EvalResult<'tcx> {
- trace!("{:?}", terminator.kind);
+ debug!("{:?}", terminator.kind);
self.tcx.span = terminator.source_info.span;
self.memory.tcx.span = terminator.source_info.span;
self.eval_terminator(terminator)?;
if !self.stack.is_empty() {
- trace!("// {:?}", self.frame().block);
+ debug!("// {:?}", self.frame().block);
}
Ok(())
}
use rustc::mir::BasicBlock;
-use rustc::ty::{self, Ty};
+use rustc::ty::{self, layout::LayoutOf};
use syntax::source_map::Span;
-use rustc::mir::interpret::{EvalResult, Value};
-use interpret::{Machine, ValTy, EvalContext, Place, PlaceExtra};
+use rustc::mir::interpret::EvalResult;
+use interpret::{Machine, EvalContext, PlaceTy, PlaceExtra, OpTy, Operand};
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
- pub(crate) fn drop_place(
+ pub(crate) fn drop_in_place(
&mut self,
- place: Place,
+ place: PlaceTy<'tcx>,
instance: ty::Instance<'tcx>,
- ty: Ty<'tcx>,
span: Span,
target: BasicBlock,
) -> EvalResult<'tcx> {
- trace!("drop_place: {:#?}", place);
+ trace!("drop_in_place: {:?},\n {:?}, {:?}", *place, place.layout.ty, instance);
// We take the address of the object. This may well be unaligned, which is fine for us here.
// However, unaligned accesses will probably make the actual drop implementation fail -- a problem shared
// by rustc.
- let val = match self.force_allocation(place)? {
- Place::Ptr {
- ptr,
- align: _,
- extra: PlaceExtra::Vtable(vtable),
- } => ptr.to_value_with_vtable(vtable),
- Place::Ptr {
- ptr,
- align: _,
- extra: PlaceExtra::Length(len),
- } => ptr.to_value_with_len(len, self.tcx.tcx),
- Place::Ptr {
- ptr,
- align: _,
- extra: PlaceExtra::None,
- } => Value::Scalar(ptr),
- _ => bug!("force_allocation broken"),
- };
- self.drop(val, instance, ty, span, target)
- }
+ let place = self.force_allocation(place)?;
- fn drop(
- &mut self,
- arg: Value,
- instance: ty::Instance<'tcx>,
- ty: Ty<'tcx>,
- span: Span,
- target: BasicBlock,
- ) -> EvalResult<'tcx> {
- trace!("drop: {:#?}, {:?}, {:?}", arg, ty.sty, instance.def);
-
- let instance = match ty.sty {
+ let (instance, place) = match place.layout.ty.sty {
ty::TyDynamic(..) => {
- if let Value::ScalarPair(_, vtable) = arg {
- self.read_drop_type_from_vtable(vtable.unwrap_or_err()?.to_ptr()?)?
- } else {
- bug!("expected fat ptr, got {:?}", arg);
- }
+ // Dropping a trait object.
+ let vtable = match place.extra {
+ PlaceExtra::Vtable(vtable) => vtable,
+ _ => bug!("Expected vtable when dropping {:#?}", place),
+ };
+ let place = self.unpack_unsized_mplace(place)?;
+ let instance = self.read_drop_type_from_vtable(vtable)?;
+ (instance, place)
}
- _ => instance,
+ _ => (instance, place),
};
- // the drop function expects a reference to the value
- let valty = ValTy {
- value: arg,
- ty: self.tcx.mk_mut_ptr(ty),
+ let fn_sig = instance.ty(*self.tcx).fn_sig(*self.tcx);
+ let fn_sig = self.tcx.normalize_erasing_late_bound_regions(self.param_env, &fn_sig);
+
+ let arg = OpTy {
+ op: Operand::Immediate(place.to_ref(&self)),
+ layout: self.layout_of(self.tcx.mk_mut_ptr(place.layout.ty))?,
};
- let fn_sig = self.tcx.fn_sig(instance.def_id()).skip_binder().clone();
+ // This should always be (), but getting it from the sig seems
+ // easier than creating a layout of ().
+ let dest = PlaceTy::null(&self, self.layout_of(fn_sig.output())?);
self.eval_fn_call(
instance,
- Some((Place::undef(), target)),
- &[valty],
+ Some((dest, target)),
+ &[arg],
span,
fn_sig,
)
use rustc::mir;
use rustc::ty::{self, Ty};
-use rustc::ty::layout::{LayoutOf, Size};
+use rustc::ty::layout::LayoutOf;
use syntax::source_map::Span;
use rustc_target::spec::abi::Abi;
-use rustc::mir::interpret::{EvalResult, Scalar, Value};
-use super::{EvalContext, Place, Machine, ValTy};
+use rustc::mir::interpret::{EvalResult, Scalar};
+use super::{EvalContext, Machine, Value, OpTy, PlaceTy, ValTy, Operand};
use rustc_data_structures::indexed_vec::Idx;
-use interpret::memory::HasMemory;
mod drop;
use rustc::mir::TerminatorKind::*;
match terminator.kind {
Return => {
- self.dump_local(self.frame().return_place);
+ self.dump_place(self.frame().return_place);
self.pop_stack_frame()?
}
ref targets,
..
} => {
- let discr_val = self.eval_operand(discr)?;
- let discr_prim = self.value_to_scalar(discr_val)?;
- let discr_layout = self.layout_of(discr_val.ty).unwrap();
- trace!("SwitchInt({:?}, {:#?})", discr_prim, discr_layout);
+ let discr_val = self.eval_operand(discr, None)?;
+ let discr = self.read_value(discr_val)?;
+ trace!("SwitchInt({:?})", *discr);
// Branch to the `otherwise` case by default, if no match is found.
let mut target_block = targets[targets.len() - 1];
for (index, &const_int) in values.iter().enumerate() {
// Compare using binary_op
- let const_int = Scalar::Bits { bits: const_int, size: discr_layout.size.bytes() as u8 };
- let res = self.binary_op(mir::BinOp::Eq,
- discr_prim, discr_val.ty,
- const_int, discr_val.ty
+ let const_int = Scalar::Bits { bits: const_int, size: discr.layout.size.bytes() as u8 };
+ let (res, _) = self.binary_op(mir::BinOp::Eq,
+ discr,
+ ValTy { value: Value::Scalar(const_int.into()), layout: discr.layout }
)?;
- if res.0.to_bits(Size::from_bytes(1))? != 0 {
+ if res.to_bool()? {
target_block = targets[index];
break;
}
None => None,
};
- let func = self.eval_operand(func)?;
- let (fn_def, sig) = match func.ty.sty {
+ let func = self.eval_operand(func, None)?;
+ let (fn_def, sig) = match func.layout.ty.sty {
ty::TyFnPtr(sig) => {
- let fn_ptr = self.value_to_scalar(func)?.to_ptr()?;
+ let fn_ptr = self.read_scalar(func)?.to_ptr()?;
let instance = self.memory.get_fn(fn_ptr)?;
let instance_ty = instance.ty(*self.tcx);
match instance_ty.sty {
}
ty::TyFnDef(def_id, substs) => (
self.resolve(def_id, substs)?,
- func.ty.fn_sig(*self.tcx),
+ func.layout.ty.fn_sig(*self.tcx),
),
_ => {
- let msg = format!("can't handle callee of type {:?}", func.ty);
+ let msg = format!("can't handle callee of type {:?}", func.layout.ty);
return err!(Unimplemented(msg));
}
};
- let args = self.operands_to_args(args)?;
+ let args = self.eval_operands(args)?;
let sig = self.tcx.normalize_erasing_late_bound_regions(
ty::ParamEnv::reveal_all(),
&sig,
self.eval_fn_call(
fn_def,
destination,
- &args,
+ &args[..],
terminator.source_info.span,
sig,
)?;
} => {
// FIXME(CTFE): forbid drop in const eval
let place = self.eval_place(location)?;
- let ty = self.place_ty(location);
- let ty = self.tcx.subst_and_normalize_erasing_regions(
- self.substs(),
- ty::ParamEnv::reveal_all(),
- &ty,
- );
+ let ty = place.layout.ty;
trace!("TerminatorKind::drop: {:?}, type {}", location, ty);
let instance = ::monomorphize::resolve_drop_in_place(*self.tcx, ty);
- self.drop_place(
+ self.drop_in_place(
place,
instance,
- ty,
terminator.source_info.span,
target,
)?;
target,
..
} => {
- let cond_val = self.eval_operand_to_scalar(cond)?.to_bool()?;
+ let cond_val = self.eval_operand_and_read_value(cond, None)?.to_scalar()?.to_bool()?;
if expected == cond_val {
self.goto_block(target);
} else {
use rustc::mir::interpret::EvalErrorKind::*;
return match *msg {
BoundsCheck { ref len, ref index } => {
- let len = self.eval_operand_to_scalar(len)
- .expect("can't eval len")
+ let len = self.eval_operand_and_read_value(len, None)
+ .expect("can't eval len").to_scalar()?
.to_bits(self.memory().pointer_size())? as u64;
- let index = self.eval_operand_to_scalar(index)
- .expect("can't eval index")
+ let index = self.eval_operand_and_read_value(index, None)
+ .expect("can't eval index").to_scalar()?
.to_bits(self.memory().pointer_size())? as u64;
err!(BoundsCheck { len, index })
}
fn eval_fn_call(
&mut self,
instance: ty::Instance<'tcx>,
- destination: Option<(Place, mir::BasicBlock)>,
- args: &[ValTy<'tcx>],
+ destination: Option<(PlaceTy<'tcx>, mir::BasicBlock)>,
+ args: &[OpTy<'tcx>],
span: Span,
sig: ty::FnSig<'tcx>,
) -> EvalResult<'tcx> {
trace!("eval_fn_call: {:#?}", instance);
+ if let Some((place, _)) = destination {
+ assert_eq!(place.layout.ty, sig.output());
+ }
match instance.def {
ty::InstanceDef::Intrinsic(..) => {
let (ret, target) = match destination {
Some(dest) => dest,
_ => return err!(Unreachable),
};
- let ty = sig.output();
- let layout = self.layout_of(ty)?;
- M::call_intrinsic(self, instance, args, ret, layout, target)?;
- self.dump_local(ret);
+ M::call_intrinsic(self, instance, args, ret, target)?;
+ self.dump_place(*ret);
Ok(())
}
// FIXME: figure out why we can't just go through the shim
ty::InstanceDef::ClosureOnceShim { .. } => {
- if M::eval_fn_call(self, instance, destination, args, span, sig)? {
+ if M::eval_fn_call(self, instance, destination, args, span)? {
return Ok(());
}
let mut arg_locals = self.frame().mir.args_iter();
match sig.abi {
// closure as closure once
Abi::RustCall => {
- for (arg_local, &valty) in arg_locals.zip(args) {
+ for (arg_local, &op) in arg_locals.zip(args) {
let dest = self.eval_place(&mir::Place::Local(arg_local))?;
- self.write_value(valty, dest)?;
+ self.copy_op(op, dest)?;
}
}
// non capture closure as fn ptr
// and need to pack arguments
Abi::Rust => {
trace!(
- "arg_locals: {:#?}",
- self.frame().mir.args_iter().collect::<Vec<_>>()
+ "args: {:#?}",
+ self.frame().mir.args_iter().zip(args.iter())
+ .map(|(local, arg)| (local, **arg, arg.layout.ty)).collect::<Vec<_>>()
);
- trace!("args: {:#?}", args);
let local = arg_locals.nth(1).unwrap();
- for (i, &valty) in args.into_iter().enumerate() {
+ for (i, &op) in args.into_iter().enumerate() {
let dest = self.eval_place(&mir::Place::Local(local).field(
mir::Field::new(i),
- valty.ty,
+ op.layout.ty,
))?;
- self.write_value(valty, dest)?;
+ self.copy_op(op, dest)?;
}
}
_ => bug!("bad ABI for ClosureOnceShim: {:?}", sig.abi),
ty::InstanceDef::CloneShim(..) |
ty::InstanceDef::Item(_) => {
// Push the stack frame, and potentially be entirely done if the call got hooked
- if M::eval_fn_call(self, instance, destination, args, span, sig)? {
+ if M::eval_fn_call(self, instance, destination, args, span)? {
+ // TODO: Can we make it return the frame to push, instead
+ // of the hook doing half of the work and us doing the argument
+ // initialization?
return Ok(());
}
let mut arg_locals = self.frame().mir.args_iter();
trace!("ABI: {:?}", sig.abi);
trace!(
- "arg_locals: {:#?}",
- self.frame().mir.args_iter().collect::<Vec<_>>()
+ "args: {:#?}",
+ self.frame().mir.args_iter().zip(args.iter())
+ .map(|(local, arg)| (local, **arg, arg.layout.ty)).collect::<Vec<_>>()
);
- trace!("args: {:#?}", args);
match sig.abi {
Abi::RustCall => {
assert_eq!(args.len(), 2);
// write first argument
let first_local = arg_locals.next().unwrap();
let dest = self.eval_place(&mir::Place::Local(first_local))?;
- self.write_value(args[0], dest)?;
+ self.copy_op(args[0], dest)?;
}
// unpack and write all other args
- let layout = self.layout_of(args[1].ty)?;
- if let ty::TyTuple(_) = args[1].ty.sty {
+ let layout = args[1].layout;
+ if let ty::TyTuple(_) = layout.ty.sty {
if layout.is_zst() {
// Nothing to do, no need to unpack zsts
return Ok(());
}
if self.frame().mir.args_iter().count() == layout.fields.count() + 1 {
for (i, arg_local) in arg_locals.enumerate() {
- let field = mir::Field::new(i);
- let (value, layout) = self.read_field(args[1].value, None, field, layout)?;
+ let arg = self.operand_field(args[1], i as u64)?;
let dest = self.eval_place(&mir::Place::Local(arg_local))?;
- let valty = ValTy {
- value,
- ty: layout.ty,
- };
- self.write_value(valty, dest)?;
+ self.copy_op(arg, dest)?;
}
} else {
trace!("manual impl of rust-call ABI");
let dest = self.eval_place(
&mir::Place::Local(arg_locals.next().unwrap()),
)?;
- self.write_value(args[1], dest)?;
+ self.copy_op(args[1], dest)?;
}
} else {
bug!(
- "rust-call ABI tuple argument was {:#?}, {:#?}",
- args[1].ty,
+ "rust-call ABI tuple argument was {:#?}",
layout
);
}
}
_ => {
- for (arg_local, &valty) in arg_locals.zip(args) {
+ for (arg_local, &op) in arg_locals.zip(args) {
let dest = self.eval_place(&mir::Place::Local(arg_local))?;
- self.write_value(valty, dest)?;
+ self.copy_op(op, dest)?;
}
}
}
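The `Abi::RustCall` branch above copies the first operand into the first argument local and unpacks the second operand, a tuple, into the remaining locals. A sketch of the calling convention this implements, at the surface-Rust level (the `dispatch` helper is an illustrative stand-in, not interpreter code):

```rust
// rust-call ABI sketch: the caller passes exactly two operands, the callable
// and one tuple holding the real arguments; the callee receives them unpacked.
fn dispatch<F: Fn(u32, u32) -> u32>(f: F, packed: (u32, u32)) -> u32 {
    // conceptually: args = [f, packed]; the interpreter copies packed.0 and
    // packed.1 into the callee's separate argument locals
    f(packed.0, packed.1)
}

fn main() {
    assert_eq!(dispatch(|a, b| a + b, (2, 3)), 5);
    println!("ok");
}
```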
ty::InstanceDef::Virtual(_, idx) => {
let ptr_size = self.memory.pointer_size();
let ptr_align = self.tcx.data_layout.pointer_align;
- let (ptr, vtable) = self.into_ptr_vtable_pair(args[0].value)?;
+ let (ptr, vtable) = self.read_value(args[0])?.to_scalar_dyn_trait()?;
let fn_ptr = self.memory.read_ptr_sized(
vtable.offset(ptr_size * (idx as u64 + 3), &self)?,
ptr_align
- )?.unwrap_or_err()?.to_ptr()?;
+ )?.to_ptr()?;
let instance = self.memory.get_fn(fn_ptr)?;
+
+ // We have to patch the self argument, in particular get the layout
+ // expected by the actual function. Cannot just use "field 0" due to
+ // Box<Self>.

let mut args = args.to_vec();
- let ty = self.layout_of(args[0].ty)?.field(&self, 0)?.ty;
- args[0].ty = ty;
- args[0].value = Value::Scalar(ptr);
+ let pointee = args[0].layout.ty.builtin_deref(true).unwrap().ty;
+ let fake_fat_ptr_ty = self.tcx.mk_mut_ptr(pointee);
+ args[0].layout = self.layout_of(fake_fat_ptr_ty)?.field(&self, 0)?;
+ args[0].op = Operand::Immediate(Value::Scalar(ptr.into())); // strip vtable
+ trace!("Patched self operand to {:#?}", args[0]);
// recurse with concrete function
self.eval_fn_call(instance, destination, &args, span, sig)
}
let drop = ::monomorphize::resolve_drop_in_place(*self.tcx, ty);
let drop = self.memory.create_fn_alloc(drop);
- self.memory.write_ptr_sized_unsigned(vtable, ptr_align, Scalar::Ptr(drop).into())?;
+ self.memory.write_ptr_sized(vtable, ptr_align, Scalar::Ptr(drop).into())?;
let size_ptr = vtable.offset(ptr_size, &self)?;
- self.memory.write_ptr_sized_unsigned(size_ptr, ptr_align, Scalar::Bits {
+ self.memory.write_ptr_sized(size_ptr, ptr_align, Scalar::Bits {
bits: size as u128,
size: ptr_size.bytes() as u8,
}.into())?;
let align_ptr = vtable.offset(ptr_size * 2, &self)?;
- self.memory.write_ptr_sized_unsigned(align_ptr, ptr_align, Scalar::Bits {
+ self.memory.write_ptr_sized(align_ptr, ptr_align, Scalar::Bits {
bits: align as u128,
size: ptr_size.bytes() as u8,
}.into())?;
let instance = self.resolve(def_id, substs)?;
let fn_ptr = self.memory.create_fn_alloc(instance);
let method_ptr = vtable.offset(ptr_size * (3 + i as u64), &self)?;
- self.memory.write_ptr_sized_unsigned(method_ptr, ptr_align, Scalar::Ptr(fn_ptr).into())?;
+ self.memory.write_ptr_sized(method_ptr, ptr_align, Scalar::Ptr(fn_ptr).into())?;
}
}
) -> EvalResult<'tcx, ty::Instance<'tcx>> {
// we don't care about the pointee type, we just want a pointer
let pointer_align = self.tcx.data_layout.pointer_align;
- let drop_fn = self.memory.read_ptr_sized(vtable, pointer_align)?.unwrap_or_err()?.to_ptr()?;
+ let drop_fn = self.memory.read_ptr_sized(vtable, pointer_align)?.to_ptr()?;
self.memory.get_fn(drop_fn)
}
) -> EvalResult<'tcx, (Size, Align)> {
let pointer_size = self.memory.pointer_size();
let pointer_align = self.tcx.data_layout.pointer_align;
- let size = self.memory.read_ptr_sized(vtable.offset(pointer_size, self)?, pointer_align)?.unwrap_or_err()?.to_bits(pointer_size)? as u64;
+ let size = self.memory.read_ptr_sized(vtable.offset(pointer_size, self)?, pointer_align)?.to_bits(pointer_size)? as u64;
let align = self.memory.read_ptr_sized(
vtable.offset(pointer_size * 2, self)?,
pointer_align
- )?.unwrap_or_err()?.to_bits(pointer_size)? as u64;
+ )?.to_bits(pointer_size)? as u64;
Ok((Size::from_bytes(size), Align::from_bytes(align, align).unwrap()))
}
}
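The vtable reads and writes above all address fixed slots: drop glue at offset 0, size at one pointer-size, align at two, and trait methods starting at slot 3 (which is why the virtual call reads at `ptr_size * (idx + 3)`). A minimal sketch of those offsets, assuming a 64-bit target; `VtableSlot` and the helper are illustrative names, not interpreter API:

```rust
// The vtable layout implied by the code above: drop-glue pointer first,
// then size and align, then the trait methods starting at slot 3.
#[derive(Copy, Clone)]
enum VtableSlot {
    DropInPlace,
    Size,
    Align,
    Method(u64), // index within the trait's method list
}

fn vtable_slot_offset(ptr_size: u64, slot: VtableSlot) -> u64 {
    match slot {
        VtableSlot::DropInPlace => 0,
        VtableSlot::Size => ptr_size,
        VtableSlot::Align => ptr_size * 2,
        VtableSlot::Method(i) => ptr_size * (3 + i),
    }
}

fn main() {
    let ptr_size = 8; // assumption: 64-bit target
    assert_eq!(vtable_slot_offset(ptr_size, VtableSlot::Size), 8);
    assert_eq!(vtable_slot_offset(ptr_size, VtableSlot::Align), 16);
    // a virtual call on method index 2 reads the pointer at slot 3 + 2
    assert_eq!(vtable_slot_offset(ptr_size, VtableSlot::Method(2)), 40);
    println!("ok");
}
```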
--- /dev/null
+use std::fmt::Write;
+
+use syntax_pos::symbol::Symbol;
+use rustc::ty::layout::{self, Size, Primitive};
+use rustc::ty::{self, Ty};
+use rustc_data_structures::fx::FxHashSet;
+use rustc::mir::interpret::{
+ Scalar, AllocType, EvalResult, ScalarMaybeUndef, EvalErrorKind
+};
+
+use super::{
+ MPlaceTy, Machine, EvalContext
+};
+
+macro_rules! validation_failure{
+ ($what:expr, $where:expr, $details:expr) => {{
+ let where_ = path_format($where);
+ let where_ = if where_.is_empty() {
+ String::new()
+ } else {
+ format!(" at {}", where_)
+ };
+ err!(ValidationFailure(format!(
+ "encountered {}{}, but expected {}",
+ $what, where_, $details,
+ )))
+ }};
+ ($what:expr, $where:expr) => {{
+ let where_ = path_format($where);
+ let where_ = if where_.is_empty() {
+ String::new()
+ } else {
+ format!(" at {}", where_)
+ };
+ err!(ValidationFailure(format!(
+ "encountered {}{}",
+ $what, where_,
+ )))
+ }};
+}
+
+/// We want to show a nice path to the invalid field for diagnostics,
+/// but avoid string operations in the happy case where no error happens.
+/// So we track a `Vec<PathElem>` where `PathElem` contains all the data we
+/// need to later print something for the user.
+#[derive(Copy, Clone, Debug)]
+pub enum PathElem {
+ Field(Symbol),
+ ClosureVar(Symbol),
+ ArrayElem(usize),
+ TupleElem(usize),
+ Deref,
+ Tag,
+}
+
+// Adding a `Deref` and making a copy of the path to be put into the queue
+// always go together. This helper does both with only one new allocation.
+fn path_clone_and_deref(path: &Vec<PathElem>) -> Vec<PathElem> {
+ let mut new_path = Vec::with_capacity(path.len()+1);
+ new_path.clone_from(path);
+ new_path.push(PathElem::Deref);
+ new_path
+}
+
+/// Format a path
+fn path_format(path: &Vec<PathElem>) -> String {
+ use self::PathElem::*;
+
+ let mut out = String::new();
+ for elem in path.iter() {
+ match elem {
+ Field(name) => write!(out, ".{}", name).unwrap(),
+ ClosureVar(name) => write!(out, ".<closure-var({})>", name).unwrap(),
+ TupleElem(idx) => write!(out, ".{}", idx).unwrap(),
+ ArrayElem(idx) => write!(out, "[{}]", idx).unwrap(),
+ Deref =>
+ // This does not match Rust syntax, but it is more readable for long paths -- and
+ // some of the other items here also are not Rust syntax. Actually we can't
+ // even use the usual syntax because we are just showing the projections,
+ // not the root.
+ write!(out, ".<deref>").unwrap(),
+ Tag => write!(out, ".<enum-tag>").unwrap(),
+ }
+ }
+ out
+}
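The path rendering above can be sketched outside the interpreter. This simplified standalone version (using `String` in place of `Symbol`, an assumption made only so the example is self-contained) mirrors the projection syntax, including the non-Rust `.<deref>` and `.<enum-tag>` markers:

```rust
use std::fmt::Write;

// Simplified stand-in for the interpreter's PathElem (Symbol replaced by
// String, ClosureVar omitted for brevity).
#[derive(Clone, Debug)]
enum PathElem {
    Field(String),
    ArrayElem(usize),
    TupleElem(usize),
    Deref,
    Tag,
}

// Mirror of path_format: render only the projections, not the root place.
fn path_format(path: &[PathElem]) -> String {
    let mut out = String::new();
    for elem in path {
        match elem {
            PathElem::Field(name) => write!(out, ".{}", name).unwrap(),
            PathElem::TupleElem(idx) => write!(out, ".{}", idx).unwrap(),
            PathElem::ArrayElem(idx) => write!(out, "[{}]", idx).unwrap(),
            PathElem::Deref => write!(out, ".<deref>").unwrap(),
            PathElem::Tag => write!(out, ".<enum-tag>").unwrap(),
        }
    }
    out
}

fn main() {
    let path = vec![
        PathElem::Field("inner".to_string()),
        PathElem::Deref,
        PathElem::ArrayElem(3),
    ];
    assert_eq!(path_format(&path), ".inner.<deref>[3]");
    println!("{}", path_format(&path));
}
```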
+
+impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
+ fn validate_scalar(
+ &self,
+ value: ScalarMaybeUndef,
+ size: Size,
+ scalar: &layout::Scalar,
+ path: &Vec<PathElem>,
+ ty: Ty,
+ ) -> EvalResult<'tcx> {
+ trace!("validate scalar: {:#?}, {:#?}, {:#?}, {}", value, size, scalar, ty);
+ let (lo, hi) = scalar.valid_range.clone().into_inner();
+
+ let value = match value {
+ ScalarMaybeUndef::Scalar(scalar) => scalar,
+ ScalarMaybeUndef::Undef => return validation_failure!("undefined bytes", path),
+ };
+
+ let bits = match value {
+ Scalar::Bits { bits, size: value_size } => {
+ assert_eq!(value_size as u64, size.bytes());
+ bits
+ },
+ Scalar::Ptr(_) => {
+ let ptr_size = self.memory.pointer_size();
+ let ptr_max = u128::max_value() >> (128 - ptr_size.bits());
+ return if lo > hi {
+ if lo - hi == 1 {
+ // no gap, all values are ok
+ Ok(())
+ } else if hi < ptr_max || lo > 1 {
+ let max = u128::max_value() >> (128 - size.bits());
+ validation_failure!(
+ "pointer",
+ path,
+ format!("something in the range {:?} or {:?}", 0..=lo, hi..=max)
+ )
+ } else {
+ Ok(())
+ }
+ } else if hi < ptr_max || lo > 1 {
+ validation_failure!(
+ "pointer",
+ path,
+ format!("something in the range {:?}", scalar.valid_range)
+ )
+ } else {
+ Ok(())
+ };
+ },
+ };
+
+ // `char` gets special treatment, because its value space is not contiguous,
+ // so `TyLayout` has no built-in checks for it
+ match ty.sty {
+ ty::TyChar => {
+ debug_assert_eq!(size.bytes(), 4);
+ if ::std::char::from_u32(bits as u32).is_none() {
+ return validation_failure!(
+ "character",
+ path,
+ "a valid unicode codepoint"
+ );
+ }
+ }
+ _ => {},
+ }
+
+ use std::ops::RangeInclusive;
+ let in_range = |bound: RangeInclusive<u128>| bound.contains(&bits);
+ if lo > hi {
+ if in_range(0..=hi) || in_range(lo..=u128::max_value()) {
+ Ok(())
+ } else {
+ validation_failure!(
+ bits,
+ path,
+ format!("something in the range {:?} or {:?}", ..=hi, lo..)
+ )
+ }
+ } else {
+ if in_range(scalar.valid_range.clone()) {
+ Ok(())
+ } else {
+ validation_failure!(
+ bits,
+ path,
+ format!("something in the range {:?}", scalar.valid_range)
+ )
+ }
+ }
+ }
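The range check that `validate_scalar` performs handles wrapping ranges: when `lo > hi`, the valid values wrap around the integer boundary (as with niche-filling enum layouts), so a value is valid if it lies in `0..=hi` or `lo..=MAX`. A minimal standalone sketch of that check (`scalar_in_valid_range` is an illustrative helper, not interpreter API):

```rust
use std::ops::RangeInclusive;

// Check whether `bits` lies in a possibly wrapping inclusive range, as the
// validator does: if lo > hi the range wraps, so the valid values are
// 0..=hi together with lo..=u128::MAX.
fn scalar_in_valid_range(bits: u128, valid_range: &RangeInclusive<u128>) -> bool {
    let (lo, hi) = (*valid_range.start(), *valid_range.end());
    if lo > hi {
        bits <= hi || bits >= lo
    } else {
        lo <= bits && bits <= hi
    }
}

fn main() {
    // A niche-style wrapping range on a 1-byte scalar: {254, 255, 0, 1} valid.
    let niche: RangeInclusive<u128> = 254..=1;
    assert!(scalar_in_valid_range(0, &niche));
    assert!(scalar_in_valid_range(255, &niche));
    assert!(!scalar_in_valid_range(2, &niche));
    assert!(!scalar_in_valid_range(200, &niche));

    // A non-wrapping range, as for bool.
    let bool_range: RangeInclusive<u128> = 0..=1;
    assert!(scalar_in_valid_range(1, &bool_range));
    assert!(!scalar_in_valid_range(2, &bool_range));
    println!("ok");
}
```

Note that a wrapping range where `lo - hi == 1` covers every value, which is exactly the "no gap, all values are ok" case the pointer branch above short-circuits on.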
+
+ /// This function checks the memory that `dest` points to. The place must be sized
+ /// (i.e., dest.extra == PlaceExtra::None).
+ /// It will error if the bits at the destination do not match the ones described by the layout.
+ /// The `path` may be pushed to, but the part that is present when the function
+ /// starts must not be changed!
+ pub fn validate_mplace(
+ &self,
+ dest: MPlaceTy<'tcx>,
+ path: &mut Vec<PathElem>,
+ seen: &mut FxHashSet<MPlaceTy<'tcx>>,
+ todo: &mut Vec<(MPlaceTy<'tcx>, Vec<PathElem>)>,
+ ) -> EvalResult<'tcx> {
+ self.memory.dump_alloc(dest.to_ptr()?.alloc_id);
+ trace!("validate_mplace: {:?}, {:#?}", *dest, dest.layout);
+
+ // Find the right variant. We have to handle this as a prelude, not via
+ // proper recursion with the new inner layout, to be able to later nicely
+ // print the field names of the enum field that is being accessed.
+ let (variant, dest) = match dest.layout.variants {
+ layout::Variants::NicheFilling { niche: ref tag, .. } |
+ layout::Variants::Tagged { ref tag, .. } => {
+ let size = tag.value.size(self);
+ // we first read the tag value as scalar, to be able to validate it
+ let tag_mplace = self.mplace_field(dest, 0)?;
+ let tag_value = self.read_scalar(tag_mplace.into())?;
+ path.push(PathElem::Tag);
+ self.validate_scalar(
+ tag_value, size, tag, &path, tag_mplace.layout.ty
+ )?;
+ path.pop(); // remove the element again
+ // then we read it again to get the index, to continue
+ let variant = self.read_discriminant_as_variant_index(dest.into())?;
+ let inner_dest = self.mplace_downcast(dest, variant)?;
+ // Put the variant projection onto the path, as a field
+ path.push(PathElem::Field(dest.layout.ty.ty_adt_def().unwrap().variants[variant].name));
+ trace!("variant layout: {:#?}", dest.layout);
+ (variant, inner_dest)
+ },
+ layout::Variants::Single { index } => {
+ (index, dest)
+ }
+ };
+
+ // Remember the length, in case we need to truncate
+ let path_len = path.len();
+
+ // Validate all fields
+ match dest.layout.fields {
+ // primitives are unions with zero fields
+ // We still check `layout.fields`, not `layout.abi`, because `layout.abi`
+ // is `Scalar` for newtypes around scalars, but we want to descend through the
+ // fields to get a proper `path`.
+ layout::FieldPlacement::Union(0) => {
+ match dest.layout.abi {
+ // an uninhabited type has no valid bit pattern at all, so any value here is an error
+ layout::Abi::Uninhabited =>
+ return validation_failure!("a value of an uninhabited type", path),
+ // check that the scalar is a valid pointer or that its bit range matches the
+ // expectation.
+ layout::Abi::Scalar(ref scalar_layout) => {
+ let size = scalar_layout.value.size(self);
+ let value = self.read_value(dest.into())?;
+ let scalar = value.to_scalar_or_undef();
+ self.validate_scalar(scalar, size, scalar_layout, &path, dest.layout.ty)?;
+ if scalar_layout.value == Primitive::Pointer {
+ // ignore integer pointers, we can't reason about the final hardware
+ if let Scalar::Ptr(ptr) = scalar.not_undef()? {
+ let alloc_kind = self.tcx.alloc_map.lock().get(ptr.alloc_id);
+ if let Some(AllocType::Static(did)) = alloc_kind {
+ // statics from other crates are already checked.
+ // extern statics should not be validated as they have no body.
+ if !did.is_local() || self.tcx.is_foreign_item(did) {
+ return Ok(());
+ }
+ }
+ if value.layout.ty.builtin_deref(false).is_some() {
+ trace!("Recursing below ptr {:#?}", value);
+ let ptr_place = self.ref_to_mplace(value)?;
+ // we have not encountered this pointer+layout combination before
+ if seen.insert(ptr_place) {
+ todo.push((ptr_place, path_clone_and_deref(path)));
+ }
+ }
+ }
+ }
+ },
+ _ => bug!("bad abi for FieldPlacement::Union(0): {:#?}", dest.layout.abi),
+ }
+ }
+ layout::FieldPlacement::Union(_) => {
+ // We can't check unions, their bits are allowed to be anything.
+ // The fields don't need to correspond to any bit pattern of the union's fields.
+ // See https://github.com/rust-lang/rust/issues/32836#issuecomment-406875389
+ },
+ layout::FieldPlacement::Array { .. } => {
+ for (i, field) in self.mplace_array_fields(dest)?.enumerate() {
+ let field = field?;
+ path.push(PathElem::ArrayElem(i));
+ self.validate_mplace(field, path, seen, todo)?;
+ path.truncate(path_len);
+ }
+ },
+ layout::FieldPlacement::Arbitrary { ref offsets, .. } => {
+ // Fat pointers need special treatment.
+ if dest.layout.ty.builtin_deref(true).is_some() {
+ // This is a fat pointer.
+ let ptr = match self.ref_to_mplace(self.read_value(dest.into())?) {
+ Ok(ptr) => ptr,
+ Err(err) => match err.kind {
+ EvalErrorKind::ReadPointerAsBytes =>
+ return validation_failure!(
+ "fat pointer length is not a valid integer", path
+ ),
+ EvalErrorKind::ReadBytesAsPointer =>
+ return validation_failure!(
+ "fat pointer vtable is not a valid pointer", path
+ ),
+ _ => return Err(err),
+ }
+ };
+ let unpacked_ptr = self.unpack_unsized_mplace(ptr)?;
+ // for safe ptrs, recursively check it
+ if !dest.layout.ty.is_unsafe_ptr() {
+ trace!("Recursing below fat ptr {:?} (unpacked: {:?})", ptr, unpacked_ptr);
+ if seen.insert(unpacked_ptr) {
+ todo.push((unpacked_ptr, path_clone_and_deref(path)));
+ }
+ }
+ } else {
+ // Not a pointer, perform regular aggregate handling below
+ for i in 0..offsets.len() {
+ let field = self.mplace_field(dest, i as u64)?;
+ path.push(self.aggregate_field_path_elem(dest.layout.ty, variant, i));
+ self.validate_mplace(field, path, seen, todo)?;
+ path.truncate(path_len);
+ }
+ // FIXME: For a TyStr, check that this is valid UTF-8.
+ }
+ }
+ }
+ Ok(())
+ }
+
+ fn aggregate_field_path_elem(&self, ty: Ty<'tcx>, variant: usize, field: usize) -> PathElem {
+ match ty.sty {
+ // generators and closures.
+ ty::TyClosure(def_id, _) | ty::TyGenerator(def_id, _, _) => {
+ let node_id = self.tcx.hir.as_local_node_id(def_id).unwrap();
+ let freevar = self.tcx.with_freevars(node_id, |fv| fv[field]);
+ PathElem::ClosureVar(self.tcx.hir.name(freevar.var_id()))
+ }
+
+ // tuples
+ ty::TyTuple(_) => PathElem::TupleElem(field),
+
+ // enums
+ ty::TyAdt(def, ..) if def.is_enum() => {
+ let variant = &def.variants[variant];
+ PathElem::Field(variant.fields[field].ident.name)
+ }
+
+ // other ADTs
+ ty::TyAdt(def, _) => PathElem::Field(def.non_enum_variant().fields[field].ident.name),
+
+ // nothing else has an aggregate layout
+ _ => bug!("aggregate_field_path_elem: got non-aggregate type {:?}", ty),
+ }
+ }
+}
#![feature(const_fn)]
#![feature(core_intrinsics)]
#![feature(decl_macro)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(exhaustive_patterns)]
#![feature(range_contains)]
#![feature(rustc_diagnostic_macros)]
#![feature(unicode_internals)]
#![feature(step_trait)]
#![feature(slice_concat_ext)]
+#![feature(if_while_or_patterns)]
+#![feature(try_from)]
#![recursion_limit="256"]
shim::provide(providers);
transform::provide(providers);
providers.const_eval = interpret::const_eval_provider;
- providers.const_value_to_allocation = interpret::const_value_to_allocation_provider;
+ providers.const_to_allocation = interpret::const_to_allocation_provider;
providers.check_match = hair::pattern::check_match;
}
use rustc::mir::{NullOp, StatementKind, Statement, BasicBlock, LocalKind};
use rustc::mir::{TerminatorKind, ClearCrossCrate, SourceInfo, BinOp, ProjectionElem};
use rustc::mir::visit::{Visitor, PlaceContext};
-use rustc::mir::interpret::{ConstEvalErr, EvalErrorKind, ScalarMaybeUndef};
+use rustc::mir::interpret::{
+ ConstEvalErr, EvalErrorKind, ScalarMaybeUndef, Scalar, GlobalId, EvalResult
+};
use rustc::ty::{TyCtxt, self, Instance};
-use rustc::mir::interpret::{Value, Scalar, GlobalId, EvalResult};
-use interpret::EvalContext;
-use interpret::CompileTimeEvaluator;
-use interpret::{eval_promoted, mk_borrowck_eval_cx, ValTy};
+use interpret::{EvalContext, CompileTimeEvaluator, eval_promoted, mk_borrowck_eval_cx};
+use interpret::{Value, OpTy, MemoryKind};
use transform::{MirPass, MirSource};
use syntax::source_map::{Span, DUMMY_SP};
use rustc::ty::subst::Substs;
-use rustc_data_structures::indexed_vec::IndexVec;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
use rustc::ty::ParamEnv;
use rustc::ty::layout::{
LayoutOf, TyLayout, LayoutError,
}
}
-type Const<'tcx> = (Value, TyLayout<'tcx>, Span);
+type Const<'tcx> = (OpTy<'tcx>, Span);
/// Finds optimization opportunities on the MIR.
struct ConstPropagator<'b, 'a, 'tcx:'a+'b> {
source_info: SourceInfo,
) -> Option<Const<'tcx>> {
self.ecx.tcx.span = source_info.span;
- match self.ecx.const_to_value(c.literal.val) {
- Ok(val) => {
+ match self.ecx.const_value_to_op(c.literal.val) {
+ Ok(op) => {
let layout = self.tcx.layout_of(self.param_env.and(c.literal.ty)).ok()?;
- Some((val, layout, c.span))
+ Some((OpTy { op, layout }, c.span))
},
Err(error) => {
let (stacktrace, span) = self.ecx.generate_stacktrace(None);
Place::Projection(ref proj) => match proj.elem {
ProjectionElem::Field(field, _) => {
trace!("field proj on {:?}", proj.base);
- let (base, layout, span) = self.eval_place(&proj.base, source_info)?;
- let valty = self.use_ecx(source_info, |this| {
- this.ecx.read_field(base, None, field, layout)
+ let (base, span) = self.eval_place(&proj.base, source_info)?;
+ let res = self.use_ecx(source_info, |this| {
+ this.ecx.operand_field(base, field.index() as u64)
})?;
- Some((valty.0, valty.1, span))
+ Some((res, span))
},
+ // We could get more projections by using e.g. `operand_projection`,
+ // but we do not even have the stack frame set up properly so
+ // an `Index` projection would throw us off-track.
_ => None,
},
Place::Promoted(ref promoted) => {
};
// cannot use `const_eval` here, because that would require having the MIR
// for the current function available, but we're producing said MIR right now
- let (value, _, ty) = self.use_ecx(source_info, |this| {
+ let res = self.use_ecx(source_info, |this| {
eval_promoted(&mut this.ecx, cid, this.mir, this.param_env)
})?;
- let val = (value, ty, source_info.span);
- trace!("evaluated promoted {:?} to {:?}", promoted, val);
- Some(val)
+ trace!("evaluated promoted {:?} to {:?}", promoted, res);
+ Some((res, source_info.span))
},
_ => None,
}
Rvalue::Discriminant(..) => None,
Rvalue::Cast(kind, ref operand, _) => {
- let (value, layout, span) = self.eval_operand(operand, source_info)?;
+ let (op, span) = self.eval_operand(operand, source_info)?;
self.use_ecx(source_info, |this| {
- let dest_ptr = this.ecx.alloc_ptr(place_layout)?;
- let place_align = place_layout.align;
- let dest = ::interpret::Place::from_ptr(dest_ptr, place_align);
- this.ecx.cast(ValTy { value, ty: layout.ty }, kind, place_layout.ty, dest)?;
- Ok((
- Value::ByRef(dest_ptr.into(), place_align),
- place_layout,
- span,
- ))
+ let dest = this.ecx.allocate(place_layout, MemoryKind::Stack)?;
+ this.ecx.cast(op, kind, dest.into())?;
+ Ok((dest.into(), span))
})
}
Rvalue::Len(_) => None,
Rvalue::NullaryOp(NullOp::SizeOf, ty) => {
type_size_of(self.tcx, self.param_env, ty).and_then(|n| Some((
- Value::Scalar(Scalar::Bits {
- bits: n as u128,
- size: self.tcx.data_layout.pointer_size.bytes() as u8,
- }.into()),
- self.tcx.layout_of(self.param_env.and(self.tcx.types.usize)).ok()?,
+ OpTy::from_scalar_value(
+ Scalar::Bits {
+ bits: n as u128,
+ size: self.tcx.data_layout.pointer_size.bytes() as u8,
+ },
+ self.tcx.layout_of(self.param_env.and(self.tcx.types.usize)).ok()?,
+ ),
span,
)))
}
return None;
}
- let val = self.eval_operand(arg, source_info)?;
- let prim = self.use_ecx(source_info, |this| {
- this.ecx.value_to_scalar(ValTy { value: val.0, ty: val.1.ty })
+ let (arg, _) = self.eval_operand(arg, source_info)?;
+ let val = self.use_ecx(source_info, |this| {
+ let prim = this.ecx.read_scalar(arg)?.not_undef()?;
+ this.ecx.unary_op(op, prim, arg.layout)
})?;
- let val = self.use_ecx(source_info, |this| this.ecx.unary_op(op, prim, val.1))?;
- Some((Value::Scalar(val.into()), place_layout, span))
+ Some((OpTy::from_scalar_value(val, place_layout), span))
}
Rvalue::CheckedBinaryOp(op, ref left, ref right) |
Rvalue::BinaryOp(op, ref left, ref right) => {
}
let r = self.use_ecx(source_info, |this| {
- this.ecx.value_to_scalar(ValTy { value: right.0, ty: right.1.ty })
+ this.ecx.read_value(right.0)
})?;
if op == BinOp::Shr || op == BinOp::Shl {
let left_ty = left.ty(self.mir, self.tcx);
.unwrap()
.size
.bits();
- let right_size = right.1.size;
- if r.to_bits(right_size).ok().map_or(false, |b| b >= left_bits as u128) {
+ let right_size = right.0.layout.size;
+ let r_bits = r.to_scalar().and_then(|r| r.to_bits(right_size));
+ if r_bits.ok().map_or(false, |b| b >= left_bits as u128) {
let source_scope_local_data = match self.mir.source_scope_local_data {
ClearCrossCrate::Set(ref data) => data,
ClearCrossCrate::Clear => return None,
}
let left = self.eval_operand(left, source_info)?;
let l = self.use_ecx(source_info, |this| {
- this.ecx.value_to_scalar(ValTy { value: left.0, ty: left.1.ty })
+ this.ecx.read_value(left.0)
})?;
trace!("const evaluating {:?} for {:?} and {:?}", op, left, right);
let (val, overflow) = self.use_ecx(source_info, |this| {
- this.ecx.binary_op(op, l, left.1.ty, r, right.1.ty)
+ this.ecx.binary_op(op, l, r)
})?;
let val = if let Rvalue::CheckedBinaryOp(..) = *rvalue {
Value::ScalarPair(
}
Value::Scalar(val.into())
};
- Some((val, place_layout, span))
+ let res = OpTy {
+ op: ::interpret::Operand::Immediate(val),
+ layout: place_layout,
+ };
+ Some((res, span))
},
}
}
if let TerminatorKind::Assert { expected, msg, cond, .. } = kind {
if let Some(value) = self.eval_operand(cond, source_info) {
trace!("assertion on {:?} should be {:?}", value, expected);
- if Value::Scalar(Scalar::from_bool(*expected).into()) != value.0 {
+ let expected = Value::Scalar(Scalar::from_bool(*expected).into());
+ if expected != value.0.to_immediate() {
// poison all places this operand references so that further code
// doesn't use the invalid value
match cond {
let len = self
.eval_operand(len, source_info)
.expect("len must be const");
- let len = match len.0 {
+ let len = match len.0.to_immediate() {
Value::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits {
bits, ..
})) => bits,
let index = self
.eval_operand(index, source_info)
.expect("index must be const");
- let index = match index.0 {
+ let index = match index.0.to_immediate() {
Value::Scalar(ScalarMaybeUndef::Scalar(Scalar::Bits {
bits, ..
})) => bits,
*unwind = Some(self.update_target(tgt));
} else if !self.in_cleanup_block {
// Unless this drop is in a cleanup block, add an unwind edge to
- // the orignal call's cleanup block
+ // the original call's cleanup block
*unwind = self.cleanup_block;
}
}
*cleanup = Some(self.update_target(tgt));
} else if !self.in_cleanup_block {
// Unless this call is in a cleanup block, add an unwind edge to
- // the orignal call's cleanup block
+ // the original call's cleanup block
*cleanup = self.cleanup_block;
}
}
*cleanup = Some(self.update_target(tgt));
} else if !self.in_cleanup_block {
// Unless this assert is in a cleanup block, add an unwind edge to
- // the orignal call's cleanup block
+ // the original call's cleanup block
*cleanup = self.cleanup_block;
}
}
let ref mut statement = blocks[loc.block].statements[loc.statement_index];
match statement.kind {
StatementKind::Assign(_, Rvalue::Ref(_, _, ref mut place)) => {
- // Find the underlying local for this (necessarilly interior) borrow.
+ // Find the underlying local for this (necessarily interior) borrow.
// HACK(eddyb) using a recursive function because of mutable borrows.
fn interior_base<'a, 'tcx>(place: &'a mut Place<'tcx>)
-> &'a mut Place<'tcx> {
let local_use = &visitor.locals_use[*local];
let opt_index_and_place = Self::try_get_item_source(local_use, mir);
// each local should be used twice:
- // in assign and in aggregate statments
+ // in assign and in aggregate statements
if local_use.use_count == 2 && opt_index_and_place.is_some() {
let (index, src_place) = opt_index_and_place.unwrap();
return Some((local_use, index, src_place));
if opt_size.is_some() && items.iter().all(
|l| l.is_some() && l.unwrap().2 == opt_src_place.unwrap()) {
- let indicies: Vec<_> = items.iter().map(|x| x.unwrap().1).collect();
- for i in 1..indicies.len() {
- if indicies[i - 1] + 1 != indicies[i] {
+ let indices: Vec<_> = items.iter().map(|x| x.unwrap().1).collect();
+ for i in 1..indices.len() {
+ if indices[i - 1] + 1 != indices[i] {
return;
}
}
- let min = *indicies.first().unwrap();
- let max = *indicies.last().unwrap();
+ let min = *indices.first().unwrap();
+ let max = *indices.last().unwrap();
for item in items {
let locals_use = item.unwrap().0;
let indent = depth * INDENT.len();
let children = match scope_tree.get(&parent) {
- Some(childs) => childs,
+ Some(children) => children,
None => return Ok(()),
};
fn visit_generic_args(&mut self, _: Span, generic_args: &'a GenericArgs) {
match *generic_args {
GenericArgs::AngleBracketed(ref data) => {
- data.args.iter().for_each(|arg| match arg {
- GenericArg::Type(ty) => self.visit_ty(ty),
- _ => {}
- });
+ for arg in &data.args {
+ self.visit_generic_arg(arg)
+ }
for type_binding in &data.bindings {
// Type bindings such as `Item=impl Debug` in `Iterator<Item=Debug>`
// are allowed to contain nested `impl Trait`.
extern crate syntax_pos;
extern crate rustc_data_structures;
-use rustc::hir::{self, GenericParamKind, PatKind};
+use rustc::hir::{self, PatKind};
use rustc::hir::def::Def;
use rustc::hir::def_id::{CRATE_DEF_INDEX, LOCAL_CRATE, CrateNum, DefId};
use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap};
}
fn visit_generics(&mut self, generics: &'tcx hir::Generics) {
- generics.params.iter().for_each(|param| match param.kind {
- GenericParamKind::Lifetime { .. } => {}
- GenericParamKind::Type { .. } => {
- for bound in ¶m.bounds {
- self.check_generic_bound(bound);
- }
+ for param in &generics.params {
+ for bound in ¶m.bounds {
+ self.check_generic_bound(bound);
}
- });
+ }
for predicate in &generics.where_clause.predicates {
match predicate {
&hir::WherePredicate::BoundPredicate(ref bound_pred) => {
use rustc_data_structures::sync::Lrc;
use resolve_imports::{ImportDirective, ImportDirectiveSubclass, NameResolution, ImportResolver};
-use macros::{InvocationData, LegacyBinding, MacroBinding};
+use macros::{InvocationData, LegacyBinding};
// NB: This module needs to be declared first so diagnostics are
// registered before they are used.
if let Some(impl_span) = maybe_impl_defid.map_or(None,
|def_id| resolver.definitions.opt_span(def_id)) {
err.span_label(reduce_impl_span_to_impl_keyword(cm, impl_span),
- "`Self` type implicitely declared here, on the `impl`");
+ "`Self` type implicitly declared here, on the `impl`");
}
},
Def::TyParam(typaram_defid) => {
.filter_map(|param| match param.kind {
GenericParamKind::Lifetime { .. } => None,
GenericParamKind::Type { ref default, .. } => {
- if found_default || default.is_some() {
- found_default = true;
- return Some((Ident::with_empty_ctxt(param.ident.name), Def::Err));
+ found_default |= default.is_some();
+ if found_default {
+ Some((Ident::with_empty_ctxt(param.ident.name), Def::Err))
+ } else {
+ None
}
- None
}
}));
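The `found_default |= default.is_some()` change above replaces a set-and-early-return with a saturating boolean fold: once one parameter with a default is seen, every later parameter is flagged too. A minimal standalone sketch of the same pattern (function and names here are illustrative, not rustc's):

```rust
// Return the indices of all parameters at or after the first one
// that carries a default value.
fn params_after_first_default(defaults: &[Option<i32>]) -> Vec<usize> {
    let mut found_default = false;
    defaults
        .iter()
        .enumerate()
        .filter_map(|(i, default)| {
            // Saturating OR: once true, stays true for the rest of the scan,
            // mirroring `found_default |= default.is_some()` in the diff.
            found_default |= default.is_some();
            if found_default { Some(i) } else { None }
        })
        .collect()
}

fn main() {
    // Parameter 1 introduces the first default, so 1, 2, 3 are all flagged.
    assert_eq!(params_after_first_default(&[None, Some(0), None, Some(1)]),
               vec![1, 2, 3]);
    assert_eq!(params_after_first_default(&[None, None]), Vec::<usize>::new());
}
```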
proc_mac_errors: Vec<macros::ProcMacError>,
/// crate-local macro expanded `macro_export` referred to by a module-relative path
macro_expanded_macro_export_errors: BTreeSet<(Span, Span)>,
-
+ /// macro-expanded `macro_rules` shadowing existing macros
disallowed_shadowing: Vec<&'a LegacyBinding<'a>>,
arenas: &'a ResolverArenas<'a>,
}
ident.span = ident.span.modern();
+ let mut poisoned = None;
loop {
- let (opt_module, poisoned) = if let Some(node_id) = record_used_id {
+ let opt_module = if let Some(node_id) = record_used_id {
self.hygienic_lexical_parent_with_compatibility_fallback(module, &mut ident.span,
- node_id)
+ node_id, &mut poisoned)
} else {
- (self.hygienic_lexical_parent(module, &mut ident.span), None)
+ self.hygienic_lexical_parent(module, &mut ident.span)
};
module = unwrap_or!(opt_module, break);
let orig_current_module = self.current_module;
}
return Some(LexicalScopeBinding::Item(binding))
}
- _ if poisoned.is_some() => break,
Err(Determined) => continue,
Err(Undetermined) =>
span_bug!(ident.span, "undetermined resolution during main resolution pass"),
None
}
- fn hygienic_lexical_parent_with_compatibility_fallback(
- &mut self, module: Module<'a>, span: &mut Span, node_id: NodeId
- ) -> (Option<Module<'a>>, /* poisoned */ Option<NodeId>)
- {
+ fn hygienic_lexical_parent_with_compatibility_fallback(&mut self, module: Module<'a>,
+ span: &mut Span, node_id: NodeId,
+ poisoned: &mut Option<NodeId>)
+ -> Option<Module<'a>> {
if let module @ Some(..) = self.hygienic_lexical_parent(module, span) {
- return (module, None);
+ return module;
}
// We need to support the next case under a deprecation warning
// The macro is a proc macro derive
if module.expansion.looks_like_proc_macro_derive() {
if parent.expansion.is_descendant_of(span.ctxt().outer()) {
- return (module.parent, Some(node_id));
+ *poisoned = Some(node_id);
+ return module.parent;
}
}
}
}
- (None, None)
+ None
}
fn resolve_ident_in_module(&mut self,
HasTypeParameters(generics, rib_kind) => {
let mut function_type_rib = Rib::new(rib_kind);
let mut seen_bindings = FxHashMap();
- generics.params.iter().for_each(|param| match param.kind {
- GenericParamKind::Lifetime { .. } => {}
- GenericParamKind::Type { .. } => {
- let ident = param.ident.modern();
- debug!("with_type_parameter_rib: {}", param.id);
-
- if seen_bindings.contains_key(&ident) {
- let span = seen_bindings.get(&ident).unwrap();
- let err = ResolutionError::NameAlreadyUsedInTypeParameterList(
- ident.name,
- span,
- );
- resolve_error(self, param.ident.span, err);
- }
- seen_bindings.entry(ident).or_insert(param.ident.span);
+ for param in &generics.params {
+ match param.kind {
+ GenericParamKind::Lifetime { .. } => {}
+ GenericParamKind::Type { .. } => {
+ let ident = param.ident.modern();
+ debug!("with_type_parameter_rib: {}", param.id);
+
+ if seen_bindings.contains_key(&ident) {
+ let span = seen_bindings.get(&ident).unwrap();
+ let err = ResolutionError::NameAlreadyUsedInTypeParameterList(
+ ident.name,
+ span,
+ );
+ resolve_error(self, param.ident.span, err);
+ }
+ seen_bindings.entry(ident).or_insert(param.ident.span);
- // Plain insert (no renaming).
- let def = Def::TyParam(self.definitions.local_def_id(param.id));
- function_type_rib.bindings.insert(ident, def);
- self.record_def(param.id, PathResolution::new(def));
+ // Plain insert (no renaming).
+ let def = Def::TyParam(self.definitions.local_def_id(param.id));
+ function_type_rib.bindings.insert(ident, def);
+ self.record_def(param.id, PathResolution::new(def));
+ }
}
- });
+ }
self.ribs[TypeNS].push(function_type_rib);
}
} else if opt_ns == Some(MacroNS) {
assert!(ns == TypeNS);
self.resolve_lexical_macro_path_segment(ident, ns, record_used, record_used,
- false, path_span).map(MacroBinding::binding)
+ false, path_span).map(|(b, _)| b)
} else {
let record_used_id =
if record_used { crate_lint.node_id().or(Some(CRATE_NODE_ID)) } else { None };
vis.is_accessible_from(module.normal_ancestor_id, self)
}
+ fn report_ambiguity_error(
+ &self, name: Name, span: Span, _lexical: bool,
+ def1: Def, is_import1: bool, is_glob1: bool, from_expansion1: bool, span1: Span,
+ def2: Def, is_import2: bool, _is_glob2: bool, _from_expansion2: bool, span2: Span,
+ ) {
+ let participle = |is_import: bool| if is_import { "imported" } else { "defined" };
+ let msg1 = format!("`{}` could refer to the name {} here", name, participle(is_import1));
+ let msg2 =
+ format!("`{}` could also refer to the name {} here", name, participle(is_import2));
+ let note = if from_expansion1 {
+ Some(if let Def::Macro(..) = def1 {
+ format!("macro-expanded {} do not shadow",
+ if is_import1 { "macro imports" } else { "macros" })
+ } else {
+ format!("macro-expanded {} do not shadow when used in a macro invocation path",
+ if is_import1 { "imports" } else { "items" })
+ })
+ } else if is_glob1 {
+ Some(format!("consider adding an explicit import of `{}` to disambiguate", name))
+ } else {
+ None
+ };
+
+ let mut err = struct_span_err!(self.session, span, E0659, "`{}` is ambiguous", name);
+ err.span_note(span1, &msg1);
+ match def2 {
+ Def::Macro(..) if span2.is_dummy() =>
+ err.note(&format!("`{}` is also a builtin macro", name)),
+ _ => err.span_note(span2, &msg2),
+ };
+ if let Some(note) = note {
+            err.note(&note);
+ }
+ err.emit();
+ }
+
fn report_errors(&mut self, krate: &Crate) {
self.report_shadowing_errors();
self.report_with_use_injections(krate);
}
for &AmbiguityError { span, name, b1, b2, lexical } in &self.ambiguity_errors {
- if !reported_spans.insert(span) { continue }
- let participle = |binding: &NameBinding| {
- if binding.is_import() { "imported" } else { "defined" }
- };
- let msg1 = format!("`{}` could refer to the name {} here", name, participle(b1));
- let msg2 = format!("`{}` could also refer to the name {} here", name, participle(b2));
- let note = if b1.expansion == Mark::root() || !lexical && b1.is_glob_import() {
- format!("consider adding an explicit import of `{}` to disambiguate", name)
- } else if let Def::Macro(..) = b1.def() {
- format!("macro-expanded {} do not shadow",
- if b1.is_import() { "macro imports" } else { "macros" })
- } else {
- format!("macro-expanded {} do not shadow when used in a macro invocation path",
- if b1.is_import() { "imports" } else { "items" })
- };
-
- let mut err = struct_span_err!(self.session, span, E0659, "`{}` is ambiguous", name);
- err.span_note(b1.span, &msg1);
- match b2.def() {
- Def::Macro(..) if b2.span.is_dummy() =>
- err.note(&format!("`{}` is also a builtin macro", name)),
- _ => err.span_note(b2.span, &msg2),
- };
-            err.note(&note).emit();
+ if reported_spans.insert(span) {
+ self.report_ambiguity_error(
+ name, span, lexical,
+ b1.def(), b1.is_import(), b1.is_glob_import(),
+ b1.expansion != Mark::root(), b1.span,
+ b2.def(), b2.is_import(), b2.is_glob_import(),
+ b2.expansion != Mark::root(), b2.span,
+ );
+ }
}
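The new `if reported_spans.insert(span)` guard relies on `HashSet::insert` returning `false` for an already-seen key, so each span is reported at most once. A small self-contained sketch of that de-duplication idiom (names are illustrative):

```rust
use std::collections::HashSet;

// Keep only the first occurrence of each "span", preserving order,
// exactly the way the diff gates `report_ambiguity_error`.
fn report_once(spans: &[u32]) -> Vec<u32> {
    let mut reported = HashSet::new();
    let mut out = Vec::new();
    for &span in spans {
        // `insert` returns true only when `span` was not already present.
        if reported.insert(span) {
            out.push(span);
        }
    }
    out
}

fn main() {
    assert_eq!(report_once(&[3, 3, 7, 3, 9, 7]), vec![3, 7, 9]);
}
```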
for &PrivacyError(span, name, binding) in &self.privacy_errors {
use std::mem;
use rustc_data_structures::sync::Lrc;
+crate struct FromPrelude(bool);
+crate struct FromExpansion(bool);
+
#[derive(Clone)]
pub struct InvocationData<'a> {
pub module: Cell<Module<'a>>,
pub span: Span,
}
+impl<'a> LegacyBinding<'a> {
+ fn def(&self) -> Def {
+ Def::Macro(self.def_id, MacroKind::Bang)
+ }
+}
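The `FromPrelude(bool)` and `FromExpansion(bool)` newtypes introduced above replace the bare booleans that previously rode along in tuples; wrapping each flag in its own type means the two cannot be swapped silently at a call site. A sketch of that design choice (the `classify` function is hypothetical):

```rust
// Single-field tuple structs give each boolean flag a distinct type.
#[derive(Clone, Copy, Debug)]
struct FromPrelude(bool);
#[derive(Clone, Copy, Debug)]
struct FromExpansion(bool);

// A caller must pass the flags in the right positions; swapping them
// is a type error, unlike with two bare `bool` arguments.
fn classify(from_prelude: FromPrelude, from_expansion: FromExpansion) -> &'static str {
    match (from_prelude, from_expansion) {
        (FromPrelude(true), _) => "prelude",
        (_, FromExpansion(true)) => "macro-expanded",
        _ => "local",
    }
}

fn main() {
    assert_eq!(classify(FromPrelude(false), FromExpansion(true)), "macro-expanded");
    // `classify(FromExpansion(true), FromPrelude(false))` would not compile.
    assert_eq!(classify(FromPrelude(true), FromExpansion(false)), "prelude");
}
```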
+
pub struct ProcMacError {
crate_name: Symbol,
name: Symbol,
warn_msg: &'static str,
}
-#[derive(Copy, Clone)]
-pub enum MacroBinding<'a> {
- Legacy(&'a LegacyBinding<'a>),
- Global(&'a NameBinding<'a>),
- Modern(&'a NameBinding<'a>),
-}
-
-impl<'a> MacroBinding<'a> {
- pub fn span(self) -> Span {
- match self {
- MacroBinding::Legacy(binding) => binding.span,
- MacroBinding::Global(binding) | MacroBinding::Modern(binding) => binding.span,
- }
- }
-
- pub fn binding(self) -> &'a NameBinding<'a> {
- match self {
- MacroBinding::Global(binding) | MacroBinding::Modern(binding) => binding,
- MacroBinding::Legacy(_) => panic!("unexpected MacroBinding::Legacy"),
- }
- }
-
- pub fn def_ignoring_ambiguity(self) -> Def {
- match self {
- MacroBinding::Legacy(binding) => Def::Macro(binding.def_id, MacroKind::Bang),
- MacroBinding::Global(binding) | MacroBinding::Modern(binding) =>
- binding.def_ignoring_ambiguity(),
- }
- }
-}
-
impl<'a, 'crateloader: 'a> base::Resolver for Resolver<'a, 'crateloader> {
fn next_node_id(&mut self) -> ast::NodeId {
self.session.next_node_id()
None
}
- fn resolve_invoc(&mut self, invoc: &Invocation, scope: Mark, force: bool)
- -> Result<Option<Lrc<SyntaxExtension>>, Determinacy> {
- let def = match invoc.kind {
- InvocationKind::Attr { attr: None, .. } => return Ok(None),
- _ => self.resolve_invoc_to_def(invoc, scope, force)?,
+ fn resolve_macro_invocation(&mut self, invoc: &Invocation, scope: Mark, force: bool)
+ -> Result<Option<Lrc<SyntaxExtension>>, Determinacy> {
+ let (path, kind, derives_in_scope) = match invoc.kind {
+ InvocationKind::Attr { attr: None, .. } =>
+ return Ok(None),
+ InvocationKind::Attr { attr: Some(ref attr), ref traits, .. } =>
+ (&attr.path, MacroKind::Attr, &traits[..]),
+ InvocationKind::Bang { ref mac, .. } =>
+ (&mac.node.path, MacroKind::Bang, &[][..]),
+ InvocationKind::Derive { ref path, .. } =>
+ (path, MacroKind::Derive, &[][..]),
};
- if let Def::Macro(_, MacroKind::ProcMacroStub) = def {
- self.report_proc_macro_stub(invoc.span());
- return Err(Determinacy::Determined);
- } else if let Def::NonMacroAttr(attr_kind) = def {
- // Note that not only attributes, but anything in macro namespace can result in a
- // `Def::NonMacroAttr` definition (e.g. `inline!()`), so we must report the error
- // below for these cases.
- let is_attr_invoc =
- if let InvocationKind::Attr { .. } = invoc.kind { true } else { false };
- let path = invoc.path().expect("no path for non-macro attr");
- match attr_kind {
- NonMacroAttrKind::Tool | NonMacroAttrKind::DeriveHelper |
- NonMacroAttrKind::Custom if is_attr_invoc => {
- let features = self.session.features_untracked();
- if attr_kind == NonMacroAttrKind::Tool &&
- !features.tool_attributes {
- feature_err(&self.session.parse_sess, "tool_attributes",
- invoc.span(), GateIssue::Language,
- "tool attributes are unstable").emit();
- }
- if attr_kind == NonMacroAttrKind::Custom {
- assert!(path.segments.len() == 1);
- let name = path.segments[0].ident.name.as_str();
- if name.starts_with("rustc_") {
- if !features.rustc_attrs {
- let msg = "unless otherwise specified, attributes with the prefix \
- `rustc_` are reserved for internal compiler diagnostics";
- feature_err(&self.session.parse_sess, "rustc_attrs", invoc.span(),
- GateIssue::Language, &msg).emit();
- }
- } else if name.starts_with("derive_") {
- if !features.custom_derive {
- feature_err(&self.session.parse_sess, "custom_derive", invoc.span(),
- GateIssue::Language, EXPLAIN_DERIVE_UNDERSCORE).emit();
- }
- } else if !features.custom_attribute {
- let msg = format!("The attribute `{}` is currently unknown to the \
- compiler and may have meaning added to it in the \
- future", path);
- feature_err(&self.session.parse_sess, "custom_attribute", invoc.span(),
- GateIssue::Language, &msg).emit();
- }
- }
- return Ok(Some(Lrc::new(SyntaxExtension::NonMacroAttr {
- mark_used: attr_kind == NonMacroAttrKind::Tool,
- })));
- }
- _ => {
- self.report_non_macro_attr(path.span, def);
- return Err(Determinacy::Determined);
- }
- }
+ let (def, ext) = self.resolve_macro_to_def(path, kind, scope, derives_in_scope, force)?;
+
+ if let Def::Macro(def_id, _) = def {
+ self.macro_defs.insert(invoc.expansion_data.mark, def_id);
+ let normal_module_def_id =
+ self.macro_def_scope(invoc.expansion_data.mark).normal_ancestor_id;
+ self.definitions.add_parent_module_of_macro_def(invoc.expansion_data.mark,
+ normal_module_def_id);
+ invoc.expansion_data.mark.set_default_transparency(ext.default_transparency());
+ invoc.expansion_data.mark.set_is_builtin(def_id.krate == BUILTIN_MACROS_CRATE);
}
- let def_id = def.def_id();
-
- self.macro_defs.insert(invoc.expansion_data.mark, def_id);
- let normal_module_def_id =
- self.macro_def_scope(invoc.expansion_data.mark).normal_ancestor_id;
- self.definitions.add_parent_module_of_macro_def(invoc.expansion_data.mark,
- normal_module_def_id);
-
- self.unused_macros.remove(&def_id);
- let ext = self.get_macro(def);
- invoc.expansion_data.mark.set_default_transparency(ext.default_transparency());
- invoc.expansion_data.mark.set_is_builtin(def_id.krate == BUILTIN_MACROS_CRATE);
+
Ok(Some(ext))
}
- fn resolve_macro(&mut self, scope: Mark, path: &ast::Path, kind: MacroKind, force: bool)
- -> Result<Lrc<SyntaxExtension>, Determinacy> {
- self.resolve_macro_to_def(scope, path, kind, force).and_then(|def| {
- if let Def::Macro(_, MacroKind::ProcMacroStub) = def {
- self.report_proc_macro_stub(path.span);
- return Err(Determinacy::Determined);
- } else if let Def::NonMacroAttr(..) = def {
- self.report_non_macro_attr(path.span, def);
- return Err(Determinacy::Determined);
- }
- self.unused_macros.remove(&def.def_id());
- Ok(self.get_macro(def))
- })
+ fn resolve_macro_path(&mut self, path: &ast::Path, kind: MacroKind, scope: Mark,
+ derives_in_scope: &[ast::Path], force: bool)
+ -> Result<Lrc<SyntaxExtension>, Determinacy> {
+ Ok(self.resolve_macro_to_def(path, kind, scope, derives_in_scope, force)?.1)
}
fn check_unused_macros(&self) {
}
impl<'a, 'cl> Resolver<'a, 'cl> {
- fn report_proc_macro_stub(&self, span: Span) {
- self.session.span_err(span,
- "can't use a procedural macro from the same crate that defines it");
- }
+ fn resolve_macro_to_def(&mut self, path: &ast::Path, kind: MacroKind, scope: Mark,
+ derives_in_scope: &[ast::Path], force: bool)
+ -> Result<(Def, Lrc<SyntaxExtension>), Determinacy> {
+ let def = self.resolve_macro_to_def_inner(path, kind, scope, derives_in_scope, force);
- fn report_non_macro_attr(&self, span: Span, def: Def) {
- self.session.span_err(span, &format!("expected a macro, found {}", def.kind_name()));
- }
-
- fn resolve_invoc_to_def(&mut self, invoc: &Invocation, scope: Mark, force: bool)
- -> Result<Def, Determinacy> {
- let (attr, traits) = match invoc.kind {
- InvocationKind::Attr { ref attr, ref traits, .. } => (attr, traits),
- InvocationKind::Bang { ref mac, .. } => {
- return self.resolve_macro_to_def(scope, &mac.node.path, MacroKind::Bang, force);
- }
- InvocationKind::Derive { ref path, .. } => {
- return self.resolve_macro_to_def(scope, path, MacroKind::Derive, force);
+ // Report errors and enforce feature gates for the resolved macro.
+ if def != Err(Determinacy::Undetermined) {
+ // Do not report duplicated errors on every undetermined resolution.
+ for segment in &path.segments {
+ if let Some(args) = &segment.args {
+ self.session.span_err(args.span(), "generic arguments in macro path");
+ }
}
- };
-
- let path = attr.as_ref().unwrap().path.clone();
- let def = self.resolve_macro_to_def(scope, &path, MacroKind::Attr, force);
- if let Ok(Def::NonMacroAttr(NonMacroAttrKind::Custom)) = def {} else {
- return def;
}
- // At this point we've found that the `attr` is determinately unresolved and thus can be
- // interpreted as a custom attribute. Normally custom attributes are feature gated, but
- // it may be a custom attribute whitelisted by a derive macro and they do not require
- // a feature gate.
- //
- // So here we look through all of the derive annotations in scope and try to resolve them.
- // If they themselves successfully resolve *and* one of the resolved derive macros
- // whitelists this attribute's name, then this is a registered attribute and we can convert
- // it from a "generic custom attrite" into a "known derive helper attribute".
- enum ConvertToDeriveHelper { Yes, No, DontKnow }
- let mut convert_to_derive_helper = ConvertToDeriveHelper::No;
- let attr_name = path.segments[0].ident.name;
- for path in traits {
- match self.resolve_macro(scope, path, MacroKind::Derive, force) {
- Ok(ext) => if let SyntaxExtension::ProcMacroDerive(_, ref inert_attrs, _) = *ext {
- if inert_attrs.contains(&attr_name) {
- convert_to_derive_helper = ConvertToDeriveHelper::Yes;
- break
- }
- },
- Err(Determinacy::Undetermined) =>
- convert_to_derive_helper = ConvertToDeriveHelper::DontKnow,
- Err(Determinacy::Determined) => {}
- }
- }
+ let def = def?;
- match convert_to_derive_helper {
- ConvertToDeriveHelper::Yes => Ok(Def::NonMacroAttr(NonMacroAttrKind::DeriveHelper)),
- ConvertToDeriveHelper::No => def,
- ConvertToDeriveHelper::DontKnow => Err(Determinacy::determined(force)),
+ if path.segments.len() > 1 {
+ if kind != MacroKind::Bang {
+ if def != Def::NonMacroAttr(NonMacroAttrKind::Tool) &&
+ !self.session.features_untracked().proc_macro_path_invoc {
+ let msg = format!("non-ident {} paths are unstable", kind.descr());
+ emit_feature_err(&self.session.parse_sess, "proc_macro_path_invoc",
+ path.span, GateIssue::Language, &msg);
+ }
+ }
}
- }
- fn resolve_macro_to_def(&mut self, scope: Mark, path: &ast::Path, kind: MacroKind, force: bool)
- -> Result<Def, Determinacy> {
- let def = self.resolve_macro_to_def_inner(scope, path, kind, force);
- if def != Err(Determinacy::Undetermined) {
- // Do not report duplicated errors on every undetermined resolution.
- path.segments.iter().find(|segment| segment.args.is_some()).map(|segment| {
- self.session.span_err(segment.args.as_ref().unwrap().span(),
- "generic arguments in macro path");
- });
- }
- if kind != MacroKind::Bang && path.segments.len() > 1 &&
- def != Ok(Def::NonMacroAttr(NonMacroAttrKind::Tool)) {
- if !self.session.features_untracked().proc_macro_path_invoc {
- emit_feature_err(
- &self.session.parse_sess,
- "proc_macro_path_invoc",
- path.span,
- GateIssue::Language,
- "paths of length greater than one in macro invocations are \
- currently unstable",
- );
+ match def {
+ Def::Macro(def_id, macro_kind) => {
+ self.unused_macros.remove(&def_id);
+ if macro_kind == MacroKind::ProcMacroStub {
+ let msg = "can't use a procedural macro from the same crate that defines it";
+ self.session.span_err(path.span, msg);
+ return Err(Determinacy::Determined);
+ }
+ }
+ Def::NonMacroAttr(attr_kind) => {
+ if kind == MacroKind::Attr {
+ let features = self.session.features_untracked();
+ if attr_kind == NonMacroAttrKind::Tool && !features.tool_attributes {
+ feature_err(&self.session.parse_sess, "tool_attributes", path.span,
+ GateIssue::Language, "tool attributes are unstable").emit();
+ }
+ if attr_kind == NonMacroAttrKind::Custom {
+ assert!(path.segments.len() == 1);
+ let name = path.segments[0].ident.name.as_str();
+ if name.starts_with("rustc_") {
+ if !features.rustc_attrs {
+ let msg = "unless otherwise specified, attributes with the prefix \
+ `rustc_` are reserved for internal compiler diagnostics";
+ feature_err(&self.session.parse_sess, "rustc_attrs", path.span,
+ GateIssue::Language, &msg).emit();
+ }
+ } else if name.starts_with("derive_") {
+ if !features.custom_derive {
+ feature_err(&self.session.parse_sess, "custom_derive", path.span,
+ GateIssue::Language, EXPLAIN_DERIVE_UNDERSCORE).emit();
+ }
+ } else if !features.custom_attribute {
+ let msg = format!("The attribute `{}` is currently unknown to the \
+ compiler and may have meaning added to it in the \
+ future", path);
+ feature_err(&self.session.parse_sess, "custom_attribute", path.span,
+ GateIssue::Language, &msg).emit();
+ }
+ }
+ } else {
+ // Not only attributes, but anything in macro namespace can result in
+ // `Def::NonMacroAttr` definition (e.g. `inline!()`), so we must report
+ // an error for those cases.
+ let msg = format!("expected a macro, found {}", def.kind_name());
+ self.session.span_err(path.span, &msg);
+ return Err(Determinacy::Determined);
+ }
}
+ _ => panic!("expected `Def::Macro` or `Def::NonMacroAttr`"),
}
- def
+
+ Ok((def, self.get_macro(def)))
}
- pub fn resolve_macro_to_def_inner(&mut self, scope: Mark, path: &ast::Path,
- kind: MacroKind, force: bool)
- -> Result<Def, Determinacy> {
+ pub fn resolve_macro_to_def_inner(&mut self, path: &ast::Path, kind: MacroKind, scope: Mark,
+ derives_in_scope: &[ast::Path], force: bool)
+ -> Result<Def, Determinacy> {
let ast::Path { ref segments, span } = *path;
let mut path: Vec<_> = segments.iter().map(|seg| seg.ident).collect();
let invocation = self.invocations[&scope];
}
let legacy_resolution = self.resolve_legacy_scope(&invocation.legacy_scope, path[0], false);
- let result = if let Some(MacroBinding::Legacy(binding)) = legacy_resolution {
- Ok(Def::Macro(binding.def_id, MacroKind::Bang))
+ let result = if let Some((legacy_binding, _)) = legacy_resolution {
+ Ok(legacy_binding.def())
} else {
match self.resolve_lexical_macro_path_segment(path[0], MacroNS, false, force,
kind == MacroKind::Attr, span) {
- Ok(binding) => Ok(binding.binding().def_ignoring_ambiguity()),
+ Ok((binding, _)) => Ok(binding.def_ignoring_ambiguity()),
Err(Determinacy::Undetermined) => return Err(Determinacy::Undetermined),
Err(Determinacy::Determined) => {
self.found_unresolved_macro = true;
self.current_module.nearest_item_scope().legacy_macro_resolutions.borrow_mut()
.push((scope, path[0], kind, result.ok()));
- result
+ if let Ok(Def::NonMacroAttr(NonMacroAttrKind::Custom)) = result {} else {
+ return result;
+ }
+
+ // At this point we've found that the `attr` is determinately unresolved and thus can be
+ // interpreted as a custom attribute. Normally custom attributes are feature gated, but
+ // it may be a custom attribute whitelisted by a derive macro and they do not require
+ // a feature gate.
+ //
+ // So here we look through all of the derive annotations in scope and try to resolve them.
+ // If they themselves successfully resolve *and* one of the resolved derive macros
+ // whitelists this attribute's name, then this is a registered attribute and we can convert
+        // it from a "generic custom attribute" into a "known derive helper attribute".
+ assert!(kind == MacroKind::Attr);
+ enum ConvertToDeriveHelper { Yes, No, DontKnow }
+ let mut convert_to_derive_helper = ConvertToDeriveHelper::No;
+ for derive in derives_in_scope {
+ match self.resolve_macro_path(derive, MacroKind::Derive, scope, &[], force) {
+ Ok(ext) => if let SyntaxExtension::ProcMacroDerive(_, ref inert_attrs, _) = *ext {
+ if inert_attrs.contains(&path[0].name) {
+ convert_to_derive_helper = ConvertToDeriveHelper::Yes;
+ break
+ }
+ },
+ Err(Determinacy::Undetermined) =>
+ convert_to_derive_helper = ConvertToDeriveHelper::DontKnow,
+ Err(Determinacy::Determined) => {}
+ }
+ }
+
+ match convert_to_derive_helper {
+ ConvertToDeriveHelper::Yes => Ok(Def::NonMacroAttr(NonMacroAttrKind::DeriveHelper)),
+ ConvertToDeriveHelper::No => result,
+ ConvertToDeriveHelper::DontKnow => Err(Determinacy::determined(force)),
+ }
}
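The `ConvertToDeriveHelper` scan moved into this function is a three-state fold: a whitelisting derive short-circuits to `Yes`, any undetermined derive taints the answer to `DontKnow`, and otherwise the result stays `No`. A standalone sketch of that control flow, with an illustrative input encoding (`Some(true)` = whitelists, `Some(false)` = resolved but no match, `None` = undetermined):

```rust
#[derive(PartialEq, Debug)]
enum Outcome { Yes, No, DontKnow }

// Fold candidate resolutions into a single three-state outcome,
// mirroring the loop over `derives_in_scope` in the diff.
fn scan(candidates: &[Option<bool>]) -> Outcome {
    let mut outcome = Outcome::No;
    for c in candidates {
        match c {
            Some(true) => return Outcome::Yes,   // whitelisted: short-circuit
            Some(false) => {}                    // resolved, no match: keep going
            None => outcome = Outcome::DontKnow, // undetermined: taint the result
        }
    }
    outcome
}

fn main() {
    assert_eq!(scan(&[Some(false), Some(true), None]), Outcome::Yes);
    assert_eq!(scan(&[Some(false), None]), Outcome::DontKnow);
    assert_eq!(scan(&[Some(false), Some(false)]), Outcome::No);
}
```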
// Resolve the initial segment of a non-global macro path
// (e.g. `foo` in `foo::bar!(); or `foo!();`).
// This is a variation of `fn resolve_ident_in_lexical_scope` that can be run during
// expansion and import resolution (perhaps they can be merged in the future).
- pub fn resolve_lexical_macro_path_segment(&mut self,
- mut ident: Ident,
- ns: Namespace,
- record_used: bool,
- force: bool,
- is_attr: bool,
- path_span: Span)
- -> Result<MacroBinding<'a>, Determinacy> {
+ crate fn resolve_lexical_macro_path_segment(
+ &mut self,
+ mut ident: Ident,
+ ns: Namespace,
+ record_used: bool,
+ force: bool,
+ is_attr: bool,
+ path_span: Span
+ ) -> Result<(&'a NameBinding<'a>, FromPrelude), Determinacy> {
// General principles:
// 1. Not controlled (user-defined) names should have higher priority than controlled names
// built into the language or standard library. This way we can add new names into the
// m::mac!();
// }
// This includes names from globs and from macro expansions.
- let mut potentially_ambiguous_result: Option<MacroBinding> = None;
+ let mut potentially_ambiguous_result: Option<(&NameBinding, FromPrelude)> = None;
enum WhereToResolve<'a> {
Module(Module<'a>),
path_span,
);
self.current_module = orig_current_module;
- binding.map(MacroBinding::Modern)
+ binding.map(|binding| (binding, FromPrelude(false)))
}
WhereToResolve::MacroPrelude => {
match self.macro_prelude.get(&ident.name).cloned() {
- Some(binding) => Ok(MacroBinding::Global(binding)),
+ Some(binding) => Ok((binding, FromPrelude(true))),
None => Err(Determinacy::Determined),
}
}
let binding = (Def::NonMacroAttr(NonMacroAttrKind::Builtin),
ty::Visibility::Public, ident.span, Mark::root())
.to_name_binding(self.arenas);
- Ok(MacroBinding::Global(binding))
+ Ok((binding, FromPrelude(true)))
} else {
Err(Determinacy::Determined)
}
let binding = (crate_root, ty::Visibility::Public,
ident.span, Mark::root()).to_name_binding(self.arenas);
- Ok(MacroBinding::Global(binding))
+ Ok((binding, FromPrelude(true)))
} else {
Err(Determinacy::Determined)
}
if use_prelude && is_known_tool(ident.name) {
let binding = (Def::ToolMod, ty::Visibility::Public,
ident.span, Mark::root()).to_name_binding(self.arenas);
- Ok(MacroBinding::Global(binding))
+ Ok((binding, FromPrelude(true)))
} else {
Err(Determinacy::Determined)
}
false,
path_span,
) {
- result = Ok(MacroBinding::Global(binding));
+ result = Ok((binding, FromPrelude(true)));
}
}
}
self.primitive_type_table.primitive_types.get(&ident.name).cloned() {
let binding = (Def::PrimTy(prim_ty), ty::Visibility::Public,
ident.span, Mark::root()).to_name_binding(self.arenas);
- Ok(MacroBinding::Global(binding))
+ Ok((binding, FromPrelude(true)))
} else {
Err(Determinacy::Determined)
}
return Ok(result);
}
- let binding = result.binding();
-
// Found a solution that is ambiguous with a previously found solution.
// Push an ambiguity error for later reporting and
// return something for better recovery.
if let Some(previous_result) = potentially_ambiguous_result {
- if binding.def() != previous_result.binding().def() {
+ if result.0.def() != previous_result.0.def() {
self.ambiguity_errors.push(AmbiguityError {
span: path_span,
name: ident.name,
- b1: previous_result.binding(),
- b2: binding,
+ b1: previous_result.0,
+ b2: result.0,
lexical: true,
});
return Ok(previous_result);
// Found a solution that's not an ambiguity yet, but is "suspicious" and
// can participate in ambiguities later on.
// Remember it and go search for other solutions in outer scopes.
- if binding.is_glob_import() || binding.expansion != Mark::root() {
+ if result.0.is_glob_import() || result.0.expansion != Mark::root() {
potentially_ambiguous_result = Some(result);
continue_search!();
let binding = (Def::NonMacroAttr(NonMacroAttrKind::Custom),
ty::Visibility::Public, ident.span, Mark::root())
.to_name_binding(self.arenas);
- Ok(MacroBinding::Global(binding))
+ Ok((binding, FromPrelude(true)))
} else {
Err(determinacy)
}
}
- pub fn resolve_legacy_scope(&mut self,
- mut scope: &'a Cell<LegacyScope<'a>>,
- ident: Ident,
- record_used: bool)
- -> Option<MacroBinding<'a>> {
+ crate fn resolve_legacy_scope(&mut self,
+ mut scope: &'a Cell<LegacyScope<'a>>,
+ ident: Ident,
+ record_used: bool)
+ -> Option<(&'a LegacyBinding<'a>, FromExpansion)> {
let ident = ident.modern();
let mut relative_depth: u32 = 0;
- let mut binding = None;
loop {
match scope.get() {
LegacyScope::Empty => break,
if record_used && relative_depth > 0 {
self.disallowed_shadowing.push(potential_binding);
}
- binding = Some(potential_binding);
- break
+ return Some((potential_binding, FromExpansion(relative_depth > 0)));
}
scope = &potential_binding.parent;
}
};
}
- let binding = if let Some(binding) = binding {
- MacroBinding::Legacy(binding)
- } else if let Some(binding) = self.macro_prelude.get(&ident.name).cloned() {
- MacroBinding::Global(binding)
- } else {
- return None;
- };
-
- Some(binding)
+ None
}
pub fn finalize_current_module_macro_resolutions(&mut self) {
let resolution = self.resolve_lexical_macro_path_segment(ident, MacroNS, true, true,
kind == MacroKind::Attr, span);
- let check_consistency = |this: &Self, binding: MacroBinding| {
+ let check_consistency = |this: &Self, new_def: Def| {
if let Some(def) = def {
if this.ambiguity_errors.is_empty() && this.disallowed_shadowing.is_empty() &&
- binding.def_ignoring_ambiguity() != def {
+ new_def != def && new_def != Def::Err {
// Make sure compilation does not succeed if preferred macro resolution
// has changed after the macro had been expanded. In theory all such
// situations should be reported as ambiguity errors, so this is span-bug.
};
match (legacy_resolution, resolution) {
- (Some(MacroBinding::Legacy(legacy_binding)), Ok(MacroBinding::Modern(binding))) => {
- if legacy_binding.def_id != binding.def_ignoring_ambiguity().def_id() {
- let msg1 = format!("`{}` could refer to the macro defined here", ident);
- let msg2 =
- format!("`{}` could also refer to the macro imported here", ident);
- self.session.struct_span_err(span, &format!("`{}` is ambiguous", ident))
- .span_note(legacy_binding.span, &msg1)
- .span_note(binding.span, &msg2)
- .emit();
- }
- },
(None, Err(_)) => {
assert!(def.is_none());
let bang = if kind == MacroKind::Bang { "!" } else { "" };
self.suggest_macro_name(&ident.as_str(), kind, &mut err, span);
err.emit();
},
- (Some(MacroBinding::Modern(_)), _) | (_, Ok(MacroBinding::Legacy(_))) => {
- span_bug!(span, "impossible macro resolution result");
- }
+ (Some((legacy_binding, FromExpansion(from_expansion))),
+ Ok((binding, FromPrelude(false)))) |
+ (Some((legacy_binding, FromExpansion(from_expansion @ true))),
+ Ok((binding, FromPrelude(true)))) => {
+ if legacy_binding.def() != binding.def_ignoring_ambiguity() {
+ self.report_ambiguity_error(
+ ident.name, span, true,
+ legacy_binding.def(), false, false,
+ from_expansion, legacy_binding.span,
+ binding.def(), binding.is_import(), binding.is_glob_import(),
+ binding.expansion != Mark::root(), binding.span,
+ );
+ }
+ },
+ // OK, non-macro-expanded legacy wins over macro prelude even if defs are different
+ (Some((legacy_binding, FromExpansion(false))), Ok((_, FromPrelude(true)))) |
// OK, unambiguous resolution
- (Some(binding), Err(_)) | (None, Ok(binding)) |
- // OK, legacy wins over global even if their definitions are different
- (Some(binding @ MacroBinding::Legacy(_)), Ok(MacroBinding::Global(_))) |
- // OK, modern wins over global even if their definitions are different
- (Some(MacroBinding::Global(_)), Ok(binding @ MacroBinding::Modern(_))) => {
- check_consistency(self, binding);
+ (Some((legacy_binding, _)), Err(_)) => {
+ check_consistency(self, legacy_binding.def());
}
- (Some(MacroBinding::Global(binding1)), Ok(MacroBinding::Global(binding2))) => {
- if binding1.def() != binding2.def() {
- span_bug!(span, "mismatch between same global macro resolutions");
+ // OK, unambiguous resolution
+ (None, Ok((binding, FromPrelude(from_prelude)))) => {
+ check_consistency(self, binding.def_ignoring_ambiguity());
+ if from_prelude {
+ self.record_use(ident, MacroNS, binding, span);
+ self.err_if_macro_use_proc_macro(ident.name, span, binding);
}
- check_consistency(self, MacroBinding::Global(binding1));
-
- self.record_use(ident, MacroNS, binding1, span);
- self.err_if_macro_use_proc_macro(ident.name, span, binding1);
- },
+ }
};
}
}
/// Error if `ext` is a Macros 1.1 procedural macro being imported by `#[macro_use]`
fn err_if_macro_use_proc_macro(&mut self, name: Name, use_span: Span,
binding: &NameBinding<'a>) {
- let krate = binding.def().def_id().krate;
+ let krate = match binding.def() {
+ Def::NonMacroAttr(..) | Def::Err => return,
+ Def::Macro(def_id, _) => def_id.krate,
+ _ => unreachable!(),
+ };
// Plugin-based syntax extensions are exempt from this check
if krate == BUILTIN_MACROS_CRATE { return; }
};
match self.resolve_ident_in_module(module, ident, ns, false, path_span) {
Err(Determined) => continue,
+ Ok(binding)
+ if !self.is_accessible_from(binding.vis, single_import.parent) => continue,
Ok(_) | Err(Undetermined) => return Err(Undetermined),
}
}
path_span,
);
self.current_module = orig_current_module;
+
match result {
Err(Determined) => continue,
+ Ok(binding)
+ if !self.is_accessible_from(binding.vis, glob_import.parent) => continue,
Ok(_) | Err(Undetermined) => return Err(Undetermined),
}
}
if let Some(ref generic_args) = seg.args {
match **generic_args {
ast::GenericArgs::AngleBracketed(ref data) => {
- data.args.iter().for_each(|arg| match arg {
- ast::GenericArg::Type(ty) => self.visit_ty(ty),
- _ => {}
- });
+ for arg in &data.args {
+ match arg {
+ ast::GenericArg::Type(ty) => self.visit_ty(ty),
+ _ => {}
+ }
+ }
}
ast::GenericArgs::Parenthesized(ref data) => {
for t in &data.inputs {
// Explicit types in the turbo-fish.
if let Some(ref generic_args) = seg.args {
if let ast::GenericArgs::AngleBracketed(ref data) = **generic_args {
- data.args.iter().for_each(|arg| match arg {
- ast::GenericArg::Type(ty) => self.visit_ty(ty),
- _ => {}
- });
+ for arg in &data.args {
+ match arg {
+ ast::GenericArg::Type(ty) => self.visit_ty(ty),
+ _ => {}
+ }
+ }
}
}
}
fn visit_generics(&mut self, generics: &'l ast::Generics) {
- generics.params.iter().for_each(|param| match param.kind {
- ast::GenericParamKind::Lifetime { .. } => {}
- ast::GenericParamKind::Type { ref default, .. } => {
- for bound in &param.bounds {
- if let ast::GenericBound::Trait(ref trait_ref, _) = *bound {
- self.process_path(trait_ref.trait_ref.ref_id, &trait_ref.trait_ref.path)
+ for param in &generics.params {
+ match param.kind {
+ ast::GenericParamKind::Lifetime { .. } => {}
+ ast::GenericParamKind::Type { ref default, .. } => {
+ for bound in &param.bounds {
+ if let ast::GenericBound::Trait(ref trait_ref, _) = *bound {
+ self.process_path(trait_ref.trait_ref.ref_id, &trait_ref.trait_ref.path)
+ }
+ }
+ if let Some(ref ty) = default {
+ self.visit_ty(&ty);
}
- }
- if let Some(ref ty) = default {
- self.visit_ty(&ty);
}
}
- });
+ }
}
fn visit_ty(&mut self, t: &'l ast::Ty) {
// into the types of its fields `(B, Vec<A>)`. These will get
// pushed onto the stack. Eventually, expanding `Vec<A>` will
// lead to us trying to push `A` a second time -- to prevent
- // infinite recusion, we notice that `A` was already pushed
+ // infinite recursion, we notice that `A` was already pushed
// once and stop.
let mut ty_stack = vec![(for_ty, 0)];
//! is parameterized by an instance of `AstConv`.
use rustc_data_structures::accumulate_vec::AccumulateVec;
-use hir::{self, GenericArg};
+use rustc_data_structures::array_vec::ArrayVec;
+use hir::{self, GenericArg, GenericArgs};
use hir::def::Def;
use hir::def_id::DefId;
+use hir::HirVec;
use middle::resolve_lifetime as rl;
use namespace::Namespace;
-use rustc::ty::subst::{Subst, Substs};
+use rustc::ty::subst::{Kind, Subst, Substs};
use rustc::traits;
use rustc::ty::{self, Ty, TyCtxt, ToPredicate, TypeFoldable};
-use rustc::ty::GenericParamDefKind;
+use rustc::ty::{GenericParamDef, GenericParamDefKind};
use rustc::ty::wf::object_region_bounds;
use rustc_target::spec::abi;
use std::slice;
use require_c_abi_if_variadic;
use util::common::ErrorReported;
use util::nodemap::{FxHashSet, FxHashMap};
-use errors::FatalError;
+use errors::{FatalError, DiagnosticId};
+use lint;
use std::iter;
use syntax::ast;
+use syntax::ptr::P;
use syntax::feature_gate::{GateIssue, emit_feature_err};
-use syntax_pos::Span;
+use syntax_pos::{Span, MultiSpan};
pub trait AstConv<'gcx, 'tcx> {
fn tcx<'a>(&'a self) -> TyCtxt<'a, 'gcx, 'tcx>;
span: Span,
}
-struct ParamRange {
- required: usize,
- accepted: usize
+#[derive(PartialEq)]
+enum GenericArgPosition {
+ Type,
+ Value, // e.g. functions
+ MethodCall,
+}
+
+// FIXME(#53525): these error codes should all be unified.
+struct GenericArgMismatchErrorCode {
+ lifetimes: (&'static str, &'static str),
+ types: (&'static str, &'static str),
}
/// Dummy type used for the `Self` of a `TraitRef` created for converting
-> &'tcx Substs<'tcx>
{
- let (substs, assoc_bindings) =
- item_segment.with_generic_args(|generic_args| {
- self.create_substs_for_ast_path(
- span,
- def_id,
- generic_args,
- item_segment.infer_types,
- None)
- });
+ let (substs, assoc_bindings) = item_segment.with_generic_args(|generic_args| {
+ self.create_substs_for_ast_path(
+ span,
+ def_id,
+ generic_args,
+ item_segment.infer_types,
+ None,
+ )
+ });
- assoc_bindings.first().map(|b| self.prohibit_projection(b.span));
+ assoc_bindings.first().map(|b| Self::prohibit_assoc_ty_binding(self.tcx(), b.span));
substs
}
+ /// Report error if there is an explicit type parameter when using `impl Trait`.
+ fn check_impl_trait(
+ tcx: TyCtxt,
+ span: Span,
+ seg: &hir::PathSegment,
+ generics: &ty::Generics,
+ ) -> bool {
+ let explicit = !seg.infer_types;
+ let impl_trait = generics.params.iter().any(|param| match param.kind {
+ ty::GenericParamDefKind::Type {
+ synthetic: Some(hir::SyntheticTyParamKind::ImplTrait), ..
+ } => true,
+ _ => false,
+ });
+
+ if explicit && impl_trait {
+ let mut err = struct_span_err! {
+ tcx.sess,
+ span,
+ E0632,
+ "cannot provide explicit type parameters when `impl Trait` is \
+ used in argument position."
+ };
+
+ err.emit();
+ }
+
+ impl_trait
+ }
+
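The `check_impl_trait` logic above can be sketched in isolation. This is an illustrative stand-alone model, not rustc's actual API: `Param` is a hypothetical stand-in for `GenericParamDef`, and the error text mirrors E0632.

```rust
// Illustrative sketch of the E0632 check: explicit generic arguments are
// rejected whenever any of the callee's parameters is a synthetic
// `impl Trait` parameter. `Param` is a hypothetical stand-in type.
struct Param {
    synthetic_impl_trait: bool,
}

// Returns whether the signature uses `impl Trait` in argument position,
// printing an error when explicit arguments were also supplied.
fn check_impl_trait(explicit_args: bool, params: &[Param]) -> bool {
    let impl_trait = params.iter().any(|p| p.synthetic_impl_trait);
    if explicit_args && impl_trait {
        eprintln!("error[E0632]: cannot provide explicit type parameters \
                   when `impl Trait` is used in argument position.");
    }
    impl_trait
}

fn main() {
    // Like calling `fn f(x: impl Clone)` as `f::<i32>(0)`: explicit + impl Trait.
    assert!(check_impl_trait(true, &[Param { synthetic_impl_trait: true }]));
    // Like calling `f(0)`: no explicit arguments, so no error is printed.
    assert!(check_impl_trait(false, &[Param { synthetic_impl_trait: true }]));
    // No `impl Trait` parameters at all.
    assert!(!check_impl_trait(true, &[Param { synthetic_impl_trait: false }]));
    println!("ok");
}
```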
+ /// Check that the correct number of generic arguments have been provided.
+ /// Used specifically for function calls.
+ pub fn check_generic_arg_count_for_call(
+ tcx: TyCtxt,
+ span: Span,
+ def: &ty::Generics,
+ seg: &hir::PathSegment,
+ is_method_call: bool,
+ ) -> bool {
+ let empty_args = P(hir::GenericArgs {
+ args: HirVec::new(), bindings: HirVec::new(), parenthesized: false,
+ });
+ let suppress_mismatch = Self::check_impl_trait(tcx, span, seg, &def);
+ Self::check_generic_arg_count(
+ tcx,
+ span,
+ def,
+ if let Some(ref args) = seg.args {
+ args
+ } else {
+ &empty_args
+ },
+ if is_method_call {
+ GenericArgPosition::MethodCall
+ } else {
+ GenericArgPosition::Value
+ },
+ def.parent.is_none() && def.has_self, // `has_self`
+ seg.infer_types || suppress_mismatch, // `infer_types`
+ GenericArgMismatchErrorCode {
+ lifetimes: ("E0090", "E0088"),
+ types: ("E0089", "E0087"),
+ },
+ )
+ }
+
+ /// Check that the correct number of generic arguments have been provided.
+ /// This is used both for datatypes and function calls.
+ fn check_generic_arg_count(
+ tcx: TyCtxt,
+ span: Span,
+ def: &ty::Generics,
+ args: &hir::GenericArgs,
+ position: GenericArgPosition,
+ has_self: bool,
+ infer_types: bool,
+ error_codes: GenericArgMismatchErrorCode,
+ ) -> bool {
+ // At this stage we are guaranteed that the generic arguments are in the correct order, e.g.
+ // that lifetimes will precede types. So it suffices to check the number of each kind of
+ // generic argument in order to validate them with respect to the generic parameters.
+ let param_counts = def.own_counts();
+ let arg_counts = args.own_counts();
+ let infer_lifetimes = position != GenericArgPosition::Type && arg_counts.lifetimes == 0;
+
+ let mut defaults: ty::GenericParamCount = Default::default();
+ for param in &def.params {
+ match param.kind {
+ GenericParamDefKind::Lifetime => {}
+ GenericParamDefKind::Type { has_default, .. } => {
+ defaults.types += has_default as usize
+ }
+ };
+ }
+
+ if position != GenericArgPosition::Type && !args.bindings.is_empty() {
+ AstConv::prohibit_assoc_ty_binding(tcx, args.bindings[0].span);
+ }
+
+ // Prohibit explicit lifetime arguments if late-bound lifetime parameters are present.
+ if !infer_lifetimes {
+ if let Some(span_late) = def.has_late_bound_regions {
+ let msg = "cannot specify lifetime arguments explicitly \
+ if late bound lifetime parameters are present";
+ let note = "the late bound lifetime parameter is introduced here";
+ let span = args.args[0].span();
+ if position == GenericArgPosition::Value
+ && arg_counts.lifetimes != param_counts.lifetimes {
+ let mut err = tcx.sess.struct_span_err(span, msg);
+ err.span_note(span_late, note);
+ err.emit();
+ return true;
+ } else {
+ let mut multispan = MultiSpan::from_span(span);
+ multispan.push_span_label(span_late, note.to_string());
+ tcx.lint_node(lint::builtin::LATE_BOUND_LIFETIME_ARGUMENTS,
+ args.args[0].id(), multispan, msg);
+ return false;
+ }
+ }
+ }
+
+ let check_kind_count = |error_code: (&str, &str),
+ kind,
+ required,
+ permitted,
+ provided,
+ offset| {
+ // We enforce the following: `required` <= `provided` <= `permitted`.
+ // For kinds without defaults (i.e. lifetimes), `required == permitted`.
+ // For other kinds (i.e. types), `permitted` may be greater than `required`.
+ if required <= provided && provided <= permitted {
+ return false;
+ }
+
+ // Unfortunately lifetime and type parameter mismatches are typically styled
+ // differently in diagnostics, which means we have a few cases to consider here.
+ let (bound, quantifier) = if required != permitted {
+ if provided < required {
+ (required, "at least ")
+ } else { // provided > permitted
+ (permitted, "at most ")
+ }
+ } else {
+ (required, "")
+ };
+
+ let mut span = span;
+ let label = if required == permitted && provided > permitted {
+ let diff = provided - permitted;
+ if diff == 1 {
+ // In the case when the user has provided too many arguments,
+ // we want to point to the first unexpected argument.
+ let first_superfluous_arg: &GenericArg = &args.args[offset + permitted];
+ span = first_superfluous_arg.span();
+ }
+ format!(
+ "{}unexpected {} argument{}",
+ if diff != 1 { format!("{} ", diff) } else { String::new() },
+ kind,
+ if diff != 1 { "s" } else { "" },
+ )
+ } else {
+ format!(
+ "expected {}{} {} argument{}",
+ quantifier,
+ bound,
+ kind,
+ if required != 1 { "s" } else { "" },
+ )
+ };
+
+ tcx.sess.struct_span_err_with_code(
+ span,
+ &format!(
+ "wrong number of {} arguments: expected {}{}, found {}",
+ kind,
+ quantifier,
+ bound,
+ provided,
+ ),
+ DiagnosticId::Error({
+ if provided <= permitted {
+ error_code.0
+ } else {
+ error_code.1
+ }
+ }.into())
+ ).span_label(span, label).emit();
+
+ provided > required // `suppress_error`
+ };
+
+ if !infer_lifetimes || arg_counts.lifetimes > param_counts.lifetimes {
+ check_kind_count(
+ error_codes.lifetimes,
+ "lifetime",
+ param_counts.lifetimes,
+ param_counts.lifetimes,
+ arg_counts.lifetimes,
+ 0,
+ );
+ }
+ if !infer_types
+ || arg_counts.types > param_counts.types - defaults.types - has_self as usize {
+ check_kind_count(
+ error_codes.types,
+ "type",
+ param_counts.types - defaults.types - has_self as usize,
+ param_counts.types - has_self as usize,
+ arg_counts.types,
+ arg_counts.lifetimes,
+ )
+ } else {
+ false
+ }
+ }
+
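The arity rule enforced by `check_kind_count` above can be sketched independently of the compiler. The names and error enum here are illustrative, not rustc's actual API; the invariant is `required <= provided <= permitted`, where `permitted - required` is the number of defaulted parameters (always zero for lifetimes).

```rust
// Minimal sketch of the arity rule: `required <= provided <= permitted`.
// `ArityError` is a hypothetical stand-in for rustc's diagnostics.
#[derive(Debug, PartialEq)]
enum ArityError {
    TooFew { expected_at_least: usize },
    TooMany { expected_at_most: usize },
}

fn check_kind_count(required: usize, permitted: usize, provided: usize)
    -> Result<(), ArityError>
{
    assert!(required <= permitted);
    if provided < required {
        Err(ArityError::TooFew { expected_at_least: required })
    } else if provided > permitted {
        Err(ArityError::TooMany { expected_at_most: permitted })
    } else {
        Ok(())
    }
}

fn main() {
    // Two type parameters, one with a default: 1..=2 arguments are valid.
    assert_eq!(check_kind_count(1, 2, 1), Ok(()));
    assert_eq!(check_kind_count(1, 2, 2), Ok(()));
    assert_eq!(check_kind_count(1, 2, 0),
               Err(ArityError::TooFew { expected_at_least: 1 }));
    assert_eq!(check_kind_count(1, 2, 3),
               Err(ArityError::TooMany { expected_at_most: 2 }));
    println!("ok");
}
```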
+ /// Creates the relevant generic argument substitutions
+ /// corresponding to a set of generic parameters.
+ pub fn create_substs_for_generic_args<'a, 'b, A, P, I>(
+ tcx: TyCtxt<'a, 'gcx, 'tcx>,
+ def_id: DefId,
+ parent_substs: &[Kind<'tcx>],
+ has_self: bool,
+ self_ty: Option<Ty<'tcx>>,
+ args_for_def_id: A,
+ provided_kind: P,
+ inferred_kind: I,
+ ) -> &'tcx Substs<'tcx> where
+ A: Fn(DefId) -> (Option<&'b GenericArgs>, bool),
+ P: Fn(&GenericParamDef, &GenericArg) -> Kind<'tcx>,
+ I: Fn(Option<&[Kind<'tcx>]>, &GenericParamDef, bool) -> Kind<'tcx>
+ {
+ // Collect the segments of the path: we need to substitute arguments
+ // for parameters throughout the entire path (wherever there are
+ // generic parameters).
+ let mut parent_defs = tcx.generics_of(def_id);
+ let count = parent_defs.count();
+ let mut stack = vec![(def_id, parent_defs)];
+ while let Some(def_id) = parent_defs.parent {
+ parent_defs = tcx.generics_of(def_id);
+ stack.push((def_id, parent_defs));
+ }
+
+ // We manually build up the substitution, rather than using convenience
+ // methods in subst.rs so that we can iterate over the arguments and
+ // parameters in lock-step linearly, rather than trying to match each pair.
+ let mut substs: AccumulateVec<[Kind<'tcx>; 8]> = if count <= 8 {
+ AccumulateVec::Array(ArrayVec::new())
+ } else {
+ AccumulateVec::Heap(Vec::with_capacity(count))
+ };
+
+ fn push_kind<'tcx>(substs: &mut AccumulateVec<[Kind<'tcx>; 8]>, kind: Kind<'tcx>) {
+ match substs {
+ AccumulateVec::Array(ref mut arr) => arr.push(kind),
+ AccumulateVec::Heap(ref mut vec) => vec.push(kind),
+ }
+ }
+
+ // Iterate over each segment of the path.
+ while let Some((def_id, defs)) = stack.pop() {
+ let mut params = defs.params.iter().peekable();
+
+ // If we have already computed substitutions for parents, we can use those directly.
+ while let Some(&param) = params.peek() {
+ if let Some(&kind) = parent_substs.get(param.index as usize) {
+ push_kind(&mut substs, kind);
+ params.next();
+ } else {
+ break;
+ }
+ }
+
+ // `Self` is handled first, unless it has already been provided in `parent_substs`.
+ if has_self {
+ if let Some(&param) = params.peek() {
+ if param.index == 0 {
+ if let GenericParamDefKind::Type { .. } = param.kind {
+ push_kind(&mut substs, self_ty.map(|ty| ty.into())
+ .unwrap_or_else(|| inferred_kind(None, param, true)));
+ params.next();
+ }
+ }
+ }
+ }
+
+ // Check whether this segment takes generic arguments and the user has provided any.
+ let (generic_args, infer_types) = args_for_def_id(def_id);
+
+ let mut args = generic_args.iter().flat_map(|generic_args| generic_args.args.iter())
+ .peekable();
+
+ loop {
+ // We're going to iterate through the generic arguments that the user
+ // provided, matching them with the generic parameters we expect.
+ // Mismatches can occur as a result of elided lifetimes, or for malformed
+ // input. We try to handle both sensibly.
+ match (args.peek(), params.peek()) {
+ (Some(&arg), Some(&param)) => {
+ match (arg, ¶m.kind) {
+ (GenericArg::Lifetime(_), GenericParamDefKind::Lifetime)
+ | (GenericArg::Type(_), GenericParamDefKind::Type { .. }) => {
+ push_kind(&mut substs, provided_kind(param, arg));
+ args.next();
+ params.next();
+ }
+ (GenericArg::Lifetime(_), GenericParamDefKind::Type { .. }) => {
+ // We expected a type argument, but got a lifetime
+ // argument. This is an error, but we need to handle it
+ // gracefully so we can report sensible errors. In this
+ // case, we're simply going to infer this argument.
+ args.next();
+ }
+ (GenericArg::Type(_), GenericParamDefKind::Lifetime) => {
+ // We expected a lifetime argument, but got a type
+ // argument. That means we're inferring the lifetimes.
+ push_kind(&mut substs, inferred_kind(None, param, infer_types));
+ params.next();
+ }
+ }
+ }
+ (Some(_), None) => {
+ // We should never be able to reach this point with well-formed input.
+ // Getting to this point means the user supplied more arguments than
+ // there are parameters.
+ args.next();
+ }
+ (None, Some(&param)) => {
+ // If there are fewer arguments than parameters, it means
+ // we're inferring the remaining arguments.
+ match param.kind {
+ GenericParamDefKind::Lifetime | GenericParamDefKind::Type { .. } => {
+ let kind = inferred_kind(Some(&substs), param, infer_types);
+ push_kind(&mut substs, kind);
+ }
+ }
+ args.next();
+ params.next();
+ }
+ (None, None) => break,
+ }
+ }
+ }
+
+ tcx.intern_substs(&substs)
+ }
+
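The lock-step matching loop in `create_substs_for_generic_args` above can be modeled with simplified stand-in types. The enums below are illustrative, not the compiler's `GenericArg`/`GenericParamDefKind`; they capture only the kind of each argument and parameter.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Arg { Lifetime, Type }
#[derive(Clone, Copy, Debug, PartialEq)]
enum Param { Lifetime, Type }
#[derive(Debug, PartialEq)]
enum Subst { Provided(Param), Inferred(Param) }

// Mirror of the lock-step loop: walk args and params together; on a kind
// mismatch, either skip the stray argument (lifetime where a type was
// expected) or infer the parameter (type where a lifetime was expected);
// infer everything left over at the end.
fn lock_step(args: &[Arg], params: &[Param]) -> Vec<Subst> {
    let mut substs = Vec::new();
    let mut args = args.iter().peekable();
    let mut params = params.iter().peekable();
    loop {
        match (args.peek(), params.peek()) {
            (Some(&&arg), Some(&&param)) => match (arg, param) {
                (Arg::Lifetime, Param::Lifetime) | (Arg::Type, Param::Type) => {
                    substs.push(Subst::Provided(param));
                    args.next();
                    params.next();
                }
                // Stray lifetime argument: skip it (an error was reported).
                (Arg::Lifetime, Param::Type) => { args.next(); }
                // Type where a lifetime was expected: lifetimes were elided.
                (Arg::Type, Param::Lifetime) => {
                    substs.push(Subst::Inferred(param));
                    params.next();
                }
            },
            // Too many arguments: already errored, just consume them.
            (Some(_), None) => { args.next(); }
            // Fewer arguments than parameters: infer the rest.
            (None, Some(&&param)) => {
                substs.push(Subst::Inferred(param));
                params.next();
            }
            (None, None) => break,
        }
    }
    substs
}

fn main() {
    // Writing `Foo<T>` for a type declared `Foo<'a, T>`: the lifetime is inferred.
    let substs = lock_step(&[Arg::Type], &[Param::Lifetime, Param::Type]);
    assert_eq!(substs, vec![Subst::Inferred(Param::Lifetime),
                            Subst::Provided(Param::Type)]);
    println!("ok");
}
```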
/// Given the type/region arguments provided to some path (along with
/// an implicit Self, if this is a trait reference) returns the complete
/// set of substitutions. This may involve applying defaulted type parameters.
self_ty: Option<Ty<'tcx>>)
-> (&'tcx Substs<'tcx>, Vec<ConvertedBinding<'tcx>>)
{
- let tcx = self.tcx();
-
- debug!("create_substs_for_ast_path(def_id={:?}, self_ty={:?}, \
- generic_args={:?})",
- def_id, self_ty, generic_args);
-
// If the type is parameterized by this region, then replace this
// region with the current anon region binding (in other words,
// whatever & would get replaced with).
+ debug!("create_substs_for_ast_path(def_id={:?}, self_ty={:?}, \
+ generic_args={:?})",
+ def_id, self_ty, generic_args);
- // FIXME(varkor): Separating out the parameters is messy.
- let lifetimes: Vec<_> = generic_args.args.iter().filter_map(|arg| match arg {
- GenericArg::Lifetime(lt) => Some(lt),
- _ => None,
- }).collect();
- let types: Vec<_> = generic_args.args.iter().filter_map(|arg| match arg {
- GenericArg::Type(ty) => Some(ty),
- _ => None,
- }).collect();
- let lt_provided = lifetimes.len();
- let ty_provided = types.len();
-
- let decl_generics = tcx.generics_of(def_id);
- let mut lt_accepted = 0;
- let mut ty_params = ParamRange { required: 0, accepted: 0 };
- for param in &decl_generics.params {
- match param.kind {
- GenericParamDefKind::Lifetime => {
- lt_accepted += 1;
- }
- GenericParamDefKind::Type { has_default, .. } => {
- ty_params.accepted += 1;
- if !has_default {
- ty_params.required += 1;
- }
- }
- };
- }
- if self_ty.is_some() {
- ty_params.required -= 1;
- ty_params.accepted -= 1;
- }
-
- if lt_accepted != lt_provided {
- report_lifetime_number_error(tcx, span, lt_provided, lt_accepted);
- }
+ let tcx = self.tcx();
+ let generic_params = tcx.generics_of(def_id);
// If a self-type was declared, one should be provided.
- assert_eq!(decl_generics.has_self, self_ty.is_some());
+ assert_eq!(generic_params.has_self, self_ty.is_some());
- // Check the number of type parameters supplied by the user.
- if !infer_types || ty_provided > ty_params.required {
- check_type_argument_count(tcx, span, ty_provided, ty_params);
- }
+ let has_self = generic_params.has_self;
+ Self::check_generic_arg_count(
+ self.tcx(),
+ span,
+ &generic_params,
+ &generic_args,
+ GenericArgPosition::Type,
+ has_self,
+ infer_types,
+ GenericArgMismatchErrorCode {
+ lifetimes: ("E0107", "E0107"),
+ types: ("E0243", "E0244"),
+ },
+ );
let is_object = self_ty.map_or(false, |ty| ty.sty == TRAIT_OBJECT_DUMMY_SELF);
let default_needs_object_self = |param: &ty::GenericParamDef| {
false
};
- let own_self = self_ty.is_some() as usize;
- let substs = Substs::for_item(tcx, def_id, |param, substs| {
- match param.kind {
- GenericParamDefKind::Lifetime => {
- let i = param.index as usize - own_self;
- if let Some(lt) = lifetimes.get(i) {
- self.ast_region_to_region(lt, Some(param)).into()
- } else {
- tcx.types.re_static.into()
+ let substs = Self::create_substs_for_generic_args(
+ self.tcx(),
+ def_id,
+ &[][..],
+ self_ty.is_some(),
+ self_ty,
+ // Provide the generic args, and whether types should be inferred.
+ |_| (Some(generic_args), infer_types),
+ // Provide substitutions for parameters for which (valid) arguments have been provided.
+ |param, arg| {
+ match (&param.kind, arg) {
+ (GenericParamDefKind::Lifetime, GenericArg::Lifetime(lt)) => {
+ self.ast_region_to_region(&lt, Some(param)).into()
}
- }
- GenericParamDefKind::Type { has_default, .. } => {
- let i = param.index as usize;
-
- // Handle Self first, so we can adjust the index to match the AST.
- if let (0, Some(ty)) = (i, self_ty) {
- return ty.into();
+ (GenericParamDefKind::Type { .. }, GenericArg::Type(ty)) => {
+ self.ast_ty_to_ty(&ty).into()
}
-
- let i = i - (lt_accepted + own_self);
- if i < ty_provided {
- // A provided type parameter.
- self.ast_ty_to_ty(&types[i]).into()
- } else if infer_types {
- // No type parameters were provided, we can infer all.
- if !default_needs_object_self(param) {
- self.ty_infer_for_def(param, span).into()
+ _ => unreachable!(),
+ }
+ },
+ // Provide substitutions for parameters for which arguments are inferred.
+ |substs, param, infer_types| {
+ match param.kind {
+ GenericParamDefKind::Lifetime => tcx.types.re_static.into(),
+ GenericParamDefKind::Type { has_default, .. } => {
+ if !infer_types && has_default {
+ // No type parameter provided, but a default exists.
+
+ // If we are converting an object type, then the
+ // `Self` parameter is unknown. However, some of the
+ // other type parameters may reference `Self` in their
+ // defaults. This will lead to an ICE if we are not
+ // careful!
+ if default_needs_object_self(param) {
+ struct_span_err!(tcx.sess, span, E0393,
+ "the type parameter `{}` must be explicitly \
+ specified",
+ param.name)
+ .span_label(span,
+ format!("missing reference to `{}`", param.name))
+ .note(&format!("because of the default `Self` reference, \
+ type parameters must be specified on object \
+ types"))
+ .emit();
+ tcx.types.err.into()
+ } else {
+ // This is a default type parameter.
+ self.normalize_ty(
+ span,
+ tcx.at(span).type_of(param.def_id)
+ .subst_spanned(tcx, substs.unwrap(), Some(span))
+ ).into()
+ }
+ } else if infer_types {
+ // No type parameters were provided, we can infer all.
+ if !default_needs_object_self(param) {
+ self.ty_infer_for_def(param, span).into()
+ } else {
+ self.ty_infer(span).into()
+ }
} else {
- self.ty_infer(span).into()
- }
- } else if has_default {
- // No type parameter provided, but a default exists.
-
- // If we are converting an object type, then the
- // `Self` parameter is unknown. However, some of the
- // other type parameters may reference `Self` in their
- // defaults. This will lead to an ICE if we are not
- // careful!
- if default_needs_object_self(param) {
- struct_span_err!(tcx.sess, span, E0393,
- "the type parameter `{}` must be explicitly \
- specified",
- param.name)
- .span_label(span,
- format!("missing reference to `{}`", param.name))
- .note(&format!("because of the default `Self` reference, \
- type parameters must be specified on object \
- types"))
- .emit();
+ // We've already errored above about the mismatch.
tcx.types.err.into()
- } else {
- // This is a default type parameter.
- self.normalize_ty(
- span,
- tcx.at(span).type_of(param.def_id)
- .subst_spanned(tcx, substs, Some(span))
- ).into()
}
- } else {
- // We've already errored above about the mismatch.
- tcx.types.err.into()
}
}
- }
- });
+ },
+ );
let assoc_bindings = generic_args.bindings.iter().map(|binding| {
ConvertedBinding {
}
}).collect();
- debug!("create_substs_for_ast_path(decl_generics={:?}, self_ty={:?}) -> {:?}",
- decl_generics, self_ty, substs);
+ debug!("create_substs_for_ast_path(generic_params={:?}, self_ty={:?}) -> {:?}",
+ generic_params, self_ty, substs);
(substs, assoc_bindings)
}
trait_def_id,
self_ty,
trait_segment);
- assoc_bindings.first().map(|b| self.prohibit_projection(b.span));
+ assoc_bindings.first().map(|b| AstConv::prohibit_assoc_ty_binding(self.tcx(), b.span));
ty::TraitRef::new(trait_def_id, substs)
}
self.normalize_ty(span, tcx.mk_projection(item_def_id, trait_ref.substs))
}
- pub fn prohibit_generics(&self, segments: &[hir::PathSegment]) {
+ pub fn prohibit_generics<'a, T: IntoIterator<Item = &'a hir::PathSegment>>(&self, segments: T) {
for segment in segments {
segment.with_generic_args(|generic_args| {
let (mut err_for_lt, mut err_for_ty) = (false, false);
}
}
for binding in &generic_args.bindings {
- self.prohibit_projection(binding.span);
+ Self::prohibit_assoc_ty_binding(self.tcx(), binding.span);
break;
}
})
}
}
- pub fn prohibit_projection(&self, span: Span) {
- let mut err = struct_span_err!(self.tcx().sess, span, E0229,
+ pub fn prohibit_assoc_ty_binding(tcx: TyCtxt, span: Span) {
+ let mut err = struct_span_err!(tcx.sess, span, E0229,
"associated type bindings are not allowed here");
err.span_label(span, "associated type not allowed here").emit();
}
(auto_traits, trait_bounds)
}
-fn check_type_argument_count(tcx: TyCtxt,
- span: Span,
- supplied: usize,
- ty_params: ParamRange)
-{
- let (required, accepted) = (ty_params.required, ty_params.accepted);
- if supplied < required {
- let expected = if required < accepted {
- "expected at least"
- } else {
- "expected"
- };
- let arguments_plural = if required == 1 { "" } else { "s" };
-
- struct_span_err!(tcx.sess, span, E0243,
- "wrong number of type arguments: {} {}, found {}",
- expected, required, supplied)
- .span_label(span,
- format!("{} {} type argument{}",
- expected,
- required,
- arguments_plural))
- .emit();
- } else if supplied > accepted {
- let expected = if required < accepted {
- format!("expected at most {}", accepted)
- } else {
- format!("expected {}", accepted)
- };
- let arguments_plural = if accepted == 1 { "" } else { "s" };
-
- struct_span_err!(tcx.sess, span, E0244,
- "wrong number of type arguments: {}, found {}",
- expected, supplied)
- .span_label(
- span,
- format!("{} type argument{}",
- if accepted == 0 { "expected no" } else { &expected },
- arguments_plural)
- )
- .emit();
- }
-}
-
-fn report_lifetime_number_error(tcx: TyCtxt, span: Span, number: usize, expected: usize) {
- let label = if number < expected {
- if expected == 1 {
- format!("expected {} lifetime parameter", expected)
- } else {
- format!("expected {} lifetime parameters", expected)
- }
- } else {
- let additional = number - expected;
- if additional == 1 {
- "unexpected lifetime parameter".to_string()
- } else {
- format!("{} unexpected lifetime parameters", additional)
- }
- };
- struct_span_err!(tcx.sess, span, E0107,
- "wrong number of lifetime parameters: expected {}, found {}",
- expected, number)
- .span_label(span, label)
- .emit();
-}
-
// A helper struct for conveniently grouping a set of bounds which we pass to
// and return from functions in multiple places.
#[derive(PartialEq, Eq, Clone, Debug)]
// Replace all regions inside the generator interior with late bound regions
// Note that each region slot in the types gets a new fresh late bound region,
// which means that none of the regions inside relate to any other, even if
- // typeck had previously found contraints that would cause them to be related.
+ // typeck had previously found constraints that would cause them to be related.
let mut counter = 0;
let type_list = fcx.tcx.fold_regions(&type_list, &mut false, |_, current_depth| {
counter += 1;
use rustc::ty::adjustment::{AllowTwoPhase, AutoBorrow, AutoBorrowMutability};
use rustc::ty::fold::TypeFoldable;
use rustc::infer::{self, InferOk};
-use syntax_pos::Span;
use rustc::hir;
+use syntax_pos::Span;
use std::ops::Deref;
fn instantiate_method_substs(
&mut self,
pick: &probe::Pick<'tcx>,
- segment: &hir::PathSegment,
+ seg: &hir::PathSegment,
parent_substs: &Substs<'tcx>,
) -> &'tcx Substs<'tcx> {
// Determine the values for the generic parameters of the method.
// If they were not explicitly supplied, just construct fresh
// variables.
- let method_generics = self.tcx.generics_of(pick.item.def_id);
- let mut fn_segment = Some((segment, method_generics));
- let supress_mismatch = self.fcx.check_impl_trait(self.span, fn_segment);
- self.fcx.check_generic_arg_count(self.span, &mut fn_segment, true, supress_mismatch);
+ let generics = self.tcx.generics_of(pick.item.def_id);
+ AstConv::check_generic_arg_count_for_call(
+ self.tcx,
+ self.span,
+ &generics,
+ &seg,
+ true, // `is_method_call`
+ );
// Create subst for early-bound lifetime parameters, combining
// parameters from the type and those from the method.
- assert_eq!(method_generics.parent_count, parent_substs.len());
- let provided = &segment.args;
- let own_counts = method_generics.own_counts();
- Substs::for_item(self.tcx, pick.item.def_id, |param, _| {
- let mut i = param.index as usize;
- if i < parent_substs.len() {
- parent_substs[i]
- } else {
- let (is_lt, is_ty) = match param.kind {
- GenericParamDefKind::Lifetime => (true, false),
- GenericParamDefKind::Type { .. } => (false, true),
- };
- provided.as_ref().and_then(|data| {
- for arg in &data.args {
- match arg {
- GenericArg::Lifetime(lt) if is_lt => {
- if i == parent_substs.len() {
- return Some(AstConv::ast_region_to_region(
- self.fcx, lt, Some(param)).into());
- }
- i -= 1;
- }
- GenericArg::Lifetime(_) => {}
- GenericArg::Type(ty) if is_ty => {
- if i == parent_substs.len() + own_counts.lifetimes {
- return Some(self.to_ty(ty).into());
- }
- i -= 1;
- }
- GenericArg::Type(_) => {}
- }
+ assert_eq!(generics.parent_count, parent_substs.len());
+
+ AstConv::create_substs_for_generic_args(
+ self.tcx,
+ pick.item.def_id,
+ parent_substs,
+ false,
+ None,
+ // Provide the generic args, and whether types should be inferred.
+ |_| {
+ // The last argument of the returned tuple here is unimportant.
+ if let Some(ref data) = seg.args {
+ (Some(data), false)
+ } else {
+ (None, false)
+ }
+ },
+ // Provide substitutions for parameters for which (valid) arguments have been provided.
+ |param, arg| {
+ match (&param.kind, arg) {
+ (GenericParamDefKind::Lifetime, GenericArg::Lifetime(lt)) => {
+ AstConv::ast_region_to_region(self.fcx, lt, Some(param)).into()
}
- None
- }).unwrap_or_else(|| self.var_for_def(self.span, param))
- }
- })
+ (GenericParamDefKind::Type { .. }, GenericArg::Type(ty)) => {
+ self.to_ty(ty).into()
+ }
+ _ => unreachable!(),
+ }
+ },
+ // Provide substitutions for parameters for which arguments are inferred.
+ |_, param, _| self.var_for_def(self.span, param),
+ )
}
fn unify_receivers(&mut self, self_ty: Ty<'tcx>, method_self_ty: Ty<'tcx>) {
use TypeAndSubsts;
use lint;
use util::common::{ErrorReported, indenter};
-use util::nodemap::{DefIdMap, DefIdSet, FxHashMap, NodeMap};
+use util::nodemap::{DefIdMap, DefIdSet, FxHashMap, FxHashSet, NodeMap};
use std::cell::{Cell, RefCell, Ref, RefMut};
use rustc_data_structures::sync::Lrc;
}
}
+#[derive(Debug)]
+struct PathSeg(DefId, usize);
+
pub struct FnCtxt<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
body_id: ast::NodeId,
// backwards compatibility. This makes fallback a stronger type hint than a cast coercion.
fcx.check_casts();
- // Closure and generater analysis may run after fallback
+ // Closure and generator analysis may run after fallback
// because they don't constrain other type variables.
fcx.closure_analyze(body);
assert!(fcx.deferred_call_resolutions.borrow().is_empty());
};
let param_env = ty::ParamEnv::reveal_all();
if let Ok(static_) = tcx.const_eval(param_env.and(cid)) {
- let alloc = tcx.const_value_to_allocation(static_);
+ let alloc = tcx.const_to_allocation(static_);
if alloc.relocations.len() != 0 {
let msg = "statics with a custom `#[link_section]` must be a \
simple list of bytes on the wasm target with no \
// unconstrained floats with f64.
// Fallback becomes very dubious if we have encountered type-checking errors.
// In that case, fallback to TyError.
- // The return value indicates whether fallback has occured.
+ // The return value indicates whether fallback has occurred.
fn fallback_if_possible(&self, ty: Ty<'tcx>) -> bool {
use rustc::ty::error::UnconstrainedNumeric::Neither;
use rustc::ty::error::UnconstrainedNumeric::{UnconstrainedInt, UnconstrainedFloat};
{
match *qpath {
hir::QPath::Resolved(ref maybe_qself, ref path) => {
- let opt_self_ty = maybe_qself.as_ref().map(|qself| self.to_ty(qself));
- let ty = AstConv::def_to_ty(self, opt_self_ty, path, true);
+ let self_ty = maybe_qself.as_ref().map(|qself| self.to_ty(qself));
+ let ty = AstConv::def_to_ty(self, self_ty, path, true);
(path.def, ty)
}
hir::QPath::TypeRelative(ref qself, ref segment) => {
err.span_suggestion(span_semi, "consider removing this semicolon", "".to_string());
}
- // Instantiates the given path, which must refer to an item with the given
- // number of type parameters and type.
- pub fn instantiate_value_path(&self,
- segments: &[hir::PathSegment],
- opt_self_ty: Option<Ty<'tcx>>,
- def: Def,
- span: Span,
- node_id: ast::NodeId)
- -> Ty<'tcx> {
- debug!("instantiate_value_path(path={:?}, def={:?}, node_id={})",
- segments,
- def,
- node_id);
-
+ fn def_ids_for_path_segments(&self,
+ segments: &[hir::PathSegment],
+ def: Def)
+ -> Vec<PathSeg> {
// We need to extract the type parameters supplied by the user in
// the path `path`. Due to the current setup, this is a bit of a
// tricky-process; the problem is that resolve only tells us the
// The first step then is to categorize the segments appropriately.
assert!(!segments.is_empty());
+ let last = segments.len() - 1;
+
+ let mut path_segs = vec![];
- let mut ufcs_associated = None;
- let mut type_segment = None;
- let mut fn_segment = None;
match def {
// Case 1. Reference to a struct/variant constructor.
Def::StructCtor(def_id, ..) |
Def::VariantCtor(def_id, ..) => {
// Everything but the final segment should have no
// parameters at all.
- let mut generics = self.tcx.generics_of(def_id);
- if let Some(def_id) = generics.parent {
- // Variant and struct constructors use the
- // generics of their parent type definition.
- generics = self.tcx.generics_of(def_id);
- }
- type_segment = Some((segments.last().unwrap(), generics));
+ let generics = self.tcx.generics_of(def_id);
+ // Variant and struct constructors use the
+ // generics of their parent type definition.
+ let generics_def_id = generics.parent.unwrap_or(def_id);
+ path_segs.push(PathSeg(generics_def_id, last));
}
// Case 2. Reference to a top-level value.
Def::Fn(def_id) |
Def::Const(def_id) |
Def::Static(def_id, _) => {
- fn_segment = Some((segments.last().unwrap(), self.tcx.generics_of(def_id)));
+ path_segs.push(PathSeg(def_id, last));
}
// Case 3. Reference to a method or associated const.
+ Def::Method(def_id) |
+ Def::AssociatedConst(def_id) => {
+ if segments.len() >= 2 {
+ let generics = self.tcx.generics_of(def_id);
+ path_segs.push(PathSeg(generics.parent.unwrap(), last - 1));
+ }
+ path_segs.push(PathSeg(def_id, last));
+ }
+
+ // Case 4. Local variable, no generics.
+ Def::Local(..) | Def::Upvar(..) => {}
+
+ _ => bug!("unexpected definition: {:?}", def),
+ }
+
+ debug!("path_segs = {:?}", path_segs);
+
+ path_segs
+ }
+
+ // Instantiates the given path, which must refer to an item with the given
+ // number of type parameters and type.
+ pub fn instantiate_value_path(&self,
+ segments: &[hir::PathSegment],
+ self_ty: Option<Ty<'tcx>>,
+ def: Def,
+ span: Span,
+ node_id: ast::NodeId)
+ -> Ty<'tcx> {
+ debug!("instantiate_value_path(path={:?}, def={:?}, node_id={})",
+ segments,
+ def,
+ node_id);
+
+ let path_segs = self.def_ids_for_path_segments(segments, def);
+
+ let mut ufcs_associated = None;
+ match def {
Def::Method(def_id) |
Def::AssociatedConst(def_id) => {
let container = self.tcx.associated_item(def_id).container;
}
ty::ImplContainer(_) => {}
}
-
- let generics = self.tcx.generics_of(def_id);
- if segments.len() >= 2 {
- let parent_generics = self.tcx.generics_of(generics.parent.unwrap());
- type_segment = Some((&segments[segments.len() - 2], parent_generics));
- } else {
+ if segments.len() == 1 {
// `<T>::assoc` will end up here, and so can `T::assoc`.
- let self_ty = opt_self_ty.expect("UFCS sugared assoc missing Self");
+ let self_ty = self_ty.expect("UFCS sugared assoc missing Self");
ufcs_associated = Some((container, self_ty));
}
- fn_segment = Some((segments.last().unwrap(), generics));
}
-
- // Case 4. Local variable, no generics.
- Def::Local(..) | Def::Upvar(..) => {}
-
- _ => bug!("unexpected definition: {:?}", def),
+ _ => {}
}
- debug!("type_segment={:?} fn_segment={:?}", type_segment, fn_segment);
-
// Now that we have categorized what space the parameters for each
// segment belong to, let's sort out the parameters that the user
// provided (if any) into their appropriate spaces. We'll also report
// errors if type parameters are provided in an inappropriate place.
- let poly_segments = type_segment.is_some() as usize +
- fn_segment.is_some() as usize;
- AstConv::prohibit_generics(self, &segments[..segments.len() - poly_segments]);
+
+ let mut generic_segs = FxHashSet::default();
+ for PathSeg(_, index) in &path_segs {
+ generic_segs.insert(index);
+ }
+ AstConv::prohibit_generics(self, segments.iter().enumerate().filter_map(|(index, seg)| {
+ if !generic_segs.contains(&index) {
+ Some(seg)
+ } else {
+ None
+ }
+ }));
match def {
Def::Local(nid) | Def::Upvar(nid, ..) => {
// variables. If the user provided some types, we may still need
// to add defaults. If the user provided *too many* types, that's
// a problem.
- let supress_mismatch = self.check_impl_trait(span, fn_segment);
- self.check_generic_arg_count(span, &mut type_segment, false, supress_mismatch);
- self.check_generic_arg_count(span, &mut fn_segment, false, supress_mismatch);
- let (fn_start, has_self) = match (type_segment, fn_segment) {
- (_, Some((_, generics))) => {
- (generics.parent_count, generics.has_self)
- }
- (Some((_, generics)), None) => {
- (generics.params.len(), generics.has_self)
- }
- (None, None) => (0, false)
- };
- // FIXME(varkor): Separating out the parameters is messy.
- let mut lifetimes_type_seg = vec![];
- let mut types_type_seg = vec![];
- let mut infer_types_type_seg = true;
- if let Some((seg, _)) = type_segment {
- if let Some(ref data) = seg.args {
- for arg in &data.args {
- match arg {
- GenericArg::Lifetime(lt) => lifetimes_type_seg.push(lt),
- GenericArg::Type(ty) => types_type_seg.push(ty),
- }
- }
+ let mut infer_args_for_err = FxHashSet::default();
+ for &PathSeg(def_id, index) in &path_segs {
+ let seg = &segments[index];
+ let generics = self.tcx.generics_of(def_id);
+ // Argument-position `impl Trait` is treated as a normal generic
+ // parameter internally, but we don't allow users to specify the
+ // parameter's value explicitly, so we have to do some error-
+ // checking here.
+ let suppress_errors = AstConv::check_generic_arg_count_for_call(
+ self.tcx,
+ span,
+ &generics,
+ &seg,
+ false, // `is_method_call`
+ );
+ if suppress_errors {
+ infer_args_for_err.insert(index);
+ self.set_tainted_by_errors(); // See issue #53251.
}
- infer_types_type_seg = seg.infer_types;
}
- let mut lifetimes_fn_seg = vec![];
- let mut types_fn_seg = vec![];
- let mut infer_types_fn_seg = true;
- if let Some((seg, _)) = fn_segment {
- if let Some(ref data) = seg.args {
- for arg in &data.args {
- match arg {
- GenericArg::Lifetime(lt) => lifetimes_fn_seg.push(lt),
- GenericArg::Type(ty) => types_fn_seg.push(ty),
- }
- }
- }
- infer_types_fn_seg = seg.infer_types;
- }
+ let has_self = path_segs.last().map(|PathSeg(def_id, _)| {
+ self.tcx.generics_of(*def_id).has_self
+ }).unwrap_or(false);
- let substs = Substs::for_item(self.tcx, def.def_id(), |param, substs| {
- let mut i = param.index as usize;
+ let def_id = def.def_id();
- let (segment, lifetimes, types, infer_types) = if i < fn_start {
- if let GenericParamDefKind::Type { .. } = param.kind {
- // Handle Self first, so we can adjust the index to match the AST.
- if has_self && i == 0 {
- return opt_self_ty.map(|ty| ty.into()).unwrap_or_else(|| {
- self.var_for_def(span, param)
- });
+ let substs = AstConv::create_substs_for_generic_args(
+ self.tcx,
+ def_id,
+ &[][..],
+ has_self,
+ self_ty,
+ // Provide the generic args, and whether types should be inferred.
+ |def_id| {
+ if let Some(&PathSeg(_, index)) = path_segs.iter().find(|&PathSeg(did, _)| {
+ *did == def_id
+ }) {
+ // If we've encountered an `impl Trait`-related error, we're just
+ // going to infer the arguments for better error messages.
+ if !infer_args_for_err.contains(&index) {
+ // Check whether the user has provided generic arguments.
+ if let Some(ref data) = segments[index].args {
+ return (Some(data), segments[index].infer_types);
+ }
}
+ return (None, segments[index].infer_types);
}
- i -= has_self as usize;
- (type_segment, &lifetimes_type_seg, &types_type_seg, infer_types_type_seg)
- } else {
- i -= fn_start;
- (fn_segment, &lifetimes_fn_seg, &types_fn_seg, infer_types_fn_seg)
- };
- match param.kind {
- GenericParamDefKind::Lifetime => {
- if let Some(lifetime) = lifetimes.get(i) {
- AstConv::ast_region_to_region(self, lifetime, Some(param)).into()
- } else {
- self.re_infer(span, Some(param)).unwrap().into()
+ (None, true)
+ },
+ // Provide substitutions for parameters for which (valid) arguments have been provided.
+ |param, arg| {
+ match (&param.kind, arg) {
+ (GenericParamDefKind::Lifetime, GenericArg::Lifetime(lt)) => {
+ AstConv::ast_region_to_region(self, lt, Some(param)).into()
}
+ (GenericParamDefKind::Type { .. }, GenericArg::Type(ty)) => {
+ self.to_ty(ty).into()
+ }
+ _ => unreachable!(),
}
- GenericParamDefKind::Type { .. } => {
- // Skip over the lifetimes in the same segment.
- if let Some((_, generics)) = segment {
- i -= generics.own_counts().lifetimes;
+ },
+ // Provide substitutions for parameters for which arguments are inferred.
+ |substs, param, infer_types| {
+ match param.kind {
+ GenericParamDefKind::Lifetime => {
+ self.re_infer(span, Some(param)).unwrap().into()
}
-
- let has_default = match param.kind {
- GenericParamDefKind::Type { has_default, .. } => has_default,
- _ => unreachable!()
- };
-
- if let Some(ast_ty) = types.get(i) {
- // A provided type parameter.
- self.to_ty(ast_ty).into()
- } else if !infer_types && has_default {
- // No type parameter provided, but a default exists.
- let default = self.tcx.type_of(param.def_id);
- self.normalize_ty(
- span,
- default.subst_spanned(self.tcx, substs, Some(span))
- ).into()
- } else {
- // No type parameters were provided, we can infer all.
- // This can also be reached in some error cases:
- // We prefer to use inference variables instead of
- // TyError to let type inference recover somewhat.
- self.var_for_def(span, param)
+ GenericParamDefKind::Type { has_default, .. } => {
+ if !infer_types && has_default {
+ // If we have a default, then it doesn't matter that we're not
+ // inferring the type arguments: we provide the default where any
+ // is missing.
+ let default = self.tcx.type_of(param.def_id);
+ self.normalize_ty(
+ span,
+ default.subst_spanned(self.tcx, substs.unwrap(), Some(span))
+ ).into()
+ } else {
+ // If no type arguments were provided, we have to infer them.
+ // This case also occurs as a result of some malformed input, e.g.
+ // a lifetime argument being given instead of a type parameter.
+ // Using inference instead of `TyError` gives better error messages.
+ self.var_for_def(span, param)
+ }
}
}
- }
- });
+ },
+ );
// The things we are substituting into the type should not contain
// escaping late-bound regions, nor should the base type scheme.
- let ty = self.tcx.type_of(def.def_id());
+ let ty = self.tcx.type_of(def_id);
assert!(!substs.has_escaping_regions());
assert!(!ty.has_escaping_regions());
// Add all the obligations that are required, substituting and
// normalizing appropriately.
- let bounds = self.instantiate_bounds(span, def.def_id(), &substs);
+ let bounds = self.instantiate_bounds(span, def_id, &substs);
self.add_obligations_for_parameters(
- traits::ObligationCause::new(span, self.body_id, traits::ItemObligation(def.def_id())),
+ traits::ObligationCause::new(span, self.body_id, traits::ItemObligation(def_id)),
&bounds);
// Substitute the values for the type parameters into the type of
}
}
- self.check_rustc_args_require_const(def.def_id(), node_id, span);
+ self.check_rustc_args_require_const(def_id, node_id, span);
debug!("instantiate_value_path: type of {:?} is {:?}",
node_id,
directly, not through a function pointer");
}
- /// Report errors if the provided parameters are too few or too many.
- fn check_generic_arg_count(&self,
- span: Span,
- segment: &mut Option<(&hir::PathSegment, &ty::Generics)>,
- is_method_call: bool,
- supress_mismatch_error: bool) {
- let (lifetimes, types, infer_types, bindings) = segment.map_or(
- (vec![], vec![], true, &[][..]),
- |(s, _)| {
- s.args.as_ref().map_or(
- (vec![], vec![], s.infer_types, &[][..]),
- |data| {
- let (mut lifetimes, mut types) = (vec![], vec![]);
- data.args.iter().for_each(|arg| match arg {
- GenericArg::Lifetime(lt) => lifetimes.push(lt),
- GenericArg::Type(ty) => types.push(ty),
- });
- (lifetimes, types, s.infer_types, &data.bindings[..])
- }
- )
- });
-
- // Check provided parameters.
- let ((ty_required, ty_accepted), lt_accepted) =
- segment.map_or(((0, 0), 0), |(_, generics)| {
- struct ParamRange {
- required: usize,
- accepted: usize
- };
-
- let mut lt_accepted = 0;
- let mut ty_params = ParamRange { required: 0, accepted: 0 };
- for param in &generics.params {
- match param.kind {
- GenericParamDefKind::Lifetime => lt_accepted += 1,
- GenericParamDefKind::Type { has_default, .. } => {
- ty_params.accepted += 1;
- if !has_default {
- ty_params.required += 1;
- }
- }
- };
- }
- if generics.parent.is_none() && generics.has_self {
- ty_params.required -= 1;
- ty_params.accepted -= 1;
- }
-
- ((ty_params.required, ty_params.accepted), lt_accepted)
- });
-
- let count_type_params = |n| {
- format!("{} type parameter{}", n, if n == 1 { "" } else { "s" })
- };
- let expected_text = count_type_params(ty_accepted);
- let actual_text = count_type_params(types.len());
- if let Some((mut err, span)) = if types.len() > ty_accepted {
- // To prevent derived errors to accumulate due to extra
- // type parameters, we force instantiate_value_path to
- // use inference variables instead of the provided types.
- *segment = None;
- let span = types[ty_accepted].span;
- Some((struct_span_err!(self.tcx.sess, span, E0087,
- "too many type parameters provided: \
- expected at most {}, found {}",
- expected_text, actual_text), span))
- } else if types.len() < ty_required && !infer_types && !supress_mismatch_error {
- Some((struct_span_err!(self.tcx.sess, span, E0089,
- "too few type parameters provided: \
- expected {}, found {}",
- expected_text, actual_text), span))
- } else {
- None
- } {
- self.set_tainted_by_errors(); // #53251
- err.span_label(span, format!("expected {}", expected_text)).emit();
- }
-
- if !bindings.is_empty() {
- AstConv::prohibit_projection(self, bindings[0].span);
- }
-
- let infer_lifetimes = lifetimes.len() == 0;
- // Prohibit explicit lifetime arguments if late bound lifetime parameters are present.
- let has_late_bound_lifetime_defs =
- segment.map_or(None, |(_, generics)| generics.has_late_bound_regions);
- if let (Some(span_late), false) = (has_late_bound_lifetime_defs, lifetimes.is_empty()) {
- // Report this as a lint only if no error was reported previously.
- let primary_msg = "cannot specify lifetime arguments explicitly \
- if late bound lifetime parameters are present";
- let note_msg = "the late bound lifetime parameter is introduced here";
- if !is_method_call && (lifetimes.len() > lt_accepted ||
- lifetimes.len() < lt_accepted && !infer_lifetimes) {
- let mut err = self.tcx.sess.struct_span_err(lifetimes[0].span, primary_msg);
- err.span_note(span_late, note_msg);
- err.emit();
- *segment = None;
- } else {
- let mut multispan = MultiSpan::from_span(lifetimes[0].span);
- multispan.push_span_label(span_late, note_msg.to_string());
- self.tcx.lint_node(lint::builtin::LATE_BOUND_LIFETIME_ARGUMENTS,
- lifetimes[0].id, multispan, primary_msg);
- }
- return;
- }
-
- let count_lifetime_params = |n| {
- format!("{} lifetime parameter{}", n, if n == 1 { "" } else { "s" })
- };
- let expected_text = count_lifetime_params(lt_accepted);
- let actual_text = count_lifetime_params(lifetimes.len());
- if let Some((mut err, span)) = if lifetimes.len() > lt_accepted {
- let span = lifetimes[lt_accepted].span;
- Some((struct_span_err!(self.tcx.sess, span, E0088,
- "too many lifetime parameters provided: \
- expected at most {}, found {}",
- expected_text, actual_text), span))
- } else if lifetimes.len() < lt_accepted && !infer_lifetimes {
- Some((struct_span_err!(self.tcx.sess, span, E0090,
- "too few lifetime parameters provided: \
- expected {}, found {}",
- expected_text, actual_text), span))
- } else {
- None
- } {
- err.span_label(span, format!("expected {}", expected_text)).emit();
- }
- }
-
- /// Report error if there is an explicit type parameter when using `impl Trait`.
- fn check_impl_trait(&self,
- span: Span,
- segment: Option<(&hir::PathSegment, &ty::Generics)>)
- -> bool {
- let segment = segment.map(|(path_segment, generics)| {
- let explicit = !path_segment.infer_types;
- let impl_trait = generics.params.iter().any(|param| match param.kind {
- ty::GenericParamDefKind::Type {
- synthetic: Some(hir::SyntheticTyParamKind::ImplTrait), ..
- } => true,
- _ => false,
- });
-
- if explicit && impl_trait {
- let mut err = struct_span_err! {
- self.tcx.sess,
- span,
- E0632,
- "cannot provide explicit type parameters when `impl Trait` is \
- used in argument position."
- };
-
- err.emit();
- }
-
- impl_trait
- });
-
- segment.unwrap_or(false)
- }
-
// Resolves `typ` by a single level if `typ` is a type variable.
// If no resolution is possible, then an error is reported.
// Numeric inference variables may be left unresolved.
// how all the types get adjusted.)
match ref_kind {
ty::ImmBorrow => {
- // The reference being reborrowed is a sharable ref of
+ // The reference being reborrowed is a shareable ref of
// type `&'a T`. In this case, it doesn't matter where we
// *found* the `&T` pointer, the memory it references will
// be valid and immutable for `'a`. So we can stop here.
}
fn visit_node_id(&mut self, span: Span, hir_id: hir::HirId) {
- // Export associated path extensions and method resultions.
+ // Export associated path extensions and method resolutions.
if let Some(def) = self.fcx
.tables
.borrow_mut()
"##,
E0087: r##"
-Too many type parameters were supplied for a function. For example:
+Too many type arguments were supplied for a function. For example:
```compile_fail,E0087
fn foo<T>() {}
fn main() {
- foo::<f64, bool>(); // error, expected 1 parameter, found 2 parameters
+ foo::<f64, bool>(); // error: wrong number of type arguments:
+ // expected 1, found 2
}
```
-The number of supplied parameters must exactly match the number of defined type
+The number of supplied arguments must exactly match the number of defined type
parameters.
"##,
E0088: r##"
-You gave too many lifetime parameters. Erroneous code example:
+You gave too many lifetime arguments. Erroneous code example:
```compile_fail,E0088
fn f() {}
fn main() {
- f::<'static>() // error: too many lifetime parameters provided
+ f::<'static>() // error: wrong number of lifetime arguments:
+ // expected 0, found 1
}
```
-Please check you give the right number of lifetime parameters. Example:
+Please check you give the right number of lifetime arguments. Example:
```
fn f() {}
"##,
E0089: r##"
-Not enough type parameters were supplied for a function. For example:
+Too few type arguments were supplied for a function. For example:
```compile_fail,E0089
fn foo<T, U>() {}
fn main() {
- foo::<f64>(); // error, expected 2 parameters, found 1 parameter
+ foo::<f64>(); // error: wrong number of type arguments: expected 2, found 1
}
```
-Note that if a function takes multiple type parameters but you want the compiler
+Note that if a function takes multiple type arguments but you want the compiler
to infer some of them, you can use type placeholders:
```compile_fail,E0089
fn main() {
let x: bool = true;
- foo::<f64>(x); // error, expected 2 parameters, found 1 parameter
+ foo::<f64>(x); // error: wrong number of type arguments:
+ // expected 2, found 1
foo::<_, f64>(x); // same as `foo::<bool, f64>(x)`
}
```
"##,
E0090: r##"
-You gave too few lifetime parameters. Example:
+You gave too few lifetime arguments. Example:
```compile_fail,E0090
fn foo<'a: 'b, 'b: 'a>() {}
fn main() {
- foo::<'static>(); // error, expected 2 lifetime parameters
+ foo::<'static>(); // error: wrong number of lifetime arguments:
+ // expected 2, found 1
}
```
-Please check you give the right number of lifetime parameters. Example:
+Please check you give the right number of lifetime arguments. Example:
```
fn foo<'a: 'b, 'b: 'a>() {}
// }
// ```
//
- // In a concession to backwards compatbility, we continue to
+ // In a concession to backwards compatibility, we continue to
// permit those, so long as the lifetimes aren't used in
// associated types. I believe this is sound, because lifetimes
// used elsewhere are not projected back out.
// In fact, the iteration of an FxHashMap can even vary between platforms,
// since FxHasher has different behavior for 32-bit and 64-bit platforms.
//
- // Obviously, it's extremely undesireable for documentation rendering
+ // Obviously, it's extremely undesirable for documentation rendering
// to be dependent on the platform it's run on. Apart from being confusing
// to end users, it makes writing tests much more difficult, as predicates
// can appear in any order in the final result.
// predicates and bounds, however, we ensure that for a given codebase, all
// auto-trait impls always render in exactly the same way.
//
- // Using the Debug impementation for sorting prevents us from needing to
+ // Using the Debug implementation for sorting prevents us from needing to
// write quite a bit of almost entirely useless code (e.g. how should two
// Types be sorted relative to each other). It also allows us to solve the
// problem for both WherePredicates and GenericBounds at the same time. This
return impls;
}
let ty = self.cx.tcx.type_of(def_id);
- if self.cx.access_levels.borrow().is_doc_reachable(def_id) || ty.is_primitive() {
- let generics = self.cx.tcx.generics_of(def_id);
- let real_name = name.clone().map(|name| Ident::from_str(&name));
- let param_env = self.cx.tcx.param_env(def_id);
- for &trait_def_id in self.cx.all_traits.iter() {
- if !self.cx.access_levels.borrow().is_doc_reachable(trait_def_id) ||
- self.cx.generated_synthetics
- .borrow_mut()
- .get(&(def_id, trait_def_id))
- .is_some() {
- continue
- }
- self.cx.tcx.for_each_relevant_impl(trait_def_id, ty, |impl_def_id| {
- self.cx.tcx.infer_ctxt().enter(|infcx| {
- let t_generics = infcx.tcx.generics_of(impl_def_id);
- let trait_ref = infcx.tcx.impl_trait_ref(impl_def_id)
- .expect("Cannot get impl trait");
-
- match trait_ref.self_ty().sty {
- ty::TypeVariants::TyParam(_) => {},
- _ => return,
- }
-
- let substs = infcx.fresh_substs_for_item(DUMMY_SP, def_id);
- let ty = ty.subst(infcx.tcx, substs);
- let param_env = param_env.subst(infcx.tcx, substs);
-
- let impl_substs = infcx.fresh_substs_for_item(DUMMY_SP, impl_def_id);
- let trait_ref = trait_ref.subst(infcx.tcx, impl_substs);
-
- // Require the type the impl is implemented on to match
- // our type, and ignore the impl if there was a mismatch.
- let cause = traits::ObligationCause::dummy();
- let eq_result = infcx.at(&cause, param_env)
- .eq(trait_ref.self_ty(), ty);
- if let Ok(InferOk { value: (), obligations }) = eq_result {
- // FIXME(eddyb) ignoring `obligations` might cause false positives.
- drop(obligations);
-
- let may_apply = infcx.predicate_may_hold(&traits::Obligation::new(
- cause.clone(),
- param_env,
- trait_ref.to_predicate(),
- ));
- if !may_apply {
- return
- }
- self.cx.generated_synthetics.borrow_mut()
- .insert((def_id, trait_def_id));
- let trait_ = hir::TraitRef {
- path: get_path_for_type(infcx.tcx,
- trait_def_id,
- hir::def::Def::Trait),
- ref_id: ast::DUMMY_NODE_ID,
- hir_ref_id: hir::DUMMY_HIR_ID,
- };
- let provided_trait_methods =
- infcx.tcx.provided_trait_methods(trait_def_id)
- .into_iter()
- .map(|meth| meth.ident.to_string())
- .collect();
-
- let ty = self.cx.get_real_ty(def_id, def_ctor, &real_name, generics);
- let predicates = infcx.tcx.predicates_of(impl_def_id);
-
- impls.push(Item {
- source: infcx.tcx.def_span(impl_def_id).clean(self.cx),
- name: None,
- attrs: Default::default(),
- visibility: None,
- def_id: self.cx.next_def_id(impl_def_id.krate),
- stability: None,
- deprecation: None,
- inner: ImplItem(Impl {
- unsafety: hir::Unsafety::Normal,
- generics: (t_generics, &predicates).clean(self.cx),
- provided_trait_methods,
- trait_: Some(trait_.clean(self.cx)),
- for_: ty.clean(self.cx),
- items: infcx.tcx.associated_items(impl_def_id)
- .collect::<Vec<_>>()
- .clean(self.cx),
- polarity: None,
- synthetic: false,
- blanket_impl: Some(infcx.tcx.type_of(impl_def_id)
- .clean(self.cx)),
- }),
- });
+ let generics = self.cx.tcx.generics_of(def_id);
+ let real_name = name.clone().map(|name| Ident::from_str(&name));
+ let param_env = self.cx.tcx.param_env(def_id);
+ for &trait_def_id in self.cx.all_traits.iter() {
+ if !self.cx.access_levels.borrow().is_doc_reachable(trait_def_id) ||
+ self.cx.generated_synthetics
+ .borrow_mut()
+ .get(&(def_id, trait_def_id))
+ .is_some() {
+ continue
+ }
+ self.cx.tcx.for_each_relevant_impl(trait_def_id, ty, |impl_def_id| {
+ self.cx.tcx.infer_ctxt().enter(|infcx| {
+ let t_generics = infcx.tcx.generics_of(impl_def_id);
+ let trait_ref = infcx.tcx.impl_trait_ref(impl_def_id)
+ .expect("Cannot get impl trait");
+
+ match trait_ref.self_ty().sty {
+ ty::TypeVariants::TyParam(_) => {},
+ _ => return,
+ }
+
+ let substs = infcx.fresh_substs_for_item(DUMMY_SP, def_id);
+ let ty = ty.subst(infcx.tcx, substs);
+ let param_env = param_env.subst(infcx.tcx, substs);
+
+ let impl_substs = infcx.fresh_substs_for_item(DUMMY_SP, impl_def_id);
+ let trait_ref = trait_ref.subst(infcx.tcx, impl_substs);
+
+ // Require the type the impl is implemented on to match
+ // our type, and ignore the impl if there was a mismatch.
+ let cause = traits::ObligationCause::dummy();
+ let eq_result = infcx.at(&cause, param_env)
+ .eq(trait_ref.self_ty(), ty);
+ if let Ok(InferOk { value: (), obligations }) = eq_result {
+ // FIXME(eddyb) ignoring `obligations` might cause false positives.
+ drop(obligations);
+
+ let may_apply = infcx.predicate_may_hold(&traits::Obligation::new(
+ cause.clone(),
+ param_env,
+ trait_ref.to_predicate(),
+ ));
+ if !may_apply {
+ return
}
- });
+ self.cx.generated_synthetics.borrow_mut()
+ .insert((def_id, trait_def_id));
+ let trait_ = hir::TraitRef {
+ path: get_path_for_type(infcx.tcx,
+ trait_def_id,
+ hir::def::Def::Trait),
+ ref_id: ast::DUMMY_NODE_ID,
+ hir_ref_id: hir::DUMMY_HIR_ID,
+ };
+ let provided_trait_methods =
+ infcx.tcx.provided_trait_methods(trait_def_id)
+ .into_iter()
+ .map(|meth| meth.ident.to_string())
+ .collect();
+
+ let ty = self.cx.get_real_ty(def_id, def_ctor, &real_name, generics);
+ let predicates = infcx.tcx.predicates_of(impl_def_id);
+
+ impls.push(Item {
+ source: infcx.tcx.def_span(impl_def_id).clean(self.cx),
+ name: None,
+ attrs: Default::default(),
+ visibility: None,
+ def_id: self.cx.next_def_id(impl_def_id.krate),
+ stability: None,
+ deprecation: None,
+ inner: ImplItem(Impl {
+ unsafety: hir::Unsafety::Normal,
+ generics: (t_generics, &predicates).clean(self.cx),
+ provided_trait_methods,
+ trait_: Some(trait_.clean(self.cx)),
+ for_: ty.clean(self.cx),
+ items: infcx.tcx.associated_items(impl_def_id)
+ .collect::<Vec<_>>()
+ .clean(self.cx),
+ polarity: None,
+ synthetic: false,
+ blanket_impl: Some(infcx.tcx.type_of(impl_def_id)
+ .clean(self.cx)),
+ }),
+ });
+ }
});
- }
+ });
}
impls
}
True,
/// Denies all configurations.
False,
- /// A generic configration option, e.g. `test` or `target_os = "linux"`.
+ /// A generic configuration option, e.g. `test` or `target_os = "linux"`.
Cfg(Symbol, Option<Symbol>),
/// Negate a configuration requirement, i.e. `not(x)`.
Not(Box<Cfg>),
let mut ty_substs = FxHashMap();
let mut lt_substs = FxHashMap();
provided_params.with_generic_args(|generic_args| {
- let mut indices = ty::GenericParamCount {
- lifetimes: 0,
- types: 0
- };
+ let mut indices: GenericParamCount = Default::default();
for param in generics.params.iter() {
match param.kind {
hir::GenericParamKind::Lifetime { .. } => {
// the access levels from crateanalysis.
pub access_levels: Arc<AccessLevels<DefId>>,
- /// The version of the crate being documented, if given fron the `--crate-version` flag.
+ /// The version of the crate being documented, if given from the `--crate-version` flag.
pub crate_version: Option<String>,
// Private fields only used when initially crawling a crate to build a cache
var themesWidth = null;
+ var titleBeforeSearch = document.title;
+
if (!String.prototype.startsWith) {
String.prototype.startsWith = function(searchString, position) {
position = position || 0;
ev.preventDefault();
addClass(search, "hidden");
removeClass(document.getElementById("main"), "hidden");
+ document.title = titleBeforeSearch;
}
defocusSearchBar();
}
let path = ast::Path { segments: vec![segment], span: DUMMY_SP };
let mut resolver = cx.resolver.borrow_mut();
let mark = Mark::root();
- let res = resolver
- .resolve_macro_to_def_inner(mark, &path, MacroKind::Bang, false);
- if let Ok(def) = res {
+ if let Ok(def) = resolver.resolve_macro_to_def_inner(&path, MacroKind::Bang, mark, &[], false) {
if let SyntaxExtension::DeclMacro { .. } = *resolver.get_macro(def) {
return Some(def);
}
use std::rc::Rc;
use std::sync::Arc;
-impl<
- T: Encodable
-> Encodable for LinkedList<T> {
+impl<T: Encodable> Encodable for LinkedList<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
}
}
-impl<
- K: Encodable + PartialEq + Ord,
- V: Encodable
-> Encodable for BTreeMap<K, V> {
+impl<K, V> Encodable for BTreeMap<K, V>
+ where K: Encodable + PartialEq + Ord,
+ V: Encodable
+{
fn encode<S: Encoder>(&self, e: &mut S) -> Result<(), S::Error> {
e.emit_map(self.len(), |e| {
let mut i = 0;
}
}
-impl<
- K: Decodable + PartialEq + Ord,
- V: Decodable
-> Decodable for BTreeMap<K, V> {
+impl<K, V> Decodable for BTreeMap<K, V>
+ where K: Decodable + PartialEq + Ord,
+ V: Decodable
+{
fn decode<D: Decoder>(d: &mut D) -> Result<BTreeMap<K, V>, D::Error> {
d.read_map(|d, len| {
let mut map = BTreeMap::new();
}
}
-impl<
- T: Encodable + PartialEq + Ord
-> Encodable for BTreeSet<T> {
+impl<T> Encodable for BTreeSet<T>
+ where T: Encodable + PartialEq + Ord
+{
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
let mut i = 0;
}
}
-impl<
- T: Decodable + PartialEq + Ord
-> Decodable for BTreeSet<T> {
+impl<T> Decodable for BTreeSet<T>
+ where T: Decodable + PartialEq + Ord
+{
fn decode<D: Decoder>(d: &mut D) -> Result<BTreeSet<T>, D::Error> {
d.read_seq(|d, len| {
let mut set = BTreeSet::new();
}
}
+#[inline]
pub fn write_signed_leb128(out: &mut Vec<u8>, value: i128) {
write_signed_leb128_to(value, |v| write_to_vec(out, v))
}
self.data
}
+ #[inline]
pub fn emit_raw_bytes(&mut self, s: &[u8]) {
self.data.extend_from_slice(s);
}
self.position += bytes;
}
+ #[inline]
pub fn read_raw_bytes(&mut self, s: &mut [u8]) -> Result<(), String> {
let start = self.position;
let end = start + s.len();
Ok(Cow::Borrowed(s))
}
+ #[inline]
fn error(&mut self, err: &str) -> Self::Error {
err.to_string()
}
self.emit_enum("Option", f)
}
+ #[inline]
fn emit_option_none(&mut self) -> Result<(), Self::Error> {
self.emit_enum_variant("None", 0, 0, |_| Ok(()))
}
}
impl<T:Encodable> Encodable for Rc<T> {
- #[inline]
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
(**self).encode(s)
}
}
impl<T:Decodable> Decodable for Rc<T> {
- #[inline]
fn decode<D: Decoder>(d: &mut D) -> Result<Rc<T>, D::Error> {
Ok(Rc::new(Decodable::decode(d)?))
}
}
}
-impl<T:Decodable+ToOwned> Decodable for Cow<'static, [T]> where [T]: ToOwned<Owned = Vec<T>> {
+impl<T:Decodable+ToOwned> Decodable for Cow<'static, [T]>
+ where [T]: ToOwned<Owned = Vec<T>>
+{
fn decode<D: Decoder>(d: &mut D) -> Result<Cow<'static, [T]>, D::Error> {
d.read_seq(|d, len| {
let mut v = Vec::with_capacity(len);
// make a RawBucket point to invalid memory using safe code.
impl<K, V> RawBucket<K, V> {
unsafe fn hash(&self) -> *mut HashUint {
- self.hash_start.offset(self.idx as isize)
+ self.hash_start.add(self.idx)
}
unsafe fn pair(&self) -> *mut (K, V) {
- self.pair_start.offset(self.idx as isize) as *mut (K, V)
+ self.pair_start.add(self.idx) as *mut (K, V)
}
unsafe fn hash_pair(&self) -> (*mut HashUint, *mut (K, V)) {
(self.hash(), self.pair())
/// This function acquires exclusive access to the task context.
///
/// Panics if no task has been set or if the task context has already been
-/// retrived by a surrounding call to get_task_cx.
+/// retrieved by a surrounding call to get_task_cx.
pub fn get_task_cx<F, R>(f: F) -> R
where
F: FnOnce(&mut task::Context) -> R
// Find the last newline character in the buffer provided. If found then
// we're going to write all the data up to that point and then flush,
- // otherewise we just write the whole block to the underlying writer.
+ // otherwise we just write the whole block to the underlying writer.
let i = match memchr::memrchr(b'\n', buf) {
Some(i) => i,
None => return self.inner.write(buf),
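A minimal sketch (not the std implementation) of the flush-up-to-last-newline strategy described in the comment above: find the last newline, hand back everything up to and including it for an immediate write-and-flush, and keep the remainder buffered.

```rust
// Sketch of the line-buffered write split; `rposition` scans from the
// end of the slice, like memrchr does in the real code.
fn split_at_last_newline(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let i = buf.iter().rposition(|&b| b == b'\n')?;
    Some((&buf[..=i], &buf[i + 1..]))
}

fn main() {
    let (flush_now, keep) = split_at_last_newline(b"ab\ncd\nef").unwrap();
    assert_eq!(flush_now, b"ab\ncd\n");
    assert_eq!(keep, b"ef");
    // No newline at all: write the whole block to the underlying writer.
    assert!(split_at_last_newline(b"no newline here").is_none());
}
```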
#![feature(libc)]
#![feature(link_args)]
#![feature(linkage)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(needs_panic_runtime)]
#![feature(never_type)]
#![cfg_attr(not(stage0), feature(nll))]
#[cfg(test)]
mod tests {
- // test the implementations for the current plattform
+ // test the implementations for the current platform
use super::{memchr, memrchr};
#[test]
/// # Examples
///
/// ```
- /// #![feature(ip_constructors)]
/// use std::net::Ipv4Addr;
///
/// let addr = Ipv4Addr::LOCALHOST;
/// assert_eq!(addr, Ipv4Addr::new(127, 0, 0, 1));
/// ```
- #[unstable(feature = "ip_constructors",
- reason = "requires greater scrutiny before stabilization",
- issue = "44582")]
+ #[stable(feature = "ip_constructors", since = "1.30.0")]
pub const LOCALHOST: Self = Ipv4Addr::new(127, 0, 0, 1);
/// An IPv4 address representing an unspecified address: 0.0.0.0
/// # Examples
///
/// ```
- /// #![feature(ip_constructors)]
/// use std::net::Ipv4Addr;
///
/// let addr = Ipv4Addr::UNSPECIFIED;
/// assert_eq!(addr, Ipv4Addr::new(0, 0, 0, 0));
/// ```
- #[unstable(feature = "ip_constructors",
- reason = "requires greater scrutiny before stabilization",
- issue = "44582")]
+ #[stable(feature = "ip_constructors", since = "1.30.0")]
pub const UNSPECIFIED: Self = Ipv4Addr::new(0, 0, 0, 0);
/// An IPv4 address representing the broadcast address: 255.255.255.255
/// # Examples
///
/// ```
- /// #![feature(ip_constructors)]
/// use std::net::Ipv4Addr;
///
/// let addr = Ipv4Addr::BROADCAST;
/// assert_eq!(addr, Ipv4Addr::new(255, 255, 255, 255));
/// ```
- #[unstable(feature = "ip_constructors",
- reason = "requires greater scrutiny before stabilization",
- issue = "44582")]
+ #[stable(feature = "ip_constructors", since = "1.30.0")]
pub const BROADCAST: Self = Ipv4Addr::new(255, 255, 255, 255);
/// Returns the four eight-bit integers that make up this address.
/// # Examples
///
/// ```
- /// #![feature(ip_constructors)]
/// use std::net::Ipv6Addr;
///
/// let addr = Ipv6Addr::LOCALHOST;
/// assert_eq!(addr, Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1));
/// ```
- #[unstable(feature = "ip_constructors",
- reason = "requires greater scrutiny before stabilization",
- issue = "44582")]
+ #[stable(feature = "ip_constructors", since = "1.30.0")]
pub const LOCALHOST: Self = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1);
/// An IPv6 address representing the unspecified address: `::`
/// # Examples
///
/// ```
- /// #![feature(ip_constructors)]
/// use std::net::Ipv6Addr;
///
/// let addr = Ipv6Addr::UNSPECIFIED;
/// assert_eq!(addr, Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0));
/// ```
- #[unstable(feature = "ip_constructors",
- reason = "requires greater scrutiny before stabilization",
- issue = "44582")]
+ #[stable(feature = "ip_constructors", since = "1.30.0")]
pub const UNSPECIFIED: Self = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0);
/// Returns the eight 16-bit segments that make up this address.
/// happens-before relation between the closure and code executing after the
/// return).
///
- /// If the given closure recusively invokes `call_once` on the same `Once`
+ /// If the given closure recursively invokes `call_once` on the same `Once`
/// instance, the exact behavior is not specified; allowed outcomes are
/// a panic or a deadlock.
///
use sync::atomic::{AtomicBool, Ordering};
// Kernels prior to 4.5 don't have copy_file_range
- // We store the availability in a global to avoid unneccessary syscalls
+ // We store the availability in a global to avoid unnecessary syscalls
static HAS_COPY_FILE_RANGE: AtomicBool = AtomicBool::new(true);
unsafe fn copy_file_range(
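The comment above describes probing for a syscall once and caching the result in a global. A minimal sketch of that pattern, with a hypothetical `try_fast_copy` standing in for the `copy_file_range` probe:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Assume the fast path exists until a probe says otherwise.
static HAS_FAST_COPY: AtomicBool = AtomicBool::new(true);

// Hypothetical probe; here it always reports "unsupported",
// as on a pre-4.5 kernel.
fn try_fast_copy() -> Option<u64> {
    None
}

fn copy_bytes(fallback: u64) -> u64 {
    if HAS_FAST_COPY.load(Ordering::Relaxed) {
        if let Some(n) = try_fast_copy() {
            return n;
        }
        // Remember the failure so later calls skip the probe entirely,
        // avoiding one unnecessary syscall per copy.
        HAS_FAST_COPY.store(false, Ordering::Relaxed);
    }
    fallback
}

fn main() {
    assert_eq!(copy_bytes(7), 7);
    // After the first call the flag is cleared.
    assert!(!HAS_FAST_COPY.load(Ordering::Relaxed));
    assert_eq!(copy_bytes(9), 9);
}
```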
#[cfg(not(target_os = "linux"))]
const SOCK_CLOEXEC: c_int = 0;
-// Another conditional contant for name resolution: Macos et iOS use
+// Another conditional constant for name resolution: macOS and iOS use
// SO_NOSIGPIPE as a setsockopt flag to disable SIGPIPE emission on socket.
// Other platforms do otherwise.
#[cfg(target_vendor = "apple")]
if v.capacity() == v.len() {
v.reserve(1);
}
- slice::from_raw_parts_mut(v.as_mut_ptr().offset(v.len() as isize),
+ slice::from_raw_parts_mut(v.as_mut_ptr().add(v.len()),
v.capacity() - v.len())
}
pub unsafe fn slice_unchecked(s: &Wtf8, begin: usize, end: usize) -> &Wtf8 {
    // the memory layout of &[u8] and &Wtf8 is the same
Wtf8::from_bytes_unchecked(slice::from_raw_parts(
- s.bytes.as_ptr().offset(begin as isize),
+ s.bytes.as_ptr().add(begin),
end - begin
))
}
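The hunks above replace `offset(n as isize)` with `add(n)`. A small standalone sketch of why the two are interchangeable for non-negative offsets:

```rust
fn main() {
    let v = [10u8, 20, 30, 40];
    let p = v.as_ptr();
    unsafe {
        // `add(n)` takes a usize directly, so the lossy `n as isize`
        // cast in the old code is no longer needed.
        assert_eq!(*p.add(2), *p.offset(2));
        assert_eq!(*p.add(2), 30);
    }
}
```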
```ignore (limited to a warning during 2018 edition development)
#![feature(rust_2018_preview)]
-#![feature(raw_identifiers)] // error: the feature `raw_identifiers` is
- // included in the Rust 2018 edition
+#![feature(impl_header_lifetime_elision)] // error: the feature
+ // `impl_header_lifetime_elision` is
+ // included in the Rust 2018 edition
```
"##,
fn find_legacy_attr_invoc(&mut self, attrs: &mut Vec<Attribute>, allow_derive: bool)
-> Option<Attribute>;
- fn resolve_invoc(&mut self, invoc: &Invocation, scope: Mark, force: bool)
- -> Result<Option<Lrc<SyntaxExtension>>, Determinacy>;
- fn resolve_macro(&mut self, scope: Mark, path: &ast::Path, kind: MacroKind, force: bool)
- -> Result<Lrc<SyntaxExtension>, Determinacy>;
+ fn resolve_macro_invocation(&mut self, invoc: &Invocation, scope: Mark, force: bool)
+ -> Result<Option<Lrc<SyntaxExtension>>, Determinacy>;
+ fn resolve_macro_path(&mut self, path: &ast::Path, kind: MacroKind, scope: Mark,
+ derives_in_scope: &[ast::Path], force: bool)
+ -> Result<Lrc<SyntaxExtension>, Determinacy>;
+
fn check_unused_macros(&self);
}
fn resolve_imports(&mut self) {}
fn find_legacy_attr_invoc(&mut self, _attrs: &mut Vec<Attribute>, _allow_derive: bool)
-> Option<Attribute> { None }
- fn resolve_invoc(&mut self, _invoc: &Invocation, _scope: Mark, _force: bool)
- -> Result<Option<Lrc<SyntaxExtension>>, Determinacy> {
+ fn resolve_macro_invocation(&mut self, _invoc: &Invocation, _scope: Mark, _force: bool)
+ -> Result<Option<Lrc<SyntaxExtension>>, Determinacy> {
Err(Determinacy::Determined)
}
- fn resolve_macro(&mut self, _scope: Mark, _path: &ast::Path, _kind: MacroKind,
- _force: bool) -> Result<Lrc<SyntaxExtension>, Determinacy> {
+ fn resolve_macro_path(&mut self, _path: &ast::Path, _kind: MacroKind, _scope: Mark,
+ _derives_in_scope: &[ast::Path], _force: bool)
+ -> Result<Lrc<SyntaxExtension>, Determinacy> {
Err(Determinacy::Determined)
}
fn check_unused_macros(&self) {}
InvocationKind::Derive { ref path, .. } => path.span,
}
}
-
- pub fn path(&self) -> Option<&Path> {
- match self.kind {
- InvocationKind::Bang { ref mac, .. } => Some(&mac.node.path),
- InvocationKind::Attr { attr: Some(ref attr), .. } => Some(&attr.path),
- InvocationKind::Attr { attr: None, .. } => None,
- InvocationKind::Derive { ref path, .. } => Some(path),
- }
- }
}
pub struct MacroExpander<'a, 'b:'a> {
        // we'll be able to immediately resolve most of the imported macros.
self.resolve_imports();
- // Resolve paths in all invocations and produce ouput expanded fragments for them, but
+ // Resolve paths in all invocations and produce output expanded fragments for them, but
// do not insert them into our input AST fragment yet, only store in `expanded_fragments`.
// The output fragments also go through expansion recursively until no invocations are left.
// Unresolved macros produce dummy outputs as a recovery measure.
let scope =
if self.monotonic { invoc.expansion_data.mark } else { orig_expansion_data.mark };
- let ext = match self.cx.resolver.resolve_invoc(&invoc, scope, force) {
+ let ext = match self.cx.resolver.resolve_macro_invocation(&invoc, scope, force) {
Ok(ext) => Some(ext),
Err(Determinacy::Determined) => None,
Err(Determinacy::Undetermined) => {
for path in &traits {
let mark = Mark::fresh(self.cx.current_expansion.mark);
derives.push(mark);
- let item = match self.cx.resolver.resolve_macro(
- Mark::root(), path, MacroKind::Derive, false) {
+ let item = match self.cx.resolver.resolve_macro_path(
+ path, MacroKind::Derive, Mark::root(), &[], false) {
Ok(ext) => match *ext {
BuiltinDerive(..) => item_with_markers.clone(),
_ => item.clone(),
// A queue of possible matcher positions. We initialize it with the matcher position in which
// the "dot" is before the first token of the first token tree in `ms`. `inner_parse_loop` then
- // processes all of these possible matcher positions and produces posible next positions into
+ // processes all of these possible matcher positions and produces possible next positions into
// `next_items`. After some post-processing, the contents of `next_items` replenish `cur_items`
// and we start over again.
//
),
);
}
- // If there are no posible next positions AND we aren't waiting for the black-box parser,
+ // If there are no possible next positions AND we aren't waiting for the black-box parser,
        // then there is a syntax error.
else if bb_items.is_empty() && next_items.is_empty() {
return Failure(parser.span, parser.token);
frag_span: Span) -> bool {
match frag_name {
"item" | "block" | "stmt" | "expr" | "pat" | "lifetime" |
- "path" | "ty" | "ident" | "meta" | "tt" | "" => true,
+ "path" | "ty" | "ident" | "meta" | "tt" | "vis" | "" => true,
"literal" => {
if !features.macro_literal_matcher &&
!attr::contains_name(attrs, "allow_internal_unstable") {
}
true
},
- "vis" => {
- if !features.macro_vis_matcher &&
- !attr::contains_name(attrs, "allow_internal_unstable") {
- let explain = feature_gate::EXPLAIN_VIS_MATCHER;
- emit_feature_err(sess,
- "macro_vis_matcher",
- frag_span,
- GateIssue::Language,
- explain);
- }
- true
- },
_ => false,
}
}
}
// `tree` is followed by an `ident`. This could be `$meta_var` or the `$crate` special
- // metavariable that names the crate of the invokation.
+ // metavariable that names the crate of the invocation.
Some(tokenstream::TokenTree::Token(ident_span, ref token)) if token.is_ident() => {
let (ident, is_raw) = token.ident().unwrap();
let span = ident_span.with_lo(span.lo());
// Allows overlapping impls of marker traits
(active, overlapping_marker_traits, "1.18.0", Some(29864), None),
- // Allows use of the :vis macro fragment specifier
- (active, macro_vis_matcher, "1.18.0", Some(41022), None),
-
// rustc internal
(active, abi_thiscall, "1.19.0", None, None),
// `use path as _;` and `extern crate c as _;`
(active, underscore_imports, "1.26.0", Some(48216), None),
- // Allows keywords to be escaped for use as identifiers
- (active, raw_identifiers, "1.26.0", Some(48589), Some(Edition::Edition2018)),
-
// Allows macro invocations in `extern {}` blocks
(active, macros_in_extern, "1.27.0", Some(49476), None),
// 'a: { break 'a; }
(active, label_break_value, "1.28.0", Some(48594), None),
+ // Integer match exhaustiveness checking
+ (active, exhaustive_integer_patterns, "1.30.0", Some(50907), None),
+
// #[panic_implementation]
(active, panic_implementation, "1.28.0", Some(44489), None),
(accepted, repr_transparent, "1.28.0", Some(43036), None),
// Defining procedural macros in `proc-macro` crates
(accepted, proc_macro, "1.29.0", Some(38356), None),
+ // Allows use of the :vis macro fragment specifier
+ (accepted, macro_vis_matcher, "1.29.0", Some(41022), None),
// Allows importing and reexporting macros with `use`,
// enables macro modularization in general.
(accepted, use_extern_macros, "1.30.0", Some(35896), None),
+ // Allows keywords to be escaped for use as identifiers
+ (accepted, raw_identifiers, "1.30.0", Some(48589), None),
);
// If you change this, please modify src/doc/unstable-book as well. You must
pub const EXPLAIN_DERIVE_UNDERSCORE: &'static str =
"attributes of the form `#[derive_*]` are reserved for the compiler";
-pub const EXPLAIN_VIS_MATCHER: &'static str =
- ":vis fragment specifier is experimental and subject to change";
-
pub const EXPLAIN_LITERAL_MATCHER: &'static str =
":literal fragment specifier is experimental and subject to change";
plugin_attributes,
};
- if !features.raw_identifiers {
- for &span in sess.raw_identifier_spans.borrow().iter() {
- if !span.allows_unstable() {
- gate_feature!(&ctx, raw_identifiers, span,
- "raw identifiers are experimental and subject to change"
- );
- }
- }
- }
-
let visitor = &mut PostExpansionVisitor { context: &ctx };
visitor.whole_crate_feature_gates(krate);
visit::walk_crate(visitor, krate);
fn byte_str_lit(lit: &str) -> Lrc<Vec<u8>> {
let mut res = Vec::with_capacity(lit.len());
- let error = |i| format!("lexer should have rejected {} at {}", lit, i);
+ let error = |i| panic!("lexer should have rejected {} at {}", lit, i);
/// Eat everything up to a non-whitespace
fn eat<I: Iterator<Item=(usize, u8)>>(it: &mut iter::Peekable<I>) {
loop {
match chars.next() {
Some((i, b'\\')) => {
- let em = error(i);
- match chars.peek().expect(&em).1 {
+ match chars.peek().unwrap_or_else(|| error(i)).1 {
b'\n' => eat(&mut chars),
b'\r' => {
chars.next();
- if chars.peek().expect(&em).1 != b'\n' {
+ if chars.peek().unwrap_or_else(|| error(i)).1 != b'\n' {
panic!("lexer accepted bare CR");
}
eat(&mut chars);
}
},
Some((i, b'\r')) => {
- let em = error(i);
- if chars.peek().expect(&em).1 != b'\n' {
+ if chars.peek().unwrap_or_else(|| error(i)).1 != b'\n' {
panic!("lexer accepted bare CR");
}
chars.next();
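The change above swaps an eagerly formatted `expect` message for `unwrap_or_else` with a panicking closure. A sketch of why that works: the closure's `!` return type coerces to whatever item type `unwrap_or_else` expects, so the error message is only built on the failure path.

```rust
fn main() {
    // A diverging closure: its `!` return type coerces to the peeked
    // item reference, and the message is formatted only if `peek`
    // actually returns `None`.
    let bail = |i: usize| -> ! { panic!("lexer should have rejected input at {}", i) };
    let mut chars = b"ab".iter().cloned().enumerate().peekable();
    let (i, b) = *chars.peek().unwrap_or_else(|| bail(0));
    assert_eq!((i, b), (0, b'a'));
}
```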
if arm_start_lines.lines[0].end_col == expr_lines.lines[0].end_col
&& expr_lines.lines.len() == 2
&& self.token == token::FatArrow => {
- // We check wether there's any trailing code in the parse span, if there
- // isn't, we very likely have the following:
+ // We check whether there's any trailing code in the parse span,
+ // if there isn't, we very likely have the following:
//
// X | &Y => "y"
// | -- - missing comma
}
/// A wrapper around `parse_pat` with some special error handling for the
- /// "top-level" patterns in a match arm, `for` loop, `let`, &c. (in contast
+ /// "top-level" patterns in a match arm, `for` loop, `let`, &c. (in contrast
/// to subpatterns within such).
fn parse_top_level_pat(&mut self) -> PResult<'a, P<Pat>> {
let pat = self.parse_pat()?;
// If `break_on_semi` is `Break`, then we will stop consuming tokens after
// finding (and consuming) a `;` outside of `{}` or `[]` (note that this is
// approximate - it can mean we break too early due to macros, but that
- // shoud only lead to sub-optimal recovery, not inaccurate parsing).
+ // should only lead to sub-optimal recovery, not inaccurate parsing).
//
// If `break_on_block` is `Break`, then we will stop consuming tokens
// after finding (and consuming) a brace-delimited block.
fn parse_generic_bounds_common(&mut self, allow_plus: bool) -> PResult<'a, GenericBounds> {
let mut bounds = Vec::new();
loop {
- // This needs to be syncronized with `Token::can_begin_bound`.
+ // This needs to be synchronized with `Token::can_begin_bound`.
let is_bound_start = self.check_path() || self.check_lifetime() ||
self.check(&token::Question) ||
self.check_keyword(keywords::For) ||
invalid_refs: Vec<(usize, usize)>,
/// Spans of all the formatting arguments, in order.
arg_spans: Vec<Span>,
- /// Wether this formatting string is a literal or it comes from a macro.
+ /// Whether this formatting string is a literal or it comes from a macro.
is_literal: bool,
}
});
    // This is safe because the interner keeps the string alive until it is dropped.
// We can access it because we know the interner is still alive since we use a
- // scoped thread local to access it, and it was alive at the begining of this scope
+ // scoped thread local to access it, and it was alive at the beginning of this scope
unsafe { f(&*str) }
}
}
#if LLVM_VERSION_GE(7, 0)
- unwrap(Target)->addPassesToEmitFile(*PM, OS, nullptr, FileType, false);
+ buffer_ostream BOS(OS);
+ unwrap(Target)->addPassesToEmitFile(*PM, BOS, nullptr, FileType, false);
#else
unwrap(Target)->addPassesToEmitFile(*PM, OS, FileType, false);
#endif
let _: (char, u32) = Trait::without_default_impl(0);
// Currently, no object code is generated for trait methods with default
- // implemenations, unless they are actually called from somewhere. Therefore
+ // implementations, unless they are actually called from somewhere. Therefore
// we cannot import the implementations and have to create our own inline.
//~ MONO_ITEM fn cgu_export_trait_method::Trait[0]::with_default_impl[0]<u32>
let _ = Trait::with_default_impl(0u32);
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -O
+
+// A drop([...].clone()) sequence on an Rc should be a no-op
+// In particular, no call to __rust_dealloc should be emitted
+#![crate_type = "lib"]
+use std::rc::Rc;
+
+pub fn foo(t: &Rc<Vec<usize>>) {
+// CHECK-NOT: __rust_dealloc
+ drop(t.clone());
+}
#![feature(plugin_registrar, rustc_private)]
#![feature(box_syntax)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
#[macro_use] extern crate rustc;
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
// Load rustc as a plugin to get macros
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
extern crate syntax;
// except according to those terms.
// This crate attempts to enumerate the various scenarios for how a
-// type can define fields and methods with various visiblities and
+// type can define fields and methods with various visibilities and
// stabilities.
//
// The basic stability pattern in this file has four cases:
//
// However, since stability attributes can only be observed in
// cross-crate linkage scenarios, there is little reason to take the
-// cross-product (4 stability cases * 4 visiblity cases), because the
+// cross-product (4 stability cases * 4 visibility cases), because the
// first three visibility cases cannot be accessed outside this crate,
// and therefore stability is only relevant when the visibility is pub
// to the whole universe.
use foo::*;
-#[foo::a] //~ ERROR: paths of length greater than one
+#[foo::a] //~ ERROR: non-ident attribute macro paths are unstable
fn _test() {}
fn _test_inner() {
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// error-pattern:runned an unexported test
+// error-pattern:ran an unexported test
// compile-flags:--test
// check-stdout
#[test]
fn unexported() {
- panic!("runned an unexported test");
+ panic!("ran an unexported test");
}
}
-include ../tools.mk
-# Test that hir-tree output doens't crash and includes
+# Test that hir-tree output doesn't crash and includes
# the string constant we would expect to see.
all:
-include ../tools.mk
-# Test that hir-tree output doens't crash and includes
+# Test that hir-tree output doesn't crash and includes
# the string constant we would expect to see.
all:
LOG := $(TMPDIR)/log.txt
# This test builds a shared object, then an executable that links it as a native
-# rust library (constrast to an rlib). The shared library and executable both
+# rust library (contrast to an rlib). The shared library and executable both
# are compiled with address sanitizer, and we assert that a fault in the cdylib
# is correctly detected.
LOG := $(TMPDIR)/log.txt
# This test builds a shared object, then an executable that links it as a native
-# rust library (constrast to an rlib). The shared library and executable both
+# rust library (contrast to an rlib). The shared library and executable both
# are compiled with address sanitizer, and we assert that a fault in the dylib
# is correctly detected.
#![feature(plugin_registrar, rustc_private)]
#![feature(box_syntax)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
#[macro_use] extern crate rustc;
// except according to those terms.
#![feature(box_syntax, plugin, plugin_registrar, rustc_private)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
#![crate_type = "dylib"]
// edition:2015
-#![feature(raw_identifiers)]
-
#[macro_export]
macro_rules! produces_async {
() => (pub fn async() {})
// RwLock (since we can grab the child pointers in read-only
// mode), but we cannot lock a std::sync::Mutex to guard reading
// from each node via the same pattern, since once you hit the
- // cycle, you'll be trying to acquring the same lock twice.
+ // cycle, you'll be trying to acquire the same lock twice.
// (We deal with this by exiting the traversal early if try_lock fails.)
// Cycle 12: { arc0 -> (arc1, arc2), arc1 -> (), arc2 -> arc0 }, refcells
// edition:2015
// aux-build:edition-kw-macro-2015.rs
-#![feature(raw_identifiers)]
-
#[macro_use]
extern crate edition_kw_macro_2015;
// edition:2015
// aux-build:edition-kw-macro-2018.rs
-#![feature(raw_identifiers)]
-
#[macro_use]
extern crate edition_kw_macro_2018;
}
fn test_once() {
- // Make sure each argument are evaluted only once even though it may be
+ // Make sure each argument is evaluated only once even though it may be
// formatted multiple times
fn foo() -> isize {
static mut FOO: isize = 0;
}
match 'c' {
'a'...'z' => {}
- _ => panic!("should suppport char ranges")
+ _ => panic!("should support char ranges")
}
match -3_isize {
-7...5 => {}
// trailing comma on lifetime bounds
type TypeE = TypeA<'static,>;
-// normal type arugment
+// normal type argument
type TypeF<T> = Box<T>;
// type argument with trailing comma
// Issue 33903:
// Built-in indexing should be used even when the index is not
// trivially an integer
-// Only built-in indexing can be used in constant expresssions
+// Only built-in indexing can be used in constant expressions
const FOO: i32 = [12, 34][0 + 1];
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// Test that we are able to reinitilize box with moved referent
+// Test that we are able to reinitialize box with moved referent
#![feature(nll)]
static mut ORDER: [usize; 3] = [0, 0, 0];
static mut INDEX: usize = 0;
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
//{{{ issue 40569 ==============================================================
// except according to those terms.
#![allow(dead_code, unused_imports)]
-#![feature(macro_vis_matcher, crate_visibility_modifier)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
+#![feature(crate_visibility_modifier)]
/**
Ensure that `:vis` matches can be captured in existing positions, and passed
}
match 'c' {
'a'..='z' => {}
- _ => panic!("should suppport char ranges")
+ _ => panic!("should support char ranges")
}
match -3 {
-7..=5 => {}
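The test above exercises the `..=` syntax that replaces the older `...` spelling in range patterns. A minimal sketch showing that inclusive range patterns match both endpoints:

```rust
fn classify(n: i32) -> &'static str {
    match n {
        // `..=` includes both endpoints of the range.
        -7..=5 => "in range",
        _ => "out of range",
    }
}

fn main() {
    assert_eq!(classify(-7), "in range");
    assert_eq!(classify(5), "in range");
    assert_eq!(classify(6), "out of range");
}
```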
// except according to those terms.
// Regression test for #23698: The reassignment checker only cared
-// about the last assigment in a match arm body
+// about the last assignment in a match arm body
// Use an extra function to make sure no extra assignments
// are introduced by macros in the match statement
let write_len = buf.len();
unsafe {
*self = slice::from_raw_parts_mut(
- self.as_mut_ptr().offset(write_len as isize),
+ self.as_mut_ptr().add(write_len),
self.len() - write_len
);
}
fn main() {
// This can fail if rustc and LLVM disagree on the size of a type.
- // In this case, `Option<Packed<(&(), u32)>>` was erronously not
+ // In this case, `Option<Packed<(&(), u32)>>` was erroneously not
// marked as packed despite needing alignment `1` and containing
// its `&()` discriminant, which has alignment larger than `1`.
sanity_check_size((Some(Packed((&(), 0))), true));
for i in 0..COUNT / 2 {
let (p0, p1, size) = (ascend[2*i], ascend[2*i+1], idx_to_size(i));
for j in 0..size {
- assert_eq!(*p0.offset(j as isize), i as u8);
- assert_eq!(*p1.offset(j as isize), i as u8);
+ assert_eq!(*p0.add(j), i as u8);
+ assert_eq!(*p1.add(j), i as u8);
}
}
}
for i in 0..COUNT / 2 {
let (p0, p1, size) = (ascend[2*i], ascend[2*i+1], idx_to_size(i));
for j in 0..size {
- *p0.offset(j as isize) = i as u8;
- *p1.offset(j as isize) = i as u8;
+ *p0.add(j) = i as u8;
+ *p1.add(j) = i as u8;
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(raw_identifiers)]
-
use std::mem;
#[r#repr(r#C, r#packed)]
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(raw_identifiers)]
-
fn r#fn(r#match: u32) -> u32 {
r#match
}
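These tests now compile without a feature gate because `raw_identifiers` was stabilized. A standalone sketch of the syntax:

```rust
// `r#` escapes a keyword so it can be used as an ordinary identifier,
// e.g. for names coming from another edition or an FFI schema.
fn r#match(r#true: u32) -> u32 {
    r#true + 1
}

fn main() {
    assert_eq!(r#match(41), 42);
}
```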
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(raw_identifiers)]
-
#[derive(Debug, PartialEq, Eq)]
struct IntWrapper(u32);
// except according to those terms.
#![feature(decl_macro)]
-#![feature(raw_identifiers)]
r#macro_rules! r#struct {
($r#struct:expr) => { $r#struct }
let args = unsafe {
(0..argc as usize).map(|i| {
- let ptr = *argv.offset(i as isize) as *const _;
+ let ptr = *argv.add(i) as *const _;
CStr::from_ptr(ptr).to_bytes().to_vec()
}).collect::<Vec<_>>()
};
fn main() {
unsafe {
- // Install signal hander that runs on alternate signal stack.
+ // Install a signal handler that runs on the alternate signal stack.
let mut action: sigaction = std::mem::zeroed();
action.sa_flags = (SA_ONSTACK | SA_SIGINFO) as _;
action.sa_sigaction = signal_handler as sighandler_t;
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_name = "foo"]
+
+// @has foo/struct.S.html '//h3[@id="impl-Into"]//code' 'impl<T, U> Into for T'
+pub struct S2 {}
+mod m {
+ pub struct S {}
+}
+pub use m::*;
#![crate_name = "qwop"]
-/// (writen on a spider's web) Some Macro
+/// (written on a spider's web) Some Macro
#[macro_export]
macro_rules! some_macro {
() => {
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
// Load rustc as a plugin to get macros
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
extern crate syntax;
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![feature(macro_at_most_once_rep)]
extern crate syntax;
//~| WARN this was previously accepted
struct Z;
+fn inner_block() {
+ #[derive(generate_mod::CheckDerive)] //~ WARN cannot find type `FromOutside` in this scope
+ //~| WARN cannot find type `OuterDerive` in this scope
+ //~| WARN this was previously accepted
+ //~| WARN this was previously accepted
+ struct InnerZ;
+}
+
#[derive(generate_mod::CheckDeriveLint)] // OK, lint is suppressed
struct W;
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #50504 <https://github.com/rust-lang/rust/issues/50504>
+warning: cannot find type `FromOutside` in this scope
+ --> $DIR/generate-mod.rs:35:14
+ |
+LL | #[derive(generate_mod::CheckDerive)] //~ WARN cannot find type `FromOutside` in this scope
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ names from parent modules are not accessible without an explicit import
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
+ = note: for more information, see issue #50504 <https://github.com/rust-lang/rust/issues/50504>
+
+warning: cannot find type `OuterDerive` in this scope
+ --> $DIR/generate-mod.rs:35:14
+ |
+LL | #[derive(generate_mod::CheckDerive)] //~ WARN cannot find type `FromOutside` in this scope
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ names from parent modules are not accessible without an explicit import
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
+ = note: for more information, see issue #50504 <https://github.com/rust-lang/rust/issues/50504>
+
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0412`.
--> $DIR/macro-namespace-reserved-2.rs:34:5
|
LL | my_macro!(); //~ ERROR can't use a procedural macro from the same crate that defines it
- | ^^^^^^^^^^^^
+ | ^^^^^^^^
error: can't use a procedural macro from the same crate that defines it
--> $DIR/macro-namespace-reserved-2.rs:37:5
|
LL | my_macro_attr!(); //~ ERROR can't use a procedural macro from the same crate that defines it
- | ^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^
error: can't use a procedural macro from the same crate that defines it
--> $DIR/macro-namespace-reserved-2.rs:40:5
|
LL | MyTrait!(); //~ ERROR can't use a procedural macro from the same crate that defines it
- | ^^^^^^^^^^^
+ | ^^^^^^^
error: can't use a procedural macro from the same crate that defines it
- --> $DIR/macro-namespace-reserved-2.rs:43:1
+ --> $DIR/macro-namespace-reserved-2.rs:43:3
|
LL | #[my_macro] //~ ERROR can't use a procedural macro from the same crate that defines it
- | ^^^^^^^^^^^
+ | ^^^^^^^^
error: can't use a procedural macro from the same crate that defines it
- --> $DIR/macro-namespace-reserved-2.rs:45:1
+ --> $DIR/macro-namespace-reserved-2.rs:45:3
|
LL | #[my_macro_attr] //~ ERROR can't use a procedural macro from the same crate that defines it
- | ^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^
error: can't use a procedural macro from the same crate that defines it
- --> $DIR/macro-namespace-reserved-2.rs:47:1
+ --> $DIR/macro-namespace-reserved-2.rs:47:3
|
LL | #[MyTrait] //~ ERROR can't use a procedural macro from the same crate that defines it
- | ^^^^^^^^^^
+ | ^^^^^^^
error: can't use a procedural macro from the same crate that defines it
--> $DIR/macro-namespace-reserved-2.rs:50:10
// compile-pass
-#![feature(raw_identifiers)]
-//~^ WARN the feature `raw_identifiers` is included in the Rust 2018 edition
+#![feature(impl_header_lifetime_elision)]
+//~^ WARN the feature `impl_header_lifetime_elision` is included in the Rust 2018 edition
#![feature(rust_2018_preview)]
-fn main() {
- let foo = 0;
- let bar = r#foo;
-}
+fn main() {}
-warning[E0705]: the feature `raw_identifiers` is included in the Rust 2018 edition
+warning[E0705]: the feature `impl_header_lifetime_elision` is included in the Rust 2018 edition
--> $DIR/E0705.rs:13:12
|
-LL | #![feature(raw_identifiers)]
- | ^^^^^^^^^^^^^^^
+LL | #![feature(impl_header_lifetime_elision)]
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// Check that the user gets an errror if they omit a binding from an
+// Check that the user gets an error if they omit a binding from an
// object type.
pub trait Foo {
fn foo<'a>() {
let _ = S::new::<isize,f64>(1, 1.0);
- //~^ ERROR too many type parameters provided
+ //~^ ERROR wrong number of type arguments
let _ = S::<'a,isize>::new::<f64>(1, 1.0);
- //~^ ERROR wrong number of lifetime parameters
+ //~^ ERROR wrong number of lifetime arguments
let _: S2 = Trait::new::<isize,f64>(1, 1.0);
- //~^ ERROR too many type parameters provided
+ //~^ ERROR wrong number of type arguments
let _: S2 = Trait::<'a,isize>::new::<f64>(1, 1.0);
- //~^ ERROR too many lifetime parameters provided
+ //~^ ERROR wrong number of lifetime arguments
}
fn main() {}
-error[E0087]: too many type parameters provided: expected at most 1 type parameter, found 2 type parameters
+error[E0087]: wrong number of type arguments: expected 1, found 2
--> $DIR/bad-mid-path-type-params.rs:40:28
|
LL | let _ = S::new::<isize,f64>(1, 1.0);
- | ^^^ expected 1 type parameter
+ | ^^^ unexpected type argument
-error[E0107]: wrong number of lifetime parameters: expected 0, found 1
- --> $DIR/bad-mid-path-type-params.rs:43:13
+error[E0107]: wrong number of lifetime arguments: expected 0, found 1
+ --> $DIR/bad-mid-path-type-params.rs:43:17
|
LL | let _ = S::<'a,isize>::new::<f64>(1, 1.0);
- | ^^^^^^^^^^^^^^^^^^^^^^^^^ unexpected lifetime parameter
+ | ^^ unexpected lifetime argument
-error[E0087]: too many type parameters provided: expected at most 1 type parameter, found 2 type parameters
+error[E0087]: wrong number of type arguments: expected 1, found 2
--> $DIR/bad-mid-path-type-params.rs:46:36
|
LL | let _: S2 = Trait::new::<isize,f64>(1, 1.0);
- | ^^^ expected 1 type parameter
+ | ^^^ unexpected type argument
-error[E0088]: too many lifetime parameters provided: expected at most 0 lifetime parameters, found 1 lifetime parameter
+error[E0088]: wrong number of lifetime arguments: expected 0, found 1
--> $DIR/bad-mid-path-type-params.rs:49:25
|
LL | let _: S2 = Trait::<'a,isize>::new::<f64>(1, 1.0);
- | ^^ expected 0 lifetime parameters
+ | ^^ unexpected lifetime argument
error: aborting due to 4 previous errors
// revisions: ast migrate nll
// Since we are testing nll (and migration) explicitly as a separate
-// revisions, dont worry about the --compare-mode=nll on this test.
+// revisions, don't worry about the --compare-mode=nll on this test.
// ignore-compare-mode-nll
--> $DIR/cfg-attr-unknown-attribute-macro-expansion.rs:13:27
|
LL | #[cfg_attr(all(), unknown)] //~ ERROR `unknown` is currently unknown
- | ^^^^^^^^
+ | ^^^^^^^
...
LL | foo!();
| ------- in this macro invocation
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ let x = Some(1);
+ let y = x.or_else(4);
+ //~^ ERROR expected a `std::ops::FnOnce<()>` closure, found `{integer}`
+}
--- /dev/null
+error[E0277]: expected a `std::ops::FnOnce<()>` closure, found `{integer}`
+ --> $DIR/closure-expected.rs:13:15
+ |
+LL | let y = x.or_else(4);
+ | ^^^^^^^ expected an `FnOnce<()>` closure, found `{integer}`
+ |
+ = help: the trait `std::ops::FnOnce<()>` is not implemented for `{integer}`
+ = note: wrap the `{integer}` in a closure with no arguments: `|| { /* code */ }`
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0277`.
+++ /dev/null
-// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// compile-pass
-
-macro_rules! m {
- () => {{
- fn f(_: impl Sized) {}
- f
- }}
-}
-
-fn main() {
- fn f() -> impl Sized {};
- m!()(f());
-}
fn main() {
S(&0, &0); // OK
S::<'static>(&0, &0);
- //~^ ERROR expected 2 lifetime parameters, found 1 lifetime parameter
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 1
S::<'static, 'static, 'static>(&0, &0);
- //~^ ERROR expected at most 2 lifetime parameters, found 3 lifetime parameters
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 3
E::V(&0); // OK
E::V::<'static>(&0);
- //~^ ERROR expected 2 lifetime parameters, found 1 lifetime parameter
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 1
E::V::<'static, 'static, 'static>(&0);
- //~^ ERROR expected at most 2 lifetime parameters, found 3 lifetime parameters
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 3
}
-error[E0090]: too few lifetime parameters provided: expected 2 lifetime parameters, found 1 lifetime parameter
+error[E0090]: wrong number of lifetime arguments: expected 2, found 1
--> $DIR/constructor-lifetime-args.rs:27:5
|
LL | S::<'static>(&0, &0);
- | ^^^^^^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^^^^^^ expected 2 lifetime arguments
-error[E0088]: too many lifetime parameters provided: expected at most 2 lifetime parameters, found 3 lifetime parameters
+error[E0088]: wrong number of lifetime arguments: expected 2, found 3
--> $DIR/constructor-lifetime-args.rs:29:27
|
LL | S::<'static, 'static, 'static>(&0, &0);
- | ^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^ unexpected lifetime argument
-error[E0090]: too few lifetime parameters provided: expected 2 lifetime parameters, found 1 lifetime parameter
+error[E0090]: wrong number of lifetime arguments: expected 2, found 1
--> $DIR/constructor-lifetime-args.rs:32:5
|
LL | E::V::<'static>(&0);
- | ^^^^^^^^^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^^^^^^^^^ expected 2 lifetime arguments
-error[E0088]: too many lifetime parameters provided: expected at most 2 lifetime parameters, found 3 lifetime parameters
+error[E0088]: wrong number of lifetime arguments: expected 2, found 3
--> $DIR/constructor-lifetime-args.rs:34:30
|
LL | E::V::<'static, 'static, 'static>(&0);
- | ^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^ unexpected lifetime argument
error: aborting due to 4 previous errors
LL | const Z2: i32 = unsafe { *(42 as *const i32) }; //~ ERROR cannot be used
| ^^^^^^^^^^^^^^^^^^^^^^^^^-------------------^^^
| |
- | tried to access memory with alignment 2, but alignment 4 is required
+ | a memory access tried to interpret some bytes as a pointer
error: this constant cannot be used
--> $DIR/const_raw_ptr_ops.rs:27:1
LL | | Union { usize: &BAR }.foo,
LL | | Union { usize: &BAR }.bar,
LL | | )};
- | |___^ type validation failed: encountered 5 at (*.1).TAG, but expected something in the range 42..=99
+ | |___^ type validation failed: encountered 5 at .1.<deref>.<enum-tag>, but expected something in the range 42..=99
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-pass
+
+macro_rules! m {
+ () => {{
+ fn f(_: impl Sized) {}
+ f
+ }}
+}
+
+fn main() {
+ fn f() -> impl Sized {};
+ m!()(f());
+}
+++ /dev/null
-// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-#[repr(usize)]
-#[derive(Copy, Clone)]
-enum Enum {
- A = 0,
-}
-
-union Foo {
- a: &'static u8,
- b: Enum,
-}
-
-// A pointer is guaranteed non-null
-const BAD_ENUM: Enum = unsafe { Foo { a: &1 }.b};
-//~^ ERROR this constant likely exhibits undefined behavior
-
-fn main() {
-}
+++ /dev/null
-error[E0080]: this constant likely exhibits undefined behavior
- --> $DIR/ub-enum-ptr.rs:23:1
- |
-LL | const BAD_ENUM: Enum = unsafe { Foo { a: &1 }.b};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered pointer at .TAG, but expected something in the range 0..=0
- |
- = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0080`.
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[repr(usize)]
+#[derive(Copy, Clone)]
+enum Enum {
+ A = 0,
+}
+union TransmuteEnum {
+ a: &'static u8,
+ b: Enum,
+}
+
+// A pointer is guaranteed non-null
+const BAD_ENUM: Enum = unsafe { TransmuteEnum { a: &1 }.b };
+//~^ ERROR this constant likely exhibits undefined behavior
+
+// Invalid enum discriminant
+#[repr(usize)]
+#[derive(Copy, Clone)]
+enum Enum2 {
+ A = 2,
+}
+union TransmuteEnum2 {
+ a: usize,
+ b: Enum2,
+}
+const BAD_ENUM2 : Enum2 = unsafe { TransmuteEnum2 { a: 0 }.b };
+//~^ ERROR this constant likely exhibits undefined behavior
+
+// Invalid enum field content (mostly to test printing of paths for enum tuple
+// variants and tuples).
+union TransmuteChar {
+ a: u32,
+ b: char,
+}
+// Need to create something which does not clash with enum layout optimizations.
+const BAD_ENUM_CHAR : Option<(char, char)> = Some(('x', unsafe { TransmuteChar { a: !0 }.b }));
+//~^ ERROR this constant likely exhibits undefined behavior
+
+fn main() {
+}
--- /dev/null
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/ub-enum.rs:22:1
+ |
+LL | const BAD_ENUM: Enum = unsafe { TransmuteEnum { a: &1 }.b };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered pointer at .<enum-tag>, but expected something in the range 0..=0
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/ub-enum.rs:35:1
+ |
+LL | const BAD_ENUM2 : Enum2 = unsafe { TransmuteEnum2 { a: 0 }.b };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 0 at .<enum-tag>, but expected something in the range 2..=2
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/ub-enum.rs:45:1
+ |
+LL | const BAD_ENUM_CHAR : Option<(char, char)> = Some(('x', unsafe { TransmuteChar { a: !0 }.b }));
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered character at .Some.0.1, but expected a valid unicode codepoint
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error: aborting due to 3 previous errors
+
+For more information about this error, try `rustc --explain E0080`.
error[E0658]: The attribute `foo` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/custom_attribute.rs:13:1
+ --> $DIR/custom_attribute.rs:13:3
|
LL | #[foo] //~ ERROR The attribute `foo`
- | ^^^^^^
+ | ^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `foo` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/custom_attribute.rs:15:5
+ --> $DIR/custom_attribute.rs:15:7
|
LL | #[foo] //~ ERROR The attribute `foo`
- | ^^^^^^
+ | ^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `foo` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/custom_attribute.rs:17:5
+ --> $DIR/custom_attribute.rs:17:7
|
LL | #[foo] //~ ERROR The attribute `foo`
- | ^^^^^^
+ | ^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// We need to opt inot the `!` feature in order to trigger the
+// We need to opt into the `!` feature in order to trigger the
// requirement that this is testing.
#![feature(never_type)]
// edition:2015
-#![feature(raw_identifiers)]
#![allow(async_idents)]
#[macro_export]
// aux-build:edition-kw-macro-2015.rs
// compile-pass
-#![feature(raw_identifiers)]
#![allow(async_idents)]
#[macro_use]
// edition:2015
// aux-build:edition-kw-macro-2015.rs
-#![feature(raw_identifiers)]
-
#[macro_use]
extern crate edition_kw_macro_2015;
error: no rules expected the token `r#async`
- --> $DIR/edition-keywords-2015-2015-parsing.rs:24:31
+ --> $DIR/edition-keywords-2015-2015-parsing.rs:22:31
|
LL | r#async = consumes_async!(r#async); //~ ERROR no rules expected the token `r#async`
| ^^^^^^^
error: no rules expected the token `async`
- --> $DIR/edition-keywords-2015-2015-parsing.rs:25:35
+ --> $DIR/edition-keywords-2015-2015-parsing.rs:23:35
|
LL | r#async = consumes_async_raw!(async); //~ ERROR no rules expected the token `async`
| ^^^^^
// edition:2015
// aux-build:edition-kw-macro-2018.rs
-#![feature(raw_identifiers)]
-
#[macro_use]
extern crate edition_kw_macro_2018;
error: expected identifier, found reserved keyword `async`
- --> $DIR/edition-keywords-2015-2018-expansion.rs:20:5
+ --> $DIR/edition-keywords-2015-2018-expansion.rs:18:5
|
LL | produces_async! {} //~ ERROR expected identifier, found reserved keyword
| ^^^^^^^^^^^^^^^^^^ expected identifier, found reserved keyword
// edition:2015
// aux-build:edition-kw-macro-2018.rs
-#![feature(raw_identifiers)]
-
#[macro_use]
extern crate edition_kw_macro_2018;
error: no rules expected the token `r#async`
- --> $DIR/edition-keywords-2015-2018-parsing.rs:24:31
+ --> $DIR/edition-keywords-2015-2018-parsing.rs:22:31
|
LL | r#async = consumes_async!(r#async); //~ ERROR no rules expected the token `r#async`
| ^^^^^^^
error: no rules expected the token `async`
- --> $DIR/edition-keywords-2015-2018-parsing.rs:25:35
+ --> $DIR/edition-keywords-2015-2018-parsing.rs:23:35
|
LL | r#async = consumes_async_raw!(async); //~ ERROR no rules expected the token `async`
| ^^^^^
fn bar<T>() {}
fn main() {
- foo::<f64>(); //~ ERROR expected at most 0 type parameters, found 1 type parameter [E0087]
+ foo::<f64>(); //~ ERROR wrong number of type arguments: expected 0, found 1 [E0087]
- bar::<f64, u64>(); //~ ERROR expected at most 1 type parameter, found 2 type parameters [E0087]
+ bar::<f64, u64>(); //~ ERROR wrong number of type arguments: expected 1, found 2 [E0087]
}
-error[E0087]: too many type parameters provided: expected at most 0 type parameters, found 1 type parameter
+error[E0087]: wrong number of type arguments: expected 0, found 1
--> $DIR/E0087.rs:15:11
|
-LL | foo::<f64>(); //~ ERROR expected at most 0 type parameters, found 1 type parameter [E0087]
- | ^^^ expected 0 type parameters
+LL | foo::<f64>(); //~ ERROR wrong number of type arguments: expected 0, found 1 [E0087]
+ | ^^^ unexpected type argument
-error[E0087]: too many type parameters provided: expected at most 1 type parameter, found 2 type parameters
+error[E0087]: wrong number of type arguments: expected 1, found 2
--> $DIR/E0087.rs:17:16
|
-LL | bar::<f64, u64>(); //~ ERROR expected at most 1 type parameter, found 2 type parameters [E0087]
- | ^^^ expected 1 type parameter
+LL | bar::<f64, u64>(); //~ ERROR wrong number of type arguments: expected 1, found 2 [E0087]
+ | ^^^ unexpected type argument
error: aborting due to 2 previous errors
-error[E0088]: too many lifetime parameters provided: expected at most 0 lifetime parameters, found 1 lifetime parameter
+error[E0088]: wrong number of lifetime arguments: expected 0, found 1
--> $DIR/E0088.rs:15:9
|
LL | f::<'static>(); //~ ERROR E0088
- | ^^^^^^^ expected 0 lifetime parameters
+ | ^^^^^^^ unexpected lifetime argument
-error[E0088]: too many lifetime parameters provided: expected at most 1 lifetime parameter, found 2 lifetime parameters
+error[E0088]: wrong number of lifetime arguments: expected 1, found 2
--> $DIR/E0088.rs:16:18
|
LL | g::<'static, 'static>(); //~ ERROR E0088
- | ^^^^^^^ expected 1 lifetime parameter
+ | ^^^^^^^ unexpected lifetime argument
error: aborting due to 2 previous errors
fn foo<T, U>() {}
fn main() {
- foo::<f64>(); //~ ERROR expected 2 type parameters, found 1 type parameter [E0089]
+ foo::<f64>(); //~ ERROR wrong number of type arguments: expected 2, found 1 [E0089]
}
-error[E0089]: too few type parameters provided: expected 2 type parameters, found 1 type parameter
+error[E0089]: wrong number of type arguments: expected 2, found 1
--> $DIR/E0089.rs:14:5
|
-LL | foo::<f64>(); //~ ERROR expected 2 type parameters, found 1 type parameter [E0089]
- | ^^^^^^^^^^ expected 2 type parameters
+LL | foo::<f64>(); //~ ERROR wrong number of type arguments: expected 2, found 1 [E0089]
+ | ^^^^^^^^^^ expected 2 type arguments
error: aborting due to previous error
fn foo<'a: 'b, 'b: 'a>() {}
fn main() {
- foo::<'static>(); //~ ERROR expected 2 lifetime parameters, found 1 lifetime parameter [E0090]
+ foo::<'static>(); //~ ERROR wrong number of lifetime arguments: expected 2, found 1 [E0090]
}
-error[E0090]: too few lifetime parameters provided: expected 2 lifetime parameters, found 1 lifetime parameter
+error[E0090]: wrong number of lifetime arguments: expected 2, found 1
--> $DIR/E0090.rs:14:5
|
-LL | foo::<'static>(); //~ ERROR expected 2 lifetime parameters, found 1 lifetime parameter [E0090]
- | ^^^^^^^^^^^^^^ expected 2 lifetime parameters
+LL | foo::<'static>(); //~ ERROR wrong number of lifetime arguments: expected 2, found 1 [E0090]
+ | ^^^^^^^^^^^^^^ expected 2 lifetime arguments
error: aborting due to previous error
struct Baz<'a, 'b, 'c> {
buzz: Buzz<'a>,
//~^ ERROR E0107
- //~| expected 2 lifetime parameters
+ //~| expected 2 lifetime arguments
bar: Bar<'a>,
//~^ ERROR E0107
- //~| unexpected lifetime parameter
+ //~| unexpected lifetime argument
foo2: Foo<'a, 'b, 'c>,
//~^ ERROR E0107
- //~| 2 unexpected lifetime parameters
+ //~| 2 unexpected lifetime arguments
}
-fn main() {
-}
+fn main() {}
-error[E0107]: wrong number of lifetime parameters: expected 2, found 1
+error[E0107]: wrong number of lifetime arguments: expected 2, found 1
--> $DIR/E0107.rs:21:11
|
LL | buzz: Buzz<'a>,
- | ^^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^^ expected 2 lifetime arguments
-error[E0107]: wrong number of lifetime parameters: expected 0, found 1
- --> $DIR/E0107.rs:24:10
+error[E0107]: wrong number of lifetime arguments: expected 0, found 1
+ --> $DIR/E0107.rs:24:14
|
LL | bar: Bar<'a>,
- | ^^^^^^^ unexpected lifetime parameter
+ | ^^ unexpected lifetime argument
-error[E0107]: wrong number of lifetime parameters: expected 1, found 3
+error[E0107]: wrong number of lifetime arguments: expected 1, found 3
--> $DIR/E0107.rs:27:11
|
LL | foo2: Foo<'a, 'b, 'c>,
- | ^^^^^^^^^^^^^^^ 2 unexpected lifetime parameters
+ | ^^^^^^^^^^^^^^^ 2 unexpected lifetime arguments
error: aborting due to 3 previous errors
--> $DIR/E0244.rs:12:23
|
LL | struct Bar<S, T> { x: Foo<S, T> }
- | ^^^^^^^^^ expected no type arguments
+ | ^^^^^^^^^ 2 unexpected type arguments
error: aborting due to previous error
--> $DIR/E0401.rs:32:25
|
LL | impl<T> Iterator for A<T> {
- | ---- `Self` type implicitely declared here, on the `impl`
+ | ---- `Self` type implicitly declared here, on the `impl`
...
LL | fn helper(sel: &Self) -> u8 { //~ ERROR E0401
| ------ ^^^^ use of type variable from outer function
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(exhaustive_integer_patterns)]
+#![feature(exclusive_range_pattern)]
+#![deny(unreachable_patterns)]
+
+use std::{char, usize, u8, u16, u32, u64, u128, isize, i8, i16, i32, i64, i128};
+
+fn main() {
+ let x: u8 = 0;
+
+ // A single range covering the entire domain.
+ match x {
+ 0 ..= 255 => {} // ok
+ }
+
+ // A combination of ranges and values.
+ // These are currently allowed to be overlapping.
+ match x {
+ 0 ..= 32 => {}
+ 33 => {}
+ 34 .. 128 => {}
+ 100 ..= 200 => {}
+ 200 => {} //~ ERROR unreachable pattern
+ 201 ..= 255 => {}
+ }
+
+ // An incomplete set of values.
+ match x { //~ ERROR non-exhaustive patterns
+ 0 .. 128 => {}
+ }
+
+ // A more incomplete set of values.
+ match x { //~ ERROR non-exhaustive patterns
+ 0 ..= 10 => {}
+ 20 ..= 30 => {}
+ 35 => {}
+ 70 .. 255 => {}
+ }
+
+ let x: i8 = 0;
+ match x { //~ ERROR non-exhaustive patterns
+ -7 => {}
+ -5..=120 => {}
+ -2..=20 => {} //~ ERROR unreachable pattern
+ 125 => {}
+ }
+
+ // Let's test other types too!
+ let c: char = '\u{0}';
+ match c {
+ '\u{0}' ..= char::MAX => {} // ok
+ }
+
+ // We can actually get away with just covering the
+ // following two ranges, which correspond to all
+ // valid Unicode Scalar Values.
+ match c {
+ '\u{0000}' ..= '\u{D7FF}' => {}
+ '\u{E000}' ..= '\u{10_FFFF}' => {}
+ }
+
+ match 0usize {
+ 0 ..= usize::MAX => {} // ok
+ }
+
+ match 0u16 {
+ 0 ..= u16::MAX => {} // ok
+ }
+
+ match 0u32 {
+ 0 ..= u32::MAX => {} // ok
+ }
+
+ match 0u64 {
+ 0 ..= u64::MAX => {} // ok
+ }
+
+ match 0u128 {
+ 0 ..= u128::MAX => {} // ok
+ }
+
+ match 0isize {
+ isize::MIN ..= isize::MAX => {} // ok
+ }
+
+ match 0i8 {
+ -128 ..= 127 => {} // ok
+ }
+
+ match 0i8 { //~ ERROR non-exhaustive patterns
+ -127 ..= 127 => {}
+ }
+
+ match 0i16 {
+ i16::MIN ..= i16::MAX => {} // ok
+ }
+
+ match 0i16 { //~ ERROR non-exhaustive patterns
+ i16::MIN ..= -1 => {}
+ 1 ..= i16::MAX => {}
+ }
+
+ match 0i32 {
+ i32::MIN ..= i32::MAX => {} // ok
+ }
+
+ match 0i64 {
+ i64::MIN ..= i64::MAX => {} // ok
+ }
+
+ match 0i128 {
+ i128::MIN ..= i128::MAX => {} // ok
+ }
+
+ // Make sure that guards don't factor into the exhaustiveness checks.
+ match 0u8 { //~ ERROR non-exhaustive patterns
+ 0 .. 128 => {}
+ 128 ..= 255 if true => {}
+ }
+
+ match 0u8 {
+ 0 .. 128 => {}
+ 128 ..= 255 if false => {}
+ 128 ..= 255 => {} // ok, because previous arm was guarded
+ }
+
+ // Now things start getting a bit more interesting. Testing products!
+ match (0u8, Some(())) { //~ ERROR non-exhaustive patterns
+ (1, _) => {}
+ (_, None) => {}
+ }
+
+ match (0u8, true) { //~ ERROR non-exhaustive patterns
+ (0 ..= 125, false) => {}
+ (128 ..= 255, false) => {}
+ (0 ..= 255, true) => {}
+ }
+
+ match (0u8, true) { // ok
+ (0 ..= 125, false) => {}
+ (128 ..= 255, false) => {}
+ (0 ..= 255, true) => {}
+ (125 .. 128, false) => {}
+ }
+
+ match 0u8 { // ok
+ 0 .. 2 => {}
+ 1 ..= 2 => {}
+ _ => {}
+ }
+
+ const LIM: u128 = u128::MAX - 1;
+ match 0u128 { //~ ERROR non-exhaustive patterns
+ 0 ..= LIM => {}
+ }
+
+ match 0u128 { //~ ERROR non-exhaustive patterns
+ 0 ..= 4 => {}
+ }
+
+ match 0u128 { //~ ERROR non-exhaustive patterns
+ 4 ..= u128::MAX => {}
+ }
+}
--- /dev/null
+error: unreachable pattern
+ --> $DIR/exhaustive_integer_patterns.rs:32:9
+ |
+LL | 200 => {} //~ ERROR unreachable pattern
+ | ^^^
+ |
+note: lint level defined here
+ --> $DIR/exhaustive_integer_patterns.rs:13:9
+ |
+LL | #![deny(unreachable_patterns)]
+ | ^^^^^^^^^^^^^^^^^^^^
+
+error[E0004]: non-exhaustive patterns: `128u8..=255u8` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:37:11
+ |
+LL | match x { //~ ERROR non-exhaustive patterns
+ | ^ pattern `128u8..=255u8` not covered
+
+error[E0004]: non-exhaustive patterns: `11u8..=19u8`, `31u8..=34u8`, `36u8..=69u8` and 1 more not covered
+ --> $DIR/exhaustive_integer_patterns.rs:42:11
+ |
+LL | match x { //~ ERROR non-exhaustive patterns
+ | ^ patterns `11u8..=19u8`, `31u8..=34u8`, `36u8..=69u8` and 1 more not covered
+
+error: unreachable pattern
+ --> $DIR/exhaustive_integer_patterns.rs:53:9
+ |
+LL | -2..=20 => {} //~ ERROR unreachable pattern
+ | ^^^^^^^
+
+error[E0004]: non-exhaustive patterns: `-128i8..=-8i8`, `-6i8`, `121i8..=124i8` and 1 more not covered
+ --> $DIR/exhaustive_integer_patterns.rs:50:11
+ |
+LL | match x { //~ ERROR non-exhaustive patterns
+ | ^ patterns `-128i8..=-8i8`, `-6i8`, `121i8..=124i8` and 1 more not covered
+
+error[E0004]: non-exhaustive patterns: `-128i8` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:99:11
+ |
+LL | match 0i8 { //~ ERROR non-exhaustive patterns
+ | ^^^ pattern `-128i8` not covered
+
+error[E0004]: non-exhaustive patterns: `0i16` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:107:11
+ |
+LL | match 0i16 { //~ ERROR non-exhaustive patterns
+ | ^^^^ pattern `0i16` not covered
+
+error[E0004]: non-exhaustive patterns: `128u8..=255u8` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:125:11
+ |
+LL | match 0u8 { //~ ERROR non-exhaustive patterns
+ | ^^^ pattern `128u8..=255u8` not covered
+
+error[E0004]: non-exhaustive patterns: `(0u8, Some(_))` and `(2u8..=255u8, Some(_))` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:137:11
+ |
+LL | match (0u8, Some(())) { //~ ERROR non-exhaustive patterns
+ | ^^^^^^^^^^^^^^^ patterns `(0u8, Some(_))` and `(2u8..=255u8, Some(_))` not covered
+
+error[E0004]: non-exhaustive patterns: `(126u8..=127u8, false)` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:142:11
+ |
+LL | match (0u8, true) { //~ ERROR non-exhaustive patterns
+ | ^^^^^^^^^^^ pattern `(126u8..=127u8, false)` not covered
+
+error[E0004]: non-exhaustive patterns: `340282366920938463463374607431768211455u128` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:162:11
+ |
+LL | match 0u128 { //~ ERROR non-exhaustive patterns
+ | ^^^^^ pattern `340282366920938463463374607431768211455u128` not covered
+
+error[E0004]: non-exhaustive patterns: `5u128..=340282366920938463463374607431768211455u128` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:166:11
+ |
+LL | match 0u128 { //~ ERROR non-exhaustive patterns
+ | ^^^^^ pattern `5u128..=340282366920938463463374607431768211455u128` not covered
+
+error[E0004]: non-exhaustive patterns: `0u128..=3u128` not covered
+ --> $DIR/exhaustive_integer_patterns.rs:170:11
+ |
+LL | match 0u128 { //~ ERROR non-exhaustive patterns
+ | ^^^^^ pattern `0u128..=3u128` not covered
+
+error: aborting due to 13 previous errors
+
+For more information about this error, try `rustc --explain E0004`.
// extern functions are extern "C" fn
let _x: extern "C" fn() = f; // OK
is_fn(f);
- //~^ ERROR `extern "C" fn() {f}: std::ops::Fn<()>` is not satisfied
+ //~^ ERROR expected a `std::ops::Fn<()>` closure, found `extern "C" fn() {f}`
}
-error[E0277]: the trait bound `extern "C" fn() {f}: std::ops::Fn<()>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<()>` closure, found `extern "C" fn() {f}`
--> $DIR/extern-wrong-value-type.rs:19:5
|
LL | is_fn(f);
- | ^^^^^ the trait `std::ops::Fn<()>` is not implemented for `extern "C" fn() {f}`
+ | ^^^^^ expected an `Fn<()>` closure, found `extern "C" fn() {f}`
|
+ = help: the trait `std::ops::Fn<()>` is not implemented for `extern "C" fn() {f}`
+ = note: wrap the `extern "C" fn() {f}` in a closure with no arguments: `|| { /* code */ }`
note: required by `is_fn`
--> $DIR/extern-wrong-value-type.rs:14:1
|
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ let x: u8 = 0;
+ match x { //~ ERROR non-exhaustive patterns: `_` not covered
+ 0 ..= 255 => {}
+ }
+}
--- /dev/null
+error[E0004]: non-exhaustive patterns: `_` not covered
+ --> $DIR/feature-gate-exhaustive_integer_patterns.rs:13:11
+ |
+LL | match x { //~ ERROR non-exhaustive patterns: `_` not covered
+ | ^ pattern `_` not covered
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0004`.
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:17:1
+ --> $DIR/feature-gate-custom_attribute.rs:17:3
|
LL | #[fake_attr] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:18:1
+ --> $DIR/feature-gate-custom_attribute.rs:18:3
|
LL | #[fake_attr(100)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:19:1
+ --> $DIR/feature-gate-custom_attribute.rs:19:3
|
LL | #[fake_attr(1, 2, 3)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:20:1
+ --> $DIR/feature-gate-custom_attribute.rs:20:3
|
LL | #[fake_attr("hello")] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:21:1
+ --> $DIR/feature-gate-custom_attribute.rs:21:3
|
LL | #[fake_attr(name = "hello")] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:22:1
+ --> $DIR/feature-gate-custom_attribute.rs:22:3
|
LL | #[fake_attr(1, "hi", key = 12, true, false)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:23:1
+ --> $DIR/feature-gate-custom_attribute.rs:23:3
|
LL | #[fake_attr(key = "hello", val = 10)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:24:1
+ --> $DIR/feature-gate-custom_attribute.rs:24:3
|
LL | #[fake_attr(key("hello"), val(10))] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:25:1
+ --> $DIR/feature-gate-custom_attribute.rs:25:3
|
LL | #[fake_attr(enabled = true, disabled = false)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:26:1
+ --> $DIR/feature-gate-custom_attribute.rs:26:3
|
LL | #[fake_attr(true)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:27:1
+ --> $DIR/feature-gate-custom_attribute.rs:27:3
|
LL | #[fake_attr(pi = 3.14159)] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_attr` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:28:1
+ --> $DIR/feature-gate-custom_attribute.rs:28:3
|
LL | #[fake_attr(b"hi")] //~ ERROR attribute `fake_attr` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `fake_doc` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/feature-gate-custom_attribute.rs:29:1
+ --> $DIR/feature-gate-custom_attribute.rs:29:3
|
LL | #[fake_doc(r"doc")] //~ ERROR attribute `fake_doc` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: attributes of the form `#[derive_*]` are reserved for the compiler (see issue #29644)
- --> $DIR/feature-gate-custom_derive.rs:11:1
+ --> $DIR/feature-gate-custom_derive.rs:11:3
|
LL | #[derive_Clone]
- | ^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add #![feature(custom_derive)] to the crate attributes to enable
+++ /dev/null
-// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// Test that the MSP430 interrupt ABI cannot be used when msp430_interrupt
-// feature gate is not used.
-
-macro_rules! m { ($v:vis) => {} }
-//~^ ERROR :vis fragment specifier is experimental and subject to change
-
-fn main() {
- m!(pub);
-}
+++ /dev/null
-error[E0658]: :vis fragment specifier is experimental and subject to change (see issue #41022)
- --> $DIR/feature-gate-macro-vis-matcher.rs:14:19
- |
-LL | macro_rules! m { ($v:vis) => {} }
- | ^^^^^^
- |
- = help: add #![feature(macro_vis_matcher)] to the crate attributes to enable
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0658`.
+++ /dev/null
-// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-fn main() {
- let r#foo = 3; //~ ERROR raw identifiers are experimental and subject to change
- println!("{}", foo);
-}
+++ /dev/null
-error[E0658]: raw identifiers are experimental and subject to change (see issue #48589)
- --> $DIR/feature-gate-raw-identifiers.rs:12:9
- |
-LL | let r#foo = 3; //~ ERROR raw identifiers are experimental and subject to change
- | ^^^^^
- |
- = help: add #![feature(raw_identifiers)] to the crate attributes to enable
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0658`.
error[E0658]: unless otherwise specified, attributes with the prefix `rustc_` are reserved for internal compiler diagnostics (see issue #29642)
- --> $DIR/feature-gate-rustc-attrs.rs:15:1
+ --> $DIR/feature-gate-rustc-attrs.rs:15:3
|
LL | #[rustc_foo]
- | ^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(rustc_attrs)] to the crate attributes to enable
error[E0658]: tool attributes are unstable (see issue #44690)
- --> $DIR/feature-gate-tool_attributes.rs:12:5
+ --> $DIR/feature-gate-tool_attributes.rs:12:7
|
LL | #[rustfmt::skip] //~ ERROR tool attributes are unstable
- | ^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^
|
= help: add #![feature(tool_attributes)] to the crate attributes to enable
//~| found type `std::boxed::Box<dyn std::ops::FnMut() -> isize>`
needs_fn(1);
- //~^ ERROR : std::ops::Fn<(isize,)>`
+ //~^ ERROR expected a `std::ops::Fn<(isize,)>` closure, found `{integer}`
}
= note: expected type `()`
found type `std::boxed::Box<dyn std::ops::FnMut() -> isize>`
-error[E0277]: the trait bound `{integer}: std::ops::Fn<(isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<(isize,)>` closure, found `{integer}`
--> $DIR/fn-trait-formatting.rs:29:5
|
LL | needs_fn(1);
- | ^^^^^^^^ the trait `std::ops::Fn<(isize,)>` is not implemented for `{integer}`
+ | ^^^^^^^^ expected an `Fn<(isize,)>` closure, found `{integer}`
|
- = help: the following implementations were found:
- <&'a F as std::ops::Fn<A>>
- <core::str::LinesAnyMap as std::ops::Fn<(&'a str,)>>
+ = help: the trait `std::ops::Fn<(isize,)>` is not implemented for `{integer}`
note: required by `needs_fn`
--> $DIR/fn-trait-formatting.rs:13:1
|
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+struct Foo<'a, T: 'a>(&'a T);
+
+struct Bar<'a>(&'a ());
+
+fn main() {
+ Foo::<'static, 'static, ()>(&0); //~ ERROR wrong number of lifetime arguments
+ //~^ ERROR mismatched types
+
+ Bar::<'static, 'static, ()>(&()); //~ ERROR wrong number of lifetime arguments
+ //~^ ERROR wrong number of type arguments
+}
--- /dev/null
+error[E0088]: wrong number of lifetime arguments: expected 1, found 2
+ --> $DIR/generic-arg-mismatch-recover.rs:16:20
+ |
+LL | Foo::<'static, 'static, ()>(&0); //~ ERROR wrong number of lifetime arguments
+ | ^^^^^^^ unexpected lifetime argument
+
+error[E0308]: mismatched types
+ --> $DIR/generic-arg-mismatch-recover.rs:16:33
+ |
+LL | Foo::<'static, 'static, ()>(&0); //~ ERROR wrong number of lifetime arguments
+ | ^^ expected (), found integral variable
+ |
+ = note: expected type `&'static ()`
+ found type `&{integer}`
+
+error[E0088]: wrong number of lifetime arguments: expected 1, found 2
+ --> $DIR/generic-arg-mismatch-recover.rs:19:20
+ |
+LL | Bar::<'static, 'static, ()>(&()); //~ ERROR wrong number of lifetime arguments
+ | ^^^^^^^ unexpected lifetime argument
+
+error[E0087]: wrong number of type arguments: expected 0, found 1
+ --> $DIR/generic-arg-mismatch-recover.rs:19:29
+ |
+LL | Bar::<'static, 'static, ()>(&()); //~ ERROR wrong number of lifetime arguments
+ | ^^ unexpected type argument
+
+error: aborting due to 4 previous errors
+
+Some errors occurred: E0087, E0088, E0308.
+For more information about an error, try `rustc --explain E0087`.
--> $DIR/generic-impl-more-params-with-defaults.rs:23:5
|
LL | Vec::<isize, Heap, bool>::new();
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected at most 2 type arguments
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected at most 2 type arguments
error: aborting due to previous error
--> $DIR/generic-type-more-params-with-defaults.rs:19:12
|
LL | let _: Vec<isize, Heap, bool>;
- | ^^^^^^^^^^^^^^^^^^^^^^ expected at most 2 type arguments
+ | ^^^^^^^^^^^^^^^^^^^^^^ expected at most 2 type arguments
error: aborting due to previous error
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-pass
+
+mod m {
+ pub struct S(u8);
+
+ use S as Z;
+}
+
+use m::*;
+
+fn main() {}
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Ambiguity between a `macro_rules` macro and a non-existent import recovered as `Def::Err`
+
+macro_rules! mac { () => () }
+
+mod m {
+ use nonexistent_module::mac; //~ ERROR unresolved import `nonexistent_module`
+
+ mac!(); //~ ERROR `mac` is ambiguous
+}
+
+fn main() {}
--- /dev/null
+error[E0432]: unresolved import `nonexistent_module`
+ --> $DIR/issue-53269.rs:16:9
+ |
+LL | use nonexistent_module::mac; //~ ERROR unresolved import `nonexistent_module`
+ | ^^^^^^^^^^^^^^^^^^ Maybe a missing `extern crate nonexistent_module;`?
+
+error[E0659]: `mac` is ambiguous
+ --> $DIR/issue-53269.rs:18:5
+ |
+LL | mac!(); //~ ERROR `mac` is ambiguous
+ | ^^^
+ |
+note: `mac` could refer to the name defined here
+ --> $DIR/issue-53269.rs:13:1
+ |
+LL | macro_rules! mac { () => () }
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+note: `mac` could also refer to the name imported here
+ --> $DIR/issue-53269.rs:16:9
+ |
+LL | use nonexistent_module::mac; //~ ERROR unresolved import `nonexistent_module`
+ | ^^^^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to 2 previous errors
+
+Some errors occurred: E0432, E0659.
+For more information about an error, try `rustc --explain E0432`.
--- /dev/null
+// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Macro from prelude is shadowed by non-existent import recovered as `Def::Err`.
+
+use std::assert; //~ ERROR unresolved import `std::assert`
+
+fn main() {
+ assert!(true);
+}
--- /dev/null
+error[E0432]: unresolved import `std::assert`
+ --> $DIR/issue-53512.rs:13:5
+ |
+LL | use std::assert; //~ ERROR unresolved import `std::assert`
+ | ^^^^^^^^^^^ no `assert` in the root
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0432`.
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// `#[macro_export] macro_rules` that doen't originate from macro expansions can be placed
+// `#[macro_export] macro_rules` that doesn't originate from macro expansions can be placed
// into the root module soon enough to act as usual items and shadow globs and preludes.
#![feature(decl_macro)]
-error: `m` is ambiguous
+error[E0659]: `m` is ambiguous
--> $DIR/macros.rs:48:5
|
LL | m!(); //~ ERROR ambiguous
| ^
|
-note: `m` could refer to the macro defined here
+note: `m` could refer to the name defined here
--> $DIR/macros.rs:46:5
|
LL | macro_rules! m { () => {} }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
-note: `m` could also refer to the macro imported here
+note: `m` could also refer to the name imported here
--> $DIR/macros.rs:47:9
|
LL | use two_macros::m;
mod m5 {
macro_rules! m { () => {
- macro_rules! panic { () => {} } //~ ERROR `panic` is already in scope
+ macro_rules! panic { () => {} }
} }
m!();
- panic!();
+ panic!(); //~ ERROR `panic` is ambiguous
}
#[macro_use(n)]
-error: `panic` is already in scope
+error[E0659]: `panic` is ambiguous
+ --> $DIR/shadow_builtin_macros.rs:43:5
+ |
+LL | panic!(); //~ ERROR `panic` is ambiguous
+ | ^^^^^
+ |
+note: `panic` could refer to the name defined here
--> $DIR/shadow_builtin_macros.rs:40:9
|
-LL | macro_rules! panic { () => {} } //~ ERROR `panic` is already in scope
+LL | macro_rules! panic { () => {} }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
LL | } }
LL | m!();
| ----- in this macro invocation
- |
- = note: macro-expanded `macro_rules!`s may not shadow existing macros (see RFC 1560)
+ = note: `panic` is also a builtin macro
+ = note: macro-expanded macros do not shadow
error[E0659]: `panic` is ambiguous
--> $DIR/shadow_builtin_macros.rs:25:14
$(
fn $n() {
S::f::<i64>();
- //~^ ERROR too many type parameters provided
+ //~^ ERROR wrong number of type arguments
}
)*
}
}
impl_add!(a b);
+
+fn main() {}
-error[E0601]: `main` function not found in crate `issue_53251`
- |
- = note: consider adding a `main` function to `$DIR/issue-53251.rs`
-
-error[E0087]: too many type parameters provided: expected at most 0 type parameters, found 1 type parameter
+error[E0087]: wrong number of type arguments: expected 0, found 1
--> $DIR/issue-53251.rs:21:24
|
LL | S::f::<i64>();
- | ^^^ expected 0 type parameters
+ | ^^^ unexpected type argument
...
LL | impl_add!(a b);
| --------------- in this macro invocation
-error: aborting due to 2 previous errors
+error: aborting due to previous error
-Some errors occurred: E0087, E0601.
-For more information about an error, try `rustc --explain E0087`.
+For more information about this error, try `rustc --explain E0087`.
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// Test that `Box` cannot be used with a lifetime parameter.
+// Test that `Box` cannot be used with a lifetime argument.
struct Foo<'a> {
- x: Box<'a, isize> //~ ERROR wrong number of lifetime parameters
+ x: Box<'a, isize> //~ ERROR wrong number of lifetime arguments
}
pub fn main() {
-error[E0107]: wrong number of lifetime parameters: expected 0, found 1
- --> $DIR/issue-18423.rs:14:8
+error[E0107]: wrong number of lifetime arguments: expected 0, found 1
+ --> $DIR/issue-18423.rs:14:12
|
-LL | x: Box<'a, isize> //~ ERROR wrong number of lifetime parameters
- | ^^^^^^^^^^^^^^ unexpected lifetime parameter
+LL | x: Box<'a, isize> //~ ERROR wrong number of lifetime arguments
+ | ^^ unexpected lifetime argument
error: aborting due to previous error
let ptr: *mut () = 0 as *mut _;
let _: &mut Fn() = unsafe {
&mut *(ptr as *mut Fn())
- //~^ ERROR `(): std::ops::Fn<()>` is not satisfied
+ //~^ ERROR expected a `std::ops::Fn<()>` closure, found `()`
};
}
-error[E0277]: the trait bound `(): std::ops::Fn<()>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<()>` closure, found `()`
--> $DIR/issue-22034.rs:18:16
|
LL | &mut *(ptr as *mut Fn())
- | ^^^ the trait `std::ops::Fn<()>` is not implemented for `()`
+ | ^^^ expected an `Fn<()>` closure, found `()`
|
+ = help: the trait `std::ops::Fn<()>` is not implemented for `()`
+ = note: wrap the `()` in a closure with no arguments: `|| { /* code */ }
= note: required for the cast to the object type `dyn std::ops::Fn()`
error: aborting due to previous error
-error[E0277]: the trait bound `(): std::ops::FnMut<(_, char)>` is not satisfied
+error[E0277]: expected a `std::ops::FnMut<(_, char)>` closure, found `()`
--> $DIR/issue-23966.rs:12:16
|
LL | "".chars().fold(|_, _| (), ());
- | ^^^^ the trait `std::ops::FnMut<(_, char)>` is not implemented for `()`
+ | ^^^^ expected an `FnMut<(_, char)>` closure, found `()`
+ |
+ = help: the trait `std::ops::FnMut<(_, char)>` is not implemented for `()`
error: aborting due to previous error
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// Checks lexical scopes cannot see through normal module boundries
+// Checks lexical scopes cannot see through normal module boundaries
fn f() {
fn g() {}
| ^ use of type variable from outer function
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/issue-3214.rs:16:22
+ --> $DIR/issue-3214.rs:16:26
|
LL | impl<T> Drop for foo<T> {
- | ^^^^^^ expected no type arguments
+ | ^ unexpected type argument
error: aborting due to 2 previous errors
error[E0658]: attributes of the form `#[derive_*]` are reserved for the compiler (see issue #29644)
- --> $DIR/issue-32655.rs:16:9
+ --> $DIR/issue-32655.rs:16:11
|
LL | #[derive_Clone] //~ ERROR attributes of the form
- | ^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
...
LL | foo!();
| ------- in this macro invocation
= help: add #![feature(custom_derive)] to the crate attributes to enable
error[E0658]: attributes of the form `#[derive_*]` are reserved for the compiler (see issue #29644)
- --> $DIR/issue-32655.rs:28:5
+ --> $DIR/issue-32655.rs:28:7
|
LL | #[derive_Clone] //~ ERROR attributes of the form
- | ^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^
|
= help: add #![feature(custom_derive)] to the crate attributes to enable
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
macro_rules! foo {
($($p:vis)*) => {} //~ ERROR repetition matches empty token tree
// run-pass
-// This test has structs and functions that are by definiton unusable
+// This test has structs and functions that are by definition unusable
// all over the place, so just go ahead and allow dead_code
#![allow(dead_code)]
error[E0658]: The attribute `marco_use` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/issue-49074.rs:13:1
+ --> $DIR/issue-49074.rs:13:3
|
LL | #[marco_use] // typo
- | ^^^^^^^^^^^^
+ | ^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// Confirm that we don't accidently divide or mod by zero in llvm_type
+// Confirm that we don't accidentally divide or mod by zero in llvm_type
// compile-pass
#![feature(label_break_value)]
-// These are forbidden occurences of label-break-value
+// These are forbidden occurrences of label-break-value
fn labeled_unsafe() {
unsafe 'b: {} //~ ERROR expected one of `extern`, `fn`, or `{`
// except according to those terms.
// FIXME: Change to UI Test
-// Check notes are placed on an assignment that can actually precede the current assigmnent
-// Don't emmit a first assignment for assignment in a loop.
+// Check notes are placed on an assignment that can actually precede the current assignment
+// Don't emit a first assignment for assignment in a loop.
// compile-flags: -Zborrowck=compare
use other::*;
mod foo {
- // Test that this is unused even though an earler `extern crate` is used.
+ // Test that this is unused even though an earlier `extern crate` is used.
extern crate lint_unused_extern_crate2; //~ ERROR unused extern crate
}
// compile-pass
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![allow(unused)]
#![warn(unreachable_pub)]
// compile-pass
#![feature(crate_visibility_modifier)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
#![allow(unused)]
#![warn(unreachable_pub)]
| ^^^^^^^^^^^^^^
error[E0658]: The attribute `macro_reexport` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/macro-reexport-removed.rs:15:1
+ --> $DIR/macro-reexport-removed.rs:15:3
|
LL | #[macro_reexport(macro_one)] //~ ERROR attribute `macro_reexport` is currently unknown
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
fn method_call() {
S.early(); // OK
S.early::<'static>();
- //~^ ERROR expected 2 lifetime parameters, found 1 lifetime parameter
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 1
S.early::<'static, 'static, 'static>();
- //~^ ERROR expected at most 2 lifetime parameters, found 3 lifetime parameters
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 3
let _: &u8 = S.life_and_type::<'static>();
S.life_and_type::<u8>();
S.life_and_type::<'static, u8>();
S::early(S); // OK
S::early::<'static>(S);
- //~^ ERROR expected 2 lifetime parameters, found 1 lifetime parameter
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 1
S::early::<'static, 'static, 'static>(S);
- //~^ ERROR expected at most 2 lifetime parameters, found 3 lifetime parameters
+ //~^ ERROR wrong number of lifetime arguments: expected 2, found 3
let _: &u8 = S::life_and_type::<'static>(S);
S::life_and_type::<u8>(S);
S::life_and_type::<'static, u8>(S);
-error[E0090]: too few lifetime parameters provided: expected 2 lifetime parameters, found 1 lifetime parameter
+error[E0090]: wrong number of lifetime arguments: expected 2, found 1
--> $DIR/method-call-lifetime-args-fail.rs:26:7
|
LL | S.early::<'static>();
- | ^^^^^ expected 2 lifetime parameters
+ | ^^^^^ expected 2 lifetime arguments
-error[E0088]: too many lifetime parameters provided: expected at most 2 lifetime parameters, found 3 lifetime parameters
+error[E0088]: wrong number of lifetime arguments: expected 2, found 3
--> $DIR/method-call-lifetime-args-fail.rs:28:33
|
LL | S.early::<'static, 'static, 'static>();
- | ^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^ unexpected lifetime argument
error: cannot specify lifetime arguments explicitly if late bound lifetime parameters are present
--> $DIR/method-call-lifetime-args-fail.rs:37:15
LL | fn late_unused_early<'a, 'b>(self) -> &'b u8 { loop {} }
| ^^
-error[E0090]: too few lifetime parameters provided: expected 2 lifetime parameters, found 1 lifetime parameter
+error[E0090]: wrong number of lifetime arguments: expected 2, found 1
--> $DIR/method-call-lifetime-args-fail.rs:73:5
|
LL | S::early::<'static>(S);
- | ^^^^^^^^^^^^^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^^^^^^^^^^^^^ expected 2 lifetime arguments
-error[E0088]: too many lifetime parameters provided: expected at most 2 lifetime parameters, found 3 lifetime parameters
+error[E0088]: wrong number of lifetime arguments: expected 2, found 3
--> $DIR/method-call-lifetime-args-fail.rs:75:34
|
LL | S::early::<'static, 'static, 'static>(S);
- | ^^^^^^^ expected 2 lifetime parameters
+ | ^^^^^^^ unexpected lifetime argument
error: aborting due to 18 previous errors
// compile-flags: -Z parse-only
-#![feature(raw_identifiers)]
-
fn test_if() {
r#if true { } //~ ERROR found `true`
}
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `true`
- --> $DIR/raw-literal-keywords.rs:16:10
+ --> $DIR/raw-literal-keywords.rs:14:10
|
LL | r#if true { } //~ ERROR found `true`
| ^^^^ expected one of 8 possible tokens here
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `Test`
- --> $DIR/raw-literal-keywords.rs:20:14
+ --> $DIR/raw-literal-keywords.rs:18:14
|
LL | r#struct Test; //~ ERROR found `Test`
| ^^^^ expected one of 8 possible tokens here
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `Test`
- --> $DIR/raw-literal-keywords.rs:24:13
+ --> $DIR/raw-literal-keywords.rs:22:13
|
LL | r#union Test; //~ ERROR found `Test`
| ^^^^ expected one of 8 possible tokens here
// compile-flags: -Z parse-only
-#![feature(raw_identifiers)]
-
fn self_test(r#self: u32) {
//~^ ERROR `r#self` is not currently supported.
}
error: `r#self` is not currently supported.
- --> $DIR/raw-literal-self.rs:15:14
+ --> $DIR/raw-literal-self.rs:13:14
|
LL | fn self_test(r#self: u32) {
| ^^^^^^
error[E0658]: unless otherwise specified, attributes with the prefix `rustc_` are reserved for internal compiler diagnostics (see issue #29642)
- --> $DIR/reserved-attr-on-macro.rs:11:1
+ --> $DIR/reserved-attr-on-macro.rs:11:3
|
LL | #[rustc_attribute_should_be_reserved] //~ ERROR attributes with the prefix `rustc_` are reserved
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: add #![feature(rustc_attrs)] to the crate attributes to enable
#![feature(generic_associated_types)]
//FIXME(#44265): The lifetime shadowing and type parameter shadowing
-// should cause an error. Now it compiles (errorneously) and this will be addressed
+// should cause an error. Now it compiles (erroneously) and this will be addressed
// by a future PR. Then remove the following:
// compile-pass
#![deny(rust_2018_compatibility)]
-// Don't make a suggestion for a raw identifer replacement unless raw
+// Don't make a suggestion for a raw identifier replacement unless raw
// identifiers are enabled.
fn main() {
--> $DIR/async-ident-allowed.rs:19:9
|
LL | let async = 3; //~ ERROR: is a keyword
- | ^^^^^
+ | ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
|
note: lint level defined here
--> $DIR/async-ident-allowed.rs:13:9
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(raw_identifiers)]
#![allow(dead_code, unused_variables, non_camel_case_types, non_upper_case_globals)]
#![deny(async_idents)]
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(raw_identifiers)]
#![allow(dead_code, unused_variables, non_camel_case_types, non_upper_case_globals)]
#![deny(async_idents)]
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:18:4
+ --> $DIR/async-ident.rs:17:4
|
LL | fn async() {} //~ ERROR async
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
|
note: lint level defined here
- --> $DIR/async-ident.rs:13:9
+ --> $DIR/async-ident.rs:12:9
|
LL | #![deny(async_idents)]
| ^^^^^^^^^^^^
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:23:7
+ --> $DIR/async-ident.rs:22:7
|
LL | ($async:expr, async) => {};
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:23:19
+ --> $DIR/async-ident.rs:22:19
|
LL | ($async:expr, async) => {};
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:37:11
+ --> $DIR/async-ident.rs:36:11
|
LL | trait async {}
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:41:10
+ --> $DIR/async-ident.rs:40:10
|
LL | impl async for MyStruct {}
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:47:12
+ --> $DIR/async-ident.rs:46:12
|
LL | static async: u32 = 0;
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:53:11
+ --> $DIR/async-ident.rs:52:11
|
LL | const async: u32 = 0;
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:59:15
+ --> $DIR/async-ident.rs:58:15
|
LL | impl Foo { fn async() {} }
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:64:12
+ --> $DIR/async-ident.rs:63:12
|
LL | struct async {}
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:67:9
+ --> $DIR/async-ident.rs:66:9
|
LL | let async: async = async {};
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:67:16
+ --> $DIR/async-ident.rs:66:16
|
LL | let async: async = async {};
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:67:24
+ --> $DIR/async-ident.rs:66:24
|
LL | let async: async = async {};
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:78:19
+ --> $DIR/async-ident.rs:77:19
|
LL | () => (pub fn async() {})
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
= note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
error: `async` is a keyword in the 2018 edition
- --> $DIR/async-ident.rs:85:6
+ --> $DIR/async-ident.rs:84:6
|
LL | (async) => (1)
| ^^^^^ help: you can use a raw identifier to stay compatible: `r#async`
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/seq-args.rs:14:9
+ --> $DIR/seq-args.rs:14:13
|
LL | impl<T> seq<T> for Vec<T> { //~ ERROR wrong number of type arguments
- | ^^^^^^ expected no type arguments
+ | ^ unexpected type argument
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/seq-args.rs:17:6
+ --> $DIR/seq-args.rs:17:10
|
LL | impl seq<bool> for u32 { //~ ERROR wrong number of type arguments
- | ^^^^^^^^^ expected no type arguments
+ | ^^^^ unexpected type argument
error: aborting due to 2 previous errors
// except according to those terms.
// Test that we DO NOT warn when lifetime name is used multiple
-// argments, or more than once in a single argument.
+// arguments, or more than once in a single argument.
//
// compile-pass
error[E0658]: The attribute `foo` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/issue-36530.rs:11:1
+ --> $DIR/issue-36530.rs:11:3
|
LL | #[foo] //~ ERROR is currently unknown to the compiler
- | ^^^^^^
+ | ^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
error[E0658]: The attribute `foo` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
- --> $DIR/issue-36530.rs:13:5
+ --> $DIR/issue-36530.rs:13:8
|
LL | #![foo] //~ ERROR is currently unknown to the compiler
- | ^^^^^^^
+ | ^^^
|
= help: add #![feature(custom_attribute)] to the crate attributes to enable
found type `{integer}`
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/structure-constructor-type-mismatch.rs:58:15
+ --> $DIR/structure-constructor-type-mismatch.rs:58:24
|
LL | let pt3 = PointF::<i32> { //~ ERROR wrong number of type arguments
- | ^^^^^^^^^^^^^ expected no type arguments
+ | ^^^ unexpected type argument
error[E0308]: mismatched types
--> $DIR/structure-constructor-type-mismatch.rs:59:12
found type `{integer}`
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/structure-constructor-type-mismatch.rs:64:9
+ --> $DIR/structure-constructor-type-mismatch.rs:64:18
|
LL | PointF::<u32> { .. } => {} //~ ERROR wrong number of type arguments
- | ^^^^^^^^^^^^^ expected no type arguments
+ | ^^^ unexpected type argument
error[E0308]: mismatched types
--> $DIR/structure-constructor-type-mismatch.rs:64:9
let _: S<'static, 'static +>;
//~^ at least one non-builtin trait is required for an object type
let _: S<'static, 'static>;
- //~^ ERROR wrong number of lifetime parameters: expected 1, found 2
+ //~^ ERROR wrong number of lifetime arguments: expected 1, found 2
//~| ERROR wrong number of type arguments: expected 1, found 0
let _: S<'static +, 'static>;
//~^ ERROR lifetime parameters must be declared prior to type parameters
LL | let _: S<'static, 'static +>;
| ^^^^^^^^^
-error[E0107]: wrong number of lifetime parameters: expected 1, found 2
- --> $DIR/trait-object-vs-lifetime.rs:23:12
+error[E0107]: wrong number of lifetime arguments: expected 1, found 2
+ --> $DIR/trait-object-vs-lifetime.rs:23:23
|
LL | let _: S<'static, 'static>;
- | ^^^^^^^^^^^^^^^^^^^ unexpected lifetime parameter
+ | ^^^^^^^ unexpected lifetime argument
error[E0243]: wrong number of type arguments: expected 1, found 0
--> $DIR/trait-object-vs-lifetime.rs:23:12
impl bar for u32 { fn dup(&self) -> u32 { *self } fn blah<X>(&self) {} }
fn main() {
- 10.dup::<i32>(); //~ ERROR expected at most 0 type parameters, found 1 type parameter
- 10.blah::<i32, i32>(); //~ ERROR expected at most 1 type parameter, found 2 type parameters
+ 10.dup::<i32>(); //~ ERROR wrong number of type arguments: expected 0, found 1
+ 10.blah::<i32, i32>(); //~ ERROR wrong number of type arguments: expected 1, found 2
(box 10 as Box<bar>).dup();
//~^ ERROR E0038
//~| ERROR E0038
-error[E0087]: too many type parameters provided: expected at most 0 type parameters, found 1 type parameter
+error[E0087]: wrong number of type arguments: expected 0, found 1
--> $DIR/trait-test-2.rs:18:14
|
-LL | 10.dup::<i32>(); //~ ERROR expected at most 0 type parameters, found 1 type parameter
- | ^^^ expected 0 type parameters
+LL | 10.dup::<i32>(); //~ ERROR wrong number of type arguments: expected 0, found 1
+ | ^^^ unexpected type argument
-error[E0087]: too many type parameters provided: expected at most 1 type parameter, found 2 type parameters
+error[E0087]: wrong number of type arguments: expected 1, found 2
--> $DIR/trait-test-2.rs:19:20
|
-LL | 10.blah::<i32, i32>(); //~ ERROR expected at most 1 type parameter, found 2 type parameters
- | ^^^ expected 1 type parameter
+LL | 10.blah::<i32, i32>(); //~ ERROR wrong number of type arguments: expected 1, found 2
+ | ^^^ unexpected type argument
error[E0277]: the trait bound `dyn bar: bar` is not satisfied
--> $DIR/trait-test-2.rs:20:26
//~^ ERROR wrong number of type arguments: expected 0, found 1 [E0244]
struct MyStruct2<'a, T: Copy<'a>>;
-//~^ ERROR: wrong number of lifetime parameters: expected 0, found 1
+//~^ ERROR: wrong number of lifetime arguments: expected 0, found 1
fn foo2<'a, T:Copy<'a, U>, U>(x: T) {}
//~^ ERROR wrong number of type arguments: expected 0, found 1 [E0244]
-//~| ERROR: wrong number of lifetime parameters: expected 0, found 1
+//~| ERROR: wrong number of lifetime arguments: expected 0, found 1
fn main() {
}
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/typeck-builtin-bound-type-parameters.rs:11:11
+ --> $DIR/typeck-builtin-bound-type-parameters.rs:11:16
|
LL | fn foo1<T:Copy<U>, U>(x: T) {}
- | ^^^^^^^ expected no type arguments
+ | ^ unexpected type argument
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/typeck-builtin-bound-type-parameters.rs:14:14
+ --> $DIR/typeck-builtin-bound-type-parameters.rs:14:19
|
LL | trait Trait: Copy<Send> {}
- | ^^^^^^^^^^ expected no type arguments
+ | ^^^^ unexpected type argument
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/typeck-builtin-bound-type-parameters.rs:17:21
+ --> $DIR/typeck-builtin-bound-type-parameters.rs:17:26
|
LL | struct MyStruct1<T: Copy<T>>;
- | ^^^^^^^ expected no type arguments
+ | ^ unexpected type argument
-error[E0107]: wrong number of lifetime parameters: expected 0, found 1
- --> $DIR/typeck-builtin-bound-type-parameters.rs:20:25
+error[E0107]: wrong number of lifetime arguments: expected 0, found 1
+ --> $DIR/typeck-builtin-bound-type-parameters.rs:20:30
|
LL | struct MyStruct2<'a, T: Copy<'a>>;
- | ^^^^^^^^ unexpected lifetime parameter
+ | ^^ unexpected lifetime argument
-error[E0107]: wrong number of lifetime parameters: expected 0, found 1
- --> $DIR/typeck-builtin-bound-type-parameters.rs:24:15
+error[E0107]: wrong number of lifetime arguments: expected 0, found 1
+ --> $DIR/typeck-builtin-bound-type-parameters.rs:24:20
|
LL | fn foo2<'a, T:Copy<'a, U>, U>(x: T) {}
- | ^^^^^^^^^^^ unexpected lifetime parameter
+ | ^^ unexpected lifetime argument
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/typeck-builtin-bound-type-parameters.rs:24:15
+ --> $DIR/typeck-builtin-bound-type-parameters.rs:24:24
|
LL | fn foo2<'a, T:Copy<'a, U>, U>(x: T) {}
- | ^^^^^^^^^^^ expected no type arguments
+ | ^ unexpected type argument
error: aborting due to 6 previous errors
error[E0244]: wrong number of type arguments: expected 1, found 2
- --> $DIR/typeck_type_placeholder_lifetime_1.rs:19:12
+ --> $DIR/typeck_type_placeholder_lifetime_1.rs:19:19
|
LL | let c: Foo<_, _> = Foo { r: &5 };
- | ^^^^^^^^^ expected 1 type argument
+ | ^ unexpected type argument
error: aborting due to previous error
error[E0244]: wrong number of type arguments: expected 1, found 2
- --> $DIR/typeck_type_placeholder_lifetime_2.rs:19:12
+ --> $DIR/typeck_type_placeholder_lifetime_2.rs:19:19
|
LL | let c: Foo<_, usize> = Foo { r: &5 };
- | ^^^^^^^^^^^^^ expected 1 type argument
+ | ^^^^^ unexpected type argument
error: aborting due to previous error
fn main() {
<String as IntoCow>::into_cow("foo".to_string());
- //~^ ERROR too few type parameters provided: expected 1 type parameter
+ //~^ ERROR wrong number of type arguments: expected 1, found 0
}
-error[E0089]: too few type parameters provided: expected 1 type parameter, found 0 type parameters
+error[E0089]: wrong number of type arguments: expected 1, found 0
--> $DIR/ufcs-qpath-missing-params.rs:24:5
|
LL | <String as IntoCow>::into_cow("foo".to_string());
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected 1 type parameter
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected 1 type argument
error: aborting due to previous error
}
fn test2(x: &Foo<(isize,),Output=()>, y: &Foo(isize)) {
-//~^ ERROR wrong number of lifetime parameters: expected 1, found 0
+//~^ ERROR wrong number of lifetime arguments: expected 1, found 0
// Here, the omitted lifetimes are expanded to distinct things.
same_type(x, y)
}
-error[E0107]: wrong number of lifetime parameters: expected 1, found 0
+error[E0107]: wrong number of lifetime arguments: expected 1, found 0
--> $DIR/unboxed-closure-sugar-region.rs:40:43
|
LL | fn test2(x: &Foo<(isize,),Output=()>, y: &Foo(isize)) {
- | ^^^^^^^^^^ expected 1 lifetime parameter
+ | ^^^^^^^^^^ expected 1 lifetime argument
error: aborting due to previous error
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/unboxed-closure-sugar-wrong-number-number-type-parameters.rs:15:11
+ --> $DIR/unboxed-closure-sugar-wrong-number-number-type-parameters.rs:15:15
|
LL | fn foo(_: Zero())
- | ^^^^^^ expected no type arguments
+ | ^^ unexpected type argument
error[E0220]: associated type `Output` not found for `Zero`
--> $DIR/unboxed-closure-sugar-wrong-number-number-type-parameters.rs:15:15
error[E0244]: wrong number of type arguments: expected 0, found 1
- --> $DIR/unboxed-closure-sugar-wrong-trait.rs:15:8
+ --> $DIR/unboxed-closure-sugar-wrong-trait.rs:15:13
|
LL | fn f<F:Trait(isize) -> isize>(x: F) {}
- | ^^^^^^^^^^^^^^^^^^^^^ expected no type arguments
+ | ^^^^^^^^^^^^^^^^ unexpected type argument
error[E0220]: associated type `Output` not found for `Trait`
--> $DIR/unboxed-closure-sugar-wrong-trait.rs:15:24
-error[E0277]: the trait bound `S: std::ops::Fn<(isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<(isize,)>` closure, found `S`
--> $DIR/unboxed-closures-fnmut-as-fn.rs:38:13
|
LL | let x = call_it(&S, 22);
- | ^^^^^^^ the trait `std::ops::Fn<(isize,)>` is not implemented for `S`
+ | ^^^^^^^ expected an `Fn<(isize,)>` closure, found `S`
|
+ = help: the trait `std::ops::Fn<(isize,)>` is not implemented for `S`
note: required by `call_it`
--> $DIR/unboxed-closures-fnmut-as-fn.rs:33:1
|
-error[E0277]: the trait bound `for<'r> for<'s> unsafe fn(&'s isize) -> isize {square}: std::ops::Fn<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<(&isize,)>` closure, found `for<'r> unsafe fn(&'r isize) -> isize {square}`
--> $DIR/unboxed-closures-unsafe-extern-fn.rs:22:13
|
LL | let x = call_it(&square, 22);
- | ^^^^^^^ the trait `for<'r> std::ops::Fn<(&'r isize,)>` is not implemented for `for<'r> unsafe fn(&'r isize) -> isize {square}`
+ | ^^^^^^^ expected an `Fn<(&isize,)>` closure, found `for<'r> unsafe fn(&'r isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::Fn<(&'r isize,)>` is not implemented for `for<'r> unsafe fn(&'r isize) -> isize {square}`
note: required by `call_it`
--> $DIR/unboxed-closures-unsafe-extern-fn.rs:17:1
|
LL | fn call_it<F:Fn(&isize)->isize>(_: &F, _: isize) -> isize { 0 }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0277]: the trait bound `for<'r> for<'s> unsafe fn(&'s isize) -> isize {square}: std::ops::FnMut<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::FnMut<(&isize,)>` closure, found `for<'r> unsafe fn(&'r isize) -> isize {square}`
--> $DIR/unboxed-closures-unsafe-extern-fn.rs:27:13
|
LL | let y = call_it_mut(&mut square, 22);
- | ^^^^^^^^^^^ the trait `for<'r> std::ops::FnMut<(&'r isize,)>` is not implemented for `for<'r> unsafe fn(&'r isize) -> isize {square}`
+ | ^^^^^^^^^^^ expected an `FnMut<(&isize,)>` closure, found `for<'r> unsafe fn(&'r isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::FnMut<(&'r isize,)>` is not implemented for `for<'r> unsafe fn(&'r isize) -> isize {square}`
note: required by `call_it_mut`
--> $DIR/unboxed-closures-unsafe-extern-fn.rs:18:1
|
LL | fn call_it_mut<F:FnMut(&isize)->isize>(_: &mut F, _: isize) -> isize { 0 }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0277]: the trait bound `for<'r> for<'s> unsafe fn(&'s isize) -> isize {square}: std::ops::FnOnce<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::FnOnce<(&isize,)>` closure, found `for<'r> unsafe fn(&'r isize) -> isize {square}`
--> $DIR/unboxed-closures-unsafe-extern-fn.rs:32:13
|
LL | let z = call_it_once(square, 22);
- | ^^^^^^^^^^^^ the trait `for<'r> std::ops::FnOnce<(&'r isize,)>` is not implemented for `for<'r> unsafe fn(&'r isize) -> isize {square}`
+ | ^^^^^^^^^^^^ expected an `FnOnce<(&isize,)>` closure, found `for<'r> unsafe fn(&'r isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::FnOnce<(&'r isize,)>` is not implemented for `for<'r> unsafe fn(&'r isize) -> isize {square}`
note: required by `call_it_once`
--> $DIR/unboxed-closures-unsafe-extern-fn.rs:19:1
|
-error[E0277]: the trait bound `for<'r> for<'s> extern "C" fn(&'s isize) -> isize {square}: std::ops::Fn<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<(&isize,)>` closure, found `for<'r> extern "C" fn(&'r isize) -> isize {square}`
--> $DIR/unboxed-closures-wrong-abi.rs:22:13
|
LL | let x = call_it(&square, 22);
- | ^^^^^^^ the trait `for<'r> std::ops::Fn<(&'r isize,)>` is not implemented for `for<'r> extern "C" fn(&'r isize) -> isize {square}`
+ | ^^^^^^^ expected an `Fn<(&isize,)>` closure, found `for<'r> extern "C" fn(&'r isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::Fn<(&'r isize,)>` is not implemented for `for<'r> extern "C" fn(&'r isize) -> isize {square}`
note: required by `call_it`
--> $DIR/unboxed-closures-wrong-abi.rs:17:1
|
LL | fn call_it<F:Fn(&isize)->isize>(_: &F, _: isize) -> isize { 0 }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0277]: the trait bound `for<'r> for<'s> extern "C" fn(&'s isize) -> isize {square}: std::ops::FnMut<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::FnMut<(&isize,)>` closure, found `for<'r> extern "C" fn(&'r isize) -> isize {square}`
--> $DIR/unboxed-closures-wrong-abi.rs:27:13
|
LL | let y = call_it_mut(&mut square, 22);
- | ^^^^^^^^^^^ the trait `for<'r> std::ops::FnMut<(&'r isize,)>` is not implemented for `for<'r> extern "C" fn(&'r isize) -> isize {square}`
+ | ^^^^^^^^^^^ expected an `FnMut<(&isize,)>` closure, found `for<'r> extern "C" fn(&'r isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::FnMut<(&'r isize,)>` is not implemented for `for<'r> extern "C" fn(&'r isize) -> isize {square}`
note: required by `call_it_mut`
--> $DIR/unboxed-closures-wrong-abi.rs:18:1
|
LL | fn call_it_mut<F:FnMut(&isize)->isize>(_: &mut F, _: isize) -> isize { 0 }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0277]: the trait bound `for<'r> for<'s> extern "C" fn(&'s isize) -> isize {square}: std::ops::FnOnce<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::FnOnce<(&isize,)>` closure, found `for<'r> extern "C" fn(&'r isize) -> isize {square}`
--> $DIR/unboxed-closures-wrong-abi.rs:32:13
|
LL | let z = call_it_once(square, 22);
- | ^^^^^^^^^^^^ the trait `for<'r> std::ops::FnOnce<(&'r isize,)>` is not implemented for `for<'r> extern "C" fn(&'r isize) -> isize {square}`
+ | ^^^^^^^^^^^^ expected an `FnOnce<(&isize,)>` closure, found `for<'r> extern "C" fn(&'r isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::FnOnce<(&'r isize,)>` is not implemented for `for<'r> extern "C" fn(&'r isize) -> isize {square}`
note: required by `call_it_once`
--> $DIR/unboxed-closures-wrong-abi.rs:19:1
|
-error[E0277]: the trait bound `for<'r> unsafe fn(isize) -> isize {square}: std::ops::Fn<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::Fn<(&isize,)>` closure, found `unsafe fn(isize) -> isize {square}`
--> $DIR/unboxed-closures-wrong-arg-type-extern-fn.rs:23:13
|
LL | let x = call_it(&square, 22);
- | ^^^^^^^ the trait `for<'r> std::ops::Fn<(&'r isize,)>` is not implemented for `unsafe fn(isize) -> isize {square}`
+ | ^^^^^^^ expected an `Fn<(&isize,)>` closure, found `unsafe fn(isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::Fn<(&'r isize,)>` is not implemented for `unsafe fn(isize) -> isize {square}`
note: required by `call_it`
--> $DIR/unboxed-closures-wrong-arg-type-extern-fn.rs:18:1
|
LL | fn call_it<F:Fn(&isize)->isize>(_: &F, _: isize) -> isize { 0 }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0277]: the trait bound `for<'r> unsafe fn(isize) -> isize {square}: std::ops::FnMut<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::FnMut<(&isize,)>` closure, found `unsafe fn(isize) -> isize {square}`
--> $DIR/unboxed-closures-wrong-arg-type-extern-fn.rs:28:13
|
LL | let y = call_it_mut(&mut square, 22);
- | ^^^^^^^^^^^ the trait `for<'r> std::ops::FnMut<(&'r isize,)>` is not implemented for `unsafe fn(isize) -> isize {square}`
+ | ^^^^^^^^^^^ expected an `FnMut<(&isize,)>` closure, found `unsafe fn(isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::FnMut<(&'r isize,)>` is not implemented for `unsafe fn(isize) -> isize {square}`
note: required by `call_it_mut`
--> $DIR/unboxed-closures-wrong-arg-type-extern-fn.rs:19:1
|
LL | fn call_it_mut<F:FnMut(&isize)->isize>(_: &mut F, _: isize) -> isize { 0 }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0277]: the trait bound `for<'r> unsafe fn(isize) -> isize {square}: std::ops::FnOnce<(&'r isize,)>` is not satisfied
+error[E0277]: expected a `std::ops::FnOnce<(&isize,)>` closure, found `unsafe fn(isize) -> isize {square}`
--> $DIR/unboxed-closures-wrong-arg-type-extern-fn.rs:33:13
|
LL | let z = call_it_once(square, 22);
- | ^^^^^^^^^^^^ the trait `for<'r> std::ops::FnOnce<(&'r isize,)>` is not implemented for `unsafe fn(isize) -> isize {square}`
+ | ^^^^^^^^^^^^ expected an `FnOnce<(&isize,)>` closure, found `unsafe fn(isize) -> isize {square}`
|
+ = help: the trait `for<'r> std::ops::FnOnce<(&'r isize,)>` is not implemented for `unsafe fn(isize) -> isize {square}`
note: required by `call_it_once`
--> $DIR/unboxed-closures-wrong-arg-type-extern-fn.rs:20:1
|
// normalize-stderr-test "allocation \d+" -> "allocation N"
// normalize-stderr-test "size \d+" -> "size N"
+union BoolTransmute {
+ val: u8,
+ bl: bool,
+}
+
#[repr(C)]
#[derive(Copy, Clone)]
struct SliceRepr {
bad: BadSliceRepr,
slice: &'static [u8],
str: &'static str,
+ my_str: &'static Str,
}
#[repr(C)]
}
trait Trait {}
+impl Trait for bool {}
+
+struct Str(str);
// OK
const A: &str = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 1 } }.str};
-// should lint
+// bad str
const B: &str = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.str};
-// bad
+//~^ ERROR this constant likely exhibits undefined behavior
+// bad str
const C: &str = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.str};
//~^ ERROR this constant likely exhibits undefined behavior
+// bad str in Str
+const C2: &Str = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.my_str};
+//~^ ERROR this constant likely exhibits undefined behavior
// OK
const A2: &[u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 1 } }.slice};
-// should lint
+// bad slice
const B2: &[u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.slice};
-// bad
-const C2: &[u8] = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.slice};
+//~^ ERROR this constant likely exhibits undefined behavior
+// bad slice
+const C3: &[u8] = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.slice};
//~^ ERROR this constant likely exhibits undefined behavior
-// bad
+// bad trait object
const D: &Trait = unsafe { DynTransmute { repr: DynRepr { ptr: &92, vtable: &3 } }.rust};
//~^ ERROR this constant likely exhibits undefined behavior
-// bad
+// bad trait object
const E: &Trait = unsafe { DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.rust};
//~^ ERROR this constant likely exhibits undefined behavior
-// bad
+// bad trait object
const F: &Trait = unsafe { DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 3 } }.rust};
//~^ ERROR this constant likely exhibits undefined behavior
+// bad data *inside* the trait object
+const G: &Trait = &unsafe { BoolTransmute { val: 3 }.bl };
+//~^ ERROR this constant likely exhibits undefined behavior
+
+// bad data *inside* the slice
+const H: &[bool] = &[unsafe { BoolTransmute { val: 3 }.bl }];
+//~^ ERROR this constant likely exhibits undefined behavior
+
fn main() {
}
error[E0080]: this constant likely exhibits undefined behavior
- --> $DIR/union-ub-fat-ptr.rs:72:1
+ --> $DIR/union-ub-fat-ptr.rs:79:1
+ |
+LL | const B: &str = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.str};
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ memory access at offset N, outside bounds of allocation N which has size N
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/union-ub-fat-ptr.rs:82:1
|
LL | const C: &str = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.str};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered length is not a valid integer
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered fat pointer length is not a valid integer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
error[E0080]: this constant likely exhibits undefined behavior
- --> $DIR/union-ub-fat-ptr.rs:80:1
+ --> $DIR/union-ub-fat-ptr.rs:85:1
|
-LL | const C2: &[u8] = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered length is not a valid integer
+LL | const C2: &Str = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.my_str};
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered fat pointer length is not a valid integer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
error[E0080]: this constant likely exhibits undefined behavior
- --> $DIR/union-ub-fat-ptr.rs:84:1
+ --> $DIR/union-ub-fat-ptr.rs:91:1
+ |
+LL | const B2: &[u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.slice};
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ memory access at offset N, outside bounds of allocation N which has size N
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/union-ub-fat-ptr.rs:94:1
+ |
+LL | const C3: &[u8] = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.slice};
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered fat pointer length is not a valid integer
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/union-ub-fat-ptr.rs:98:1
|
LL | const D: &Trait = unsafe { DynTransmute { repr: DynRepr { ptr: &92, vtable: &3 } }.rust};
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ tried to access memory with alignment N, but alignment N is required
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
error[E0080]: this constant likely exhibits undefined behavior
- --> $DIR/union-ub-fat-ptr.rs:87:1
+ --> $DIR/union-ub-fat-ptr.rs:101:1
|
LL | const E: &Trait = unsafe { DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ memory access at offset N, outside bounds of allocation N which has size N
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ a memory access tried to interpret some bytes as a pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
error[E0080]: this constant likely exhibits undefined behavior
- --> $DIR/union-ub-fat-ptr.rs:90:1
+ --> $DIR/union-ub-fat-ptr.rs:104:1
|
LL | const F: &Trait = unsafe { DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 3 } }.rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered vtable address is not a pointer
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered fat pointer vtable is not a valid pointer
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/union-ub-fat-ptr.rs:108:1
+ |
+LL | const G: &Trait = &unsafe { BoolTransmute { val: 3 }.bl };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>, but expected something in the range 0..=1
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
+
+error[E0080]: this constant likely exhibits undefined behavior
+ --> $DIR/union-ub-fat-ptr.rs:112:1
+ |
+LL | const H: &[bool] = &[unsafe { BoolTransmute { val: 3 }.bl }];
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>[0], but expected something in the range 0..=1
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior
-error: aborting due to 5 previous errors
+error: aborting due to 10 previous errors
For more information about this error, try `rustc --explain E0080`.
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// ignore-arm stdcall isn't suppported
+// ignore-arm stdcall isn't supported
fn baz(f: extern "stdcall" fn(usize, ...)) {
//~^ ERROR: variadic function must have C or cdecl calling convention
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// ignore-arm stdcall isn't suppported
-// ignore-aarch64 stdcall isn't suppported
+// ignore-arm stdcall isn't supported
+// ignore-aarch64 stdcall isn't supported
extern "stdcall" {
fn printf(_: *const u8, ...); //~ ERROR: variadic function must have C or cdecl calling
// Test that we can quantify lifetimes outside a constraint (i.e., including
// the self type) in a where clause. Specifically, test that implementing for a
-// specific lifetime is not enough to satisify the `for<'a> ...` constraint, which
+// specific lifetime is not enough to satisfy the `for<'a> ...` constraint, which
// should require *all* lifetimes.
static X: &'static u32 = &42;
self.clippy_version = self.version("clippy", "x86_64-unknown-linux-gnu");
self.rustfmt_version = self.version("rustfmt", "x86_64-unknown-linux-gnu");
self.llvm_tools_version = self.version("llvm-tools", "x86_64-unknown-linux-gnu");
- self.lldb_version = self.version("lldb", "x86_64-unknown-linux-gnu");
+ // lldb is only built for macOS.
+ self.lldb_version = self.version("lldb", "x86_64-apple-darwin");
self.rust_git_commit_hash = self.git_commit_hash("rust", "x86_64-unknown-linux-gnu");
self.cargo_git_commit_hash = self.git_commit_hash("cargo", "x86_64-unknown-linux-gnu");
-Subproject commit f76ea3ca16ed22dde8ef929db74a4b4df6f2f899
+Subproject commit 813b3b952c07b6b85732c3fbdf3eb74f61a9fa96