3. Run `mingw32_shell.bat` or `mingw64_shell.bat` from wherever you installed
MSYS2 (i.e. `C:\msys`), depending on whether you want 32-bit or 64-bit Rust.
+ (As of recent versions of MSYS2 you have to run `msys2_shell.cmd -mingw32`
+ or `msys2_shell.cmd -mingw64` from the command line instead.)
4. Navigate to Rust's source code, configure and build it:
log graphviz rustc_llvm rustc_back rustc_data_structures\
rustc_const_math
DEPS_rustc_back := std syntax flate log libc
-DEPS_rustc_borrowck := rustc rustc_mir log graphviz syntax
+DEPS_rustc_borrowck := rustc log graphviz syntax rustc_mir
DEPS_rustc_data_structures := std log serialize
DEPS_rustc_driver := arena flate getopts graphviz libc rustc rustc_back rustc_borrowck \
rustc_typeck rustc_mir rustc_resolve log syntax serialize rustc_llvm \
DEPS_rustc_mir := rustc syntax rustc_const_math rustc_const_eval rustc_bitflags
DEPS_rustc_resolve := arena rustc log syntax
DEPS_rustc_platform_intrinsics := std
-DEPS_rustc_plugin := rustc rustc_metadata syntax rustc_mir
+DEPS_rustc_plugin := rustc rustc_metadata syntax
DEPS_rustc_privacy := rustc log syntax
-DEPS_rustc_trans := arena flate getopts graphviz libc rustc rustc_back rustc_mir \
+DEPS_rustc_trans := arena flate getopts graphviz libc rustc rustc_back \
log syntax serialize rustc_llvm rustc_platform_intrinsics \
rustc_const_math rustc_const_eval rustc_incremental
DEPS_rustc_incremental := rbml rustc serialize rustc_data_structures
fn search<P: AsRef<Path>>
(file_path: P, city: &str)
- -> Result<Vec<PopulationCount>, Box<Error+Send+Sync>> {
+ -> Result<Vec<PopulationCount>, Box<Error>> {
let mut found = vec![];
let file = try!(File::open(file_path));
let mut rdr = csv::Reader::from_reader(file);
`Result<T, E>`, the `try!` macro will return early from the function if an
error occurs.
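For reference, that early return can be written out by hand. This is a minimal sketch (the `parse_and_double` helper is hypothetical) of what `try!(expr)` expands to:

```rust
use std::num::ParseIntError;

// What `try!(expr)` expands to, written out by hand: on `Err`, the
// macro makes the enclosing function return early, converting the
// error with `From::from`.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n = match s.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(From::from(e)),
    };
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("twenty").is_err());
}
```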
-There is one big gotcha in this code: we used `Box<Error + Send + Sync>`
-instead of `Box<Error>`. We did this so we could convert a plain string to an
-error type. We need these extra bounds so that we can use the
-[corresponding `From`
-impls](../std/convert/trait.From.html):
+At the end of `search` we also convert a plain string to an error type
+by using the [corresponding `From` impls](../std/convert/trait.From.html):
```rust,ignore
// We are making use of this impl in the code above, since we call `From::from`
// on a `&'static str`.
-impl<'a, 'b> From<&'b str> for Box<Error + Send + Sync + 'a>
+impl<'a> From<&'a str> for Box<Error>
// But this is also useful when you need to allocate a new string for an
// error message, usually with `format!`.
-impl From<String> for Box<Error + Send + Sync>
+impl From<String> for Box<Error>
```
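To see one of those `From` impls in action, here is a small hedged sketch (the `find_offset` helper is hypothetical; note that current Rust spells the trait object `Box<dyn Error>`, where older code wrote `Box<Error>`):

```rust
use std::error::Error;

// The string literal in the error position is converted to a boxed
// error by the `From<&str> for Box<dyn Error>` impl shown above.
fn find_offset(haystack: &str, needle: &str) -> Result<usize, Box<dyn Error>> {
    haystack.find(needle).ok_or_else(|| From::from("needle not found"))
}

fn main() {
    assert_eq!(find_offset("haystack", "stack").unwrap(), 3);
    assert!(find_offset("haystack", "pin").is_err());
}
```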
Since `search` now returns a `Result<T, E>`, `main` should use case analysis
fn search<P: AsRef<Path>>
(file_path: &Option<P>, city: &str)
- -> Result<Vec<PopulationCount>, Box<Error+Send+Sync>> {
+ -> Result<Vec<PopulationCount>, Box<Error>> {
let mut found = vec![];
let input: Box<io::Read> = match *file_path {
None => Box::new(io::stdin()),
`unwrap`. Be warned: if it winds up in someone else's hands, don't be
surprised if they are agitated by poor error messages!
* If you're writing a quick 'n' dirty program and feel ashamed about panicking
- anyway, then use either a `String` or a `Box<Error + Send + Sync>` for your
- error type (the `Box<Error + Send + Sync>` type is because of the
- [available `From` impls](../std/convert/trait.From.html)).
+ anyway, then use either a `String` or a `Box<Error>` for your
+ error type.
* Otherwise, in a program, define your own error types with appropriate
[`From`](../std/convert/trait.From.html)
and
let v = vec!["match_this", "1"];
match &v[..] {
- ["match_this", second] => println!("The second element is {}", second),
+ &["match_this", second] => println!("The second element is {}", second),
_ => {},
}
}
fn is_symmetric(list: &[u32]) -> bool {
match list {
- [] | [_] => true,
- [x, inside.., y] if x == y => is_symmetric(inside),
+ &[] | &[_] => true,
+ &[x, ref inside.., y] if x == y => is_symmetric(inside),
_ => false
}
}
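The `&`-prefixed patterns above can be exercised in a standalone sketch (the `second_after_match` helper is hypothetical): because the match scrutinee is a `&[&str]`, each slice pattern needs the leading `&`.

```rust
// Matching on a borrowed slice: the scrutinee is `&[&str]`, so the
// pattern is written `&[...]`, and `second` binds the element itself.
fn second_after_match<'a>(v: &[&'a str]) -> Option<&'a str> {
    match v {
        &["match_this", second] => Some(second),
        _ => None,
    }
}

fn main() {
    let v = vec!["match_this", "1"];
    assert_eq!(second_after_match(&v[..]), Some("1"));
    assert_eq!(second_after_match(&["other", "1"]), None);
}
```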
% How Safe and Unsafe Interact
-So what's the relationship between Safe and Unsafe Rust? How do they interact?
-
-Rust models the separation between Safe and Unsafe Rust with the `unsafe`
-keyword, which can be thought as a sort of *foreign function interface* (FFI)
-between Safe and Unsafe Rust. This is the magic behind why we can say Safe Rust
-is a safe language: all the scary unsafe bits are relegated exclusively to FFI
-*just like every other safe language*.
-
-However because one language is a subset of the other, the two can be cleanly
-intermixed as long as the boundary between Safe and Unsafe Rust is denoted with
-the `unsafe` keyword. No need to write headers, initialize runtimes, or any of
-that other FFI boiler-plate.
-
-There are several places `unsafe` can appear in Rust today, which can largely be
-grouped into two categories:
-
-* There are unchecked contracts here. To declare you understand this, I require
-you to write `unsafe` elsewhere:
- * On functions, `unsafe` is declaring the function to be unsafe to call.
- Users of the function must check the documentation to determine what this
- means, and then have to write `unsafe` somewhere to identify that they're
- aware of the danger.
- * On trait declarations, `unsafe` is declaring that *implementing* the trait
- is an unsafe operation, as it has contracts that other unsafe code is free
- to trust blindly. (More on this below.)
-
-* I am declaring that I have, to the best of my knowledge, adhered to the
-unchecked contracts:
- * On trait implementations, `unsafe` is declaring that the contract of the
- `unsafe` trait has been upheld.
- * On blocks, `unsafe` is declaring any unsafety from an unsafe
- operation within to be handled, and therefore the parent function is safe.
-
-There is also `#[unsafe_no_drop_flag]`, which is a special case that exists for
-historical reasons and is in the process of being phased out. See the section on
-[drop flags] for details.
-
-Some examples of unsafe functions:
-
-* `slice::get_unchecked` will perform unchecked indexing, allowing memory
- safety to be freely violated.
-* every raw pointer to sized type has intrinsic `offset` method that invokes
- Undefined Behavior if it is not "in bounds" as defined by LLVM.
-* `mem::transmute` reinterprets some value as having the given type,
- bypassing type safety in arbitrary ways. (see [conversions] for details)
-* All FFI functions are `unsafe` because they can do arbitrary things.
- C being an obvious culprit, but generally any language can do something
- that Rust isn't happy about.
+What's the relationship between Safe Rust and Unsafe Rust? How do they
+interact?
+
+The separation between Safe Rust and Unsafe Rust is controlled with the
+`unsafe` keyword, which acts as an interface from one to the other. This is
+why we can say Safe Rust is a safe language: all the unsafe parts are kept
+exclusively behind the boundary.
+
+The `unsafe` keyword has two uses: to declare the existence of contracts the
+compiler can't check, and to declare that the adherence of some code to
+those contracts has been checked by the programmer.
+
+You can use `unsafe` to indicate the existence of unchecked contracts on
+_functions_ and on _trait declarations_. On functions, `unsafe` means that
+users of the function must check that function's documentation to ensure
+they are using it in a way that maintains the contracts the function
+requires. On trait declarations, `unsafe` means that implementors of the
+trait must check the trait documentation to ensure their implementation
+maintains the contracts the trait requires.
+
+You can use `unsafe` on a block to declare that all constraints required
+by an unsafe function within the block have been adhered to, and the code
+can therefore be trusted. You can use `unsafe` on a trait implementation
+to declare that the implementation of that trait has adhered to whatever
+contracts the trait's documentation requires.
+
+There is also the `#[unsafe_no_drop_flag]` attribute, which exists for
+historic reasons and is being phased out. See the section on [drop flags]
+for details.
+
+The standard library has a number of unsafe functions, including:
+
+* `slice::get_unchecked`, which performs unchecked indexing, allowing
+ memory safety to be freely violated.
+* `mem::transmute` reinterprets some value as having a given type, bypassing
+ type safety in arbitrary ways (see [conversions] for details).
+* Every raw pointer to a sized type has an intrinsic `offset` method that
+ invokes Undefined Behavior if the passed offset is not "in bounds" as
+ defined by LLVM.
+* All FFI functions are `unsafe` because the other language can do arbitrary
+ operations that the Rust compiler can't check.
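As a small illustration of the first item, calling `slice::get_unchecked` requires an `unsafe` block; the caller, not the compiler, is responsible for the bounds check:

```rust
fn main() {
    let v = vec![10, 20, 30];
    let i = 1;
    // The caller guarantees `i < v.len()`; the surrounding `unsafe`
    // block records that this contract has been checked.
    assert!(i < v.len());
    let x = unsafe { *v.get_unchecked(i) };
    assert_eq!(x, 20);
}
```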
As of Rust 1.0 there are exactly two unsafe traits:
-* `Send` is a marker trait (it has no actual API) that promises implementors
- are safe to send (move) to another thread.
-* `Sync` is a marker trait that promises that threads can safely share
- implementors through a shared reference.
-
-The need for unsafe traits boils down to the fundamental property of safe code:
-
-**No matter how completely awful Safe code is, it can't cause Undefined
-Behavior.**
-
-This means that Unsafe Rust, **the royal vanguard of Undefined Behavior**, has to be
-*super paranoid* about generic safe code. To be clear, Unsafe Rust is totally free to trust
-specific safe code. Anything else would degenerate into infinite spirals of
-paranoid despair. In particular it's generally regarded as ok to trust the standard library
-to be correct. `std` is effectively an extension of the language, and you
-really just have to trust the language. If `std` fails to uphold the
-guarantees it declares, then it's basically a language bug.
-
-That said, it would be best to minimize *needlessly* relying on properties of
-concrete safe code. Bugs happen! Of course, I must reinforce that this is only
-a concern for Unsafe code. Safe code can blindly trust anyone and everyone
-as far as basic memory-safety is concerned.
-
-On the other hand, safe traits are free to declare arbitrary contracts, but because
-implementing them is safe, unsafe code can't trust those contracts to actually
-be upheld. This is different from the concrete case because *anyone* can
-randomly implement the interface. There is something fundamentally different
-about trusting a particular piece of code to be correct, and trusting *all the
-code that will ever be written* to be correct.
-
-For instance Rust has `PartialOrd` and `Ord` traits to try to differentiate
-between types which can "just" be compared, and those that actually implement a
-total ordering. Pretty much every API that wants to work with data that can be
-compared wants Ord data. For instance, a sorted map like BTreeMap
-*doesn't even make sense* for partially ordered types. If you claim to implement
-Ord for a type, but don't actually provide a proper total ordering, BTreeMap will
-get *really confused* and start making a total mess of itself. Data that is
-inserted may be impossible to find!
-
-But that's okay. BTreeMap is safe, so it guarantees that even if you give it a
-completely garbage Ord implementation, it will still do something *safe*. You
-won't start reading uninitialized or unallocated memory. In fact, BTreeMap
-manages to not actually lose any of your data. When the map is dropped, all the
-destructors will be successfully called! Hooray!
-
-However BTreeMap is implemented using a modest spoonful of Unsafe Rust (most collections
-are). That means that it's not necessarily *trivially true* that a bad Ord
-implementation will make BTreeMap behave safely. BTreeMap must be sure not to rely
-on Ord *where safety is at stake*. Ord is provided by safe code, and safety is not
-safe code's responsibility to uphold.
-
-But wouldn't it be grand if there was some way for Unsafe to trust some trait
-contracts *somewhere*? This is the problem that unsafe traits tackle: by marking
-*the trait itself* as unsafe to implement, unsafe code can trust the implementation
-to uphold the trait's contract. Although the trait implementation may be
-incorrect in arbitrary other ways.
-
-For instance, given a hypothetical UnsafeOrd trait, this is technically a valid
-implementation:
+* `Send` is a marker trait (a trait with no API) that promises implementors are
+ safe to send (move) to another thread.
+* `Sync` is a marker trait that promises threads can safely share implementors
+ through a shared reference.
+
+Much of the Rust standard library also uses Unsafe Rust internally, although
+these implementations are rigorously manually checked, and the Safe Rust
+interfaces provided on top of these implementations can be assumed to be safe.
+
+The need for all of this separation boils down to a single fundamental property
+of Safe Rust:
+
+**No matter what, Safe Rust can't cause Undefined Behavior.**
+
+The design of the safe/unsafe split means that Safe Rust inherently has to
+trust that any Unsafe Rust it touches has been written correctly (meaning
+the Unsafe Rust actually maintains whatever contracts it is supposed to
+maintain). On the other hand, Unsafe Rust has to be very careful about
+trusting Safe Rust.
+
+As an example, Rust has the `PartialOrd` and `Ord` traits to differentiate
+between types which can "just" be compared, and those that provide a total
+ordering (where every value of the type is either equal to, greater than,
+or less than any other value of the same type). The sorted map type
+`BTreeMap` doesn't make sense for partially-ordered types, and so it
+requires that its key type implements the `Ord` trait. However,
+`BTreeMap` has Unsafe Rust code inside of its implementation, and this
+Unsafe Rust code cannot assume that any `Ord` implementation it gets makes
+sense. The unsafe portions of `BTreeMap`'s internals have to be careful to
+maintain all necessary contracts, even if a key type's `Ord` implementation
+does not implement a total ordering.
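To make this concrete, here is a sketch of a deliberately lawless `Ord` (the `Lawless` type is hypothetical): `BTreeMap` may misplace or duplicate keys under it, but it must never cause Undefined Behavior.

```rust
use std::cmp::Ordering;
use std::collections::BTreeMap;

#[derive(PartialEq, Eq)]
struct Lawless(u32);

impl PartialOrd for Lawless {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(Ord::cmp(self, other))
    }
}

// A deliberately broken Ord that claims every value is less than
// every other value. This violates Ord's contract, but only its
// *logical* contract, which safe code is allowed to break.
impl Ord for Lawless {
    fn cmp(&self, _other: &Self) -> Ordering {
        Ordering::Less
    }
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(Lawless(1), "a");
    map.insert(Lawless(1), "b");
    // A lawful Ord would overwrite; this one may keep both entries.
    // Either way, the map stays memory-safe.
    assert!(!map.is_empty());
}
```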
+
+Unsafe Rust cannot automatically trust Safe Rust. When writing Unsafe Rust,
+you must be careful to only rely on specific Safe Rust code, and not make
+assumptions about potential future Safe Rust code providing the same
+guarantees.
+
+This is the problem that `unsafe` traits exist to resolve. The `BTreeMap`
+type could theoretically require that keys implement a new trait called
+`UnsafeOrd`, rather than `Ord`, that might look like this:
```rust
-# use std::cmp::Ordering;
-# struct MyType;
-# unsafe trait UnsafeOrd { fn cmp(&self, other: &Self) -> Ordering; }
-unsafe impl UnsafeOrd for MyType {
- fn cmp(&self, other: &Self) -> Ordering {
- Ordering::Equal
- }
+use std::cmp::Ordering;
+
+unsafe trait UnsafeOrd {
+ fn cmp(&self, other: &Self) -> Ordering;
}
```
-But it's probably not the implementation you want.
-
-Rust has traditionally avoided making traits unsafe because it makes Unsafe
-pervasive, which is not desirable. The reason Send and Sync are unsafe is because thread
-safety is a *fundamental property* that unsafe code cannot possibly hope to defend
-against in the same way it would defend against a bad Ord implementation. The
-only way to possibly defend against thread-unsafety would be to *not use
-threading at all*. Making every load and store atomic isn't even sufficient,
-because it's possible for complex invariants to exist between disjoint locations
-in memory. For instance, the pointer and capacity of a Vec must be in sync.
-
-Even concurrent paradigms that are traditionally regarded as Totally Safe like
-message passing implicitly rely on some notion of thread safety -- are you
-really message-passing if you pass a pointer? Send and Sync therefore require
-some fundamental level of trust that Safe code can't provide, so they must be
-unsafe to implement. To help obviate the pervasive unsafety that this would
-introduce, Send (resp. Sync) is automatically derived for all types composed only
-of Send (resp. Sync) values. 99% of types are Send and Sync, and 99% of those
-never actually say it (the remaining 1% is overwhelmingly synchronization
-primitives).
-
-
-
+Then, a type would use `unsafe` to implement `UnsafeOrd`, indicating that
+its author has ensured the implementation maintains whatever contracts the
+trait expects. In this situation, the Unsafe Rust in the internals of
+`BTreeMap` could trust that the key type's `UnsafeOrd` implementation is
+correct. If it isn't, it's the fault of the unsafe trait implementation
+code, which is consistent with Rust's safety guarantees.
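A sketch of what such an implementation would look like, continuing the hypothetical `UnsafeOrd` trait from above (the `Id` type is also hypothetical):

```rust
use std::cmp::Ordering;

// The hypothetical trait from above: `unsafe` marks it as carrying a
// contract (a true total ordering) that unsafe code may rely on.
unsafe trait UnsafeOrd {
    fn cmp(&self, other: &Self) -> Ordering;
}

struct Id(u32);

// `unsafe impl` is the implementor's pledge that the contract holds;
// delegating to u32's Ord makes it a genuine total ordering.
unsafe impl UnsafeOrd for Id {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0)
    }
}

fn main() {
    assert_eq!(UnsafeOrd::cmp(&Id(1), &Id(2)), Ordering::Less);
    assert_eq!(UnsafeOrd::cmp(&Id(3), &Id(3)), Ordering::Equal);
}
```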
+
+The decision of whether to mark a trait `unsafe` is an API design choice.
+Rust has traditionally avoided marking traits unsafe because it makes Unsafe
+Rust pervasive, which is not desirable. `Send` and `Sync` are marked unsafe
+because thread safety is a *fundamental property* that unsafe code can't
+possibly hope to defend against in the way it could defend against a bad
+`Ord` implementation. The decision of whether to mark your own traits `unsafe`
+depends on the same sort of consideration. If `unsafe` code cannot reasonably
+expect to defend against a bad implementation of the trait, then marking the
+trait `unsafe` is a reasonable choice.
+
+As an aside, while `Send` and `Sync` are `unsafe` traits, they are
+automatically implemented for types when such derivations are provably safe
+to do. `Send` is automatically derived for all types composed only of values
+whose types also implement `Send`. `Sync` is automatically derived for all
+types composed only of values whose types also implement `Sync`.
+
+This is the dance of Safe Rust and Unsafe Rust. It is designed to make using
+Safe Rust as ergonomic as possible, but requires extra effort and care when
+writing Unsafe Rust. The rest of the book is largely a discussion of the sort
+of care that must be taken, and the contracts Unsafe Rust is expected
+to uphold.
[drop flags]: drop-flags.html
[conversions]: conversions.html
+
jemalloc.parent().unwrap().display());
let stem = jemalloc.file_stem().unwrap().to_str().unwrap();
let name = jemalloc.file_name().unwrap().to_str().unwrap();
- let kind = if name.ends_with(".a") {"static"} else {"dylib"};
+ let kind = if name.ends_with(".a") {
+ "static"
+ } else {
+ "dylib"
+ };
println!("cargo:rustc-link-lib={}={}", kind, &stem[3..]);
- return
+ return;
}
let compiler = gcc::Config::new().get_compiler();
let ar = build_helper::cc2ar(compiler.path(), &target);
- let cflags = compiler.args().iter().map(|s| s.to_str().unwrap())
- .collect::<Vec<_>>().join(" ");
+ let cflags = compiler.args()
+ .iter()
+ .map(|s| s.to_str().unwrap())
+ .collect::<Vec<_>>()
+ .join(" ");
let mut stack = src_dir.join("../jemalloc")
- .read_dir().unwrap()
+ .read_dir()
+ .unwrap()
.map(|e| e.unwrap())
.collect::<Vec<_>>();
while let Some(entry) = stack.pop() {
}
let mut cmd = Command::new("sh");
- cmd.arg(src_dir.join("../jemalloc/configure").to_str().unwrap()
+ cmd.arg(src_dir.join("../jemalloc/configure")
+ .to_str()
+ .unwrap()
.replace("C:\\", "/c/")
.replace("\\", "/"))
.current_dir(&build_dir)
run(&mut cmd);
run(Command::new("make")
- .current_dir(&build_dir)
- .arg("build_lib_static")
- .arg("-j").arg(env::var("NUM_JOBS").unwrap()));
+ .current_dir(&build_dir)
+ .arg("build_lib_static")
+ .arg("-j")
+ .arg(env::var("NUM_JOBS").unwrap()));
if target.contains("windows") {
println!("cargo:rustc-link-lib=static=jemalloc");
not(target_env = "musl")),
link(name = "pthread"))]
#[cfg(not(cargobuild))]
-extern {}
+extern "C" {}
// Note that the symbols here are prefixed by default on OSX and Windows (we
// don't explicitly request it), and on Android and DragonFly we explicitly
// request it as unprefixing cause segfaults (mismatches in allocators).
-extern {
+extern "C" {
#[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
target_os = "dragonfly", target_os = "windows"),
link_name = "je_mallocx")]
// are available.
#[no_mangle]
#[cfg(target_os = "android")]
-pub extern fn pthread_atfork(_prefork: *mut u8,
- _postfork_parent: *mut u8,
- _postfork_child: *mut u8) -> i32 {
+pub extern "C" fn pthread_atfork(_prefork: *mut u8,
+ _postfork_parent: *mut u8,
+ _postfork_child: *mut u8)
+ -> i32 {
0
}
use core::{fmt, intrinsics, mem, ptr};
use borrow::Borrow;
-use Bound::{self, Included, Excluded, Unbounded};
+use Bound::{self, Excluded, Included, Unbounded};
-use super::node::{self, NodeRef, Handle, marker};
+use super::node::{self, Handle, NodeRef, marker};
use super::search;
use super::node::InsertResult::*;
#[stable(feature = "rust1", since = "1.0.0")]
pub struct BTreeMap<K, V> {
root: node::Root<K, V>,
- length: usize
+ length: usize,
}
impl<K, V> Drop for BTreeMap<K, V> {
#[unsafe_destructor_blind_to_params]
fn drop(&mut self) {
unsafe {
- for _ in ptr::read(self).into_iter() { }
+ for _ in ptr::read(self).into_iter() {
+ }
}
}
}
impl<K: Clone, V: Clone> Clone for BTreeMap<K, V> {
fn clone(&self) -> BTreeMap<K, V> {
- fn clone_subtree<K: Clone, V: Clone>(
- node: node::NodeRef<marker::Immut, K, V, marker::LeafOrInternal>)
- -> BTreeMap<K, V> {
+ fn clone_subtree<K: Clone, V: Clone>(node: node::NodeRef<marker::Immut,
+ K,
+ V,
+ marker::LeafOrInternal>)
+ -> BTreeMap<K, V> {
match node.force() {
Leaf(leaf) => {
let mut out_tree = BTreeMap {
root: node::Root::new_leaf(),
- length: 0
+ length: 0,
};
{
let mut out_node = match out_tree.root.as_mut().force() {
Leaf(leaf) => leaf,
- Internal(_) => unreachable!()
+ Internal(_) => unreachable!(),
};
let mut in_edge = leaf.first_edge();
}
out_tree
- },
+ }
Internal(internal) => {
let mut out_tree = clone_subtree(internal.first_edge().descend());
fn get(&self, key: &Q) -> Option<&K> {
match search::search_tree(self.root.as_ref(), key) {
Found(handle) => Some(handle.into_kv().0),
- GoDown(_) => None
+ GoDown(_) => None,
}
}
match search::search_tree(self.root.as_mut(), key) {
Found(handle) => {
Some(OccupiedEntry {
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- }.remove_kv().0)
- },
- GoDown(_) => None
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ }
+ .remove_kv()
+ .0)
+ }
+ GoDown(_) => None,
}
}
handle: handle,
length: &mut self.length,
_marker: PhantomData,
- }.insert(());
+ }
+ .insert(());
None
}
}
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Iter<'a, K: 'a, V: 'a> {
range: Range<'a, K, V>,
- length: usize
+ length: usize,
}
/// A mutable iterator over a BTreeMap's entries.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IterMut<'a, K: 'a, V: 'a> {
range: RangeMut<'a, K, V>,
- length: usize
+ length: usize,
}
/// An owning iterator over a BTreeMap's entries.
pub struct IntoIter<K, V> {
front: Handle<NodeRef<marker::Owned, K, V, marker::Leaf>, marker::Edge>,
back: Handle<NodeRef<marker::Owned, K, V, marker::Leaf>, marker::Edge>,
- length: usize
+ length: usize,
}
/// An iterator over a BTreeMap's keys.
/// An iterator over a sub-range of BTreeMap's entries.
pub struct Range<'a, K: 'a, V: 'a> {
front: Handle<NodeRef<marker::Immut<'a>, K, V, marker::Leaf>, marker::Edge>,
- back: Handle<NodeRef<marker::Immut<'a>, K, V, marker::Leaf>, marker::Edge>
+ back: Handle<NodeRef<marker::Immut<'a>, K, V, marker::Leaf>, marker::Edge>,
}
/// A mutable iterator over a sub-range of BTreeMap's entries.
pub enum Entry<'a, K: 'a, V: 'a> {
/// A vacant Entry
#[stable(feature = "rust1", since = "1.0.0")]
- Vacant(
- #[stable(feature = "rust1", since = "1.0.0")] VacantEntry<'a, K, V>
- ),
+ Vacant(#[stable(feature = "rust1", since = "1.0.0")]
+ VacantEntry<'a, K, V>),
/// An occupied Entry
#[stable(feature = "rust1", since = "1.0.0")]
- Occupied(
- #[stable(feature = "rust1", since = "1.0.0")] OccupiedEntry<'a, K, V>
- ),
+ Occupied(#[stable(feature = "rust1", since = "1.0.0")]
+ OccupiedEntry<'a, K, V>),
}
/// A vacant Entry.
/// An occupied Entry.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct OccupiedEntry<'a, K: 'a, V: 'a> {
- handle: Handle<NodeRef<
- marker::Mut<'a>,
- K, V,
- marker::LeafOrInternal
- >, marker::KV>,
+ handle: Handle<NodeRef<marker::Mut<'a>, K, V, marker::LeafOrInternal>, marker::KV>,
length: &'a mut usize,
}
// An iterator for merging two sorted sequences into one
-struct MergeIter<K, V, I: Iterator<Item=(K, V)>> {
+struct MergeIter<K, V, I: Iterator<Item = (K, V)>> {
left: Peekable<I>,
right: Peekable<I>,
}
pub fn new() -> BTreeMap<K, V> {
BTreeMap {
root: node::Root::new_leaf(),
- length: 0
+ length: 0,
}
}
/// assert_eq!(map.get(&2), None);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn get<Q: ?Sized>(&self, key: &Q) -> Option<&V> where K: Borrow<Q>, Q: Ord {
+ pub fn get<Q: ?Sized>(&self, key: &Q) -> Option<&V>
+ where K: Borrow<Q>,
+ Q: Ord
+ {
match search::search_tree(self.root.as_ref(), key) {
Found(handle) => Some(handle.into_kv().1),
- GoDown(_) => None
+ GoDown(_) => None,
}
}
/// assert_eq!(map.contains_key(&2), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn contains_key<Q: ?Sized>(&self, key: &Q) -> bool where K: Borrow<Q>, Q: Ord {
+ pub fn contains_key<Q: ?Sized>(&self, key: &Q) -> bool
+ where K: Borrow<Q>,
+ Q: Ord
+ {
self.get(key).is_some()
}
/// ```
// See `get` for implementation notes, this is basically a copy-paste with mut's added
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn get_mut<Q: ?Sized>(&mut self, key: &Q) -> Option<&mut V> where K: Borrow<Q>, Q: Ord {
+ pub fn get_mut<Q: ?Sized>(&mut self, key: &Q) -> Option<&mut V>
+ where K: Borrow<Q>,
+ Q: Ord
+ {
match search::search_tree(self.root.as_mut(), key) {
Found(handle) => Some(handle.into_kv_mut().1),
- GoDown(_) => None
+ GoDown(_) => None,
}
}
/// assert_eq!(map.remove(&1), None);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn remove<Q: ?Sized>(&mut self, key: &Q) -> Option<V> where K: Borrow<Q>, Q: Ord {
+ pub fn remove<Q: ?Sized>(&mut self, key: &Q) -> Option<V>
+ where K: Borrow<Q>,
+ Q: Ord
+ {
match search::search_tree(self.root.as_mut(), key) {
Found(handle) => {
Some(OccupiedEntry {
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- }.remove())
- },
- GoDown(_) => None
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ }
+ .remove())
+ }
+ GoDown(_) => None,
}
}
min: Bound<&Min>,
max: Bound<&Max>)
-> Range<K, V>
- where K: Borrow<Min> + Borrow<Max>,
+ where K: Borrow<Min> + Borrow<Max>
{
let front = match min {
- Included(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => first_leaf_edge(self.root.as_ref())
+ Included(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => first_leaf_edge(self.root.as_ref()),
};
let back = match max {
- Included(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => last_leaf_edge(self.root.as_ref())
+ Included(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => last_leaf_edge(self.root.as_ref()),
};
Range {
front: front,
- back: back
+ back: back,
}
}
min: Bound<&Min>,
max: Bound<&Max>)
-> RangeMut<K, V>
- where K: Borrow<Min> + Borrow<Max>,
+ where K: Borrow<Min> + Borrow<Max>
{
let root1 = self.root.as_mut();
let root2 = unsafe { ptr::read(&root1) };
let front = match min {
- Included(key) => match search::search_tree(root1, key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(root1, key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => first_leaf_edge(root1)
+ Included(key) => {
+ match search::search_tree(root1, key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(root1, key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => first_leaf_edge(root1),
};
let back = match max {
- Included(key) => match search::search_tree(root2, key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(root2, key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => last_leaf_edge(root2)
+ Included(key) => {
+ match search::search_tree(root2, key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(root2, key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => last_leaf_edge(root2),
};
RangeMut {
front: front,
back: back,
- _marker: PhantomData
+ _marker: PhantomData,
}
}
#[stable(feature = "rust1", since = "1.0.0")]
pub fn entry(&mut self, key: K) -> Entry<K, V> {
match search::search_tree(self.root.as_mut(), &key) {
- Found(handle) => Occupied(OccupiedEntry {
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- }),
- GoDown(handle) => Vacant(VacantEntry {
- key: key,
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- })
+ Found(handle) => {
+ Occupied(OccupiedEntry {
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ })
+ }
+ GoDown(handle) => {
+ Vacant(VacantEntry {
+ key: key,
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ })
+ }
}
}
- fn from_sorted_iter<I: Iterator<Item=(K, V)>>(&mut self, iter: I) {
+ fn from_sorted_iter<I: Iterator<Item = (K, V)>>(&mut self, iter: I) {
let mut cur_node = last_leaf_edge(self.root.as_mut()).into_node();
// Iterate through all key-value pairs, pushing them into nodes at the right level.
for (key, value) in iter {
// Go up again.
test_node = parent.forget_type();
}
- },
+ }
Err(node) => {
// We are at the top, create a new root node and push there.
open_node = node.into_root_mut().push_level();
break;
- },
+ }
}
}
#[unstable(feature = "btree_split_off",
reason = "recently added as part of collections reform 2",
issue = "19986")]
- pub fn split_off<Q: ?Sized + Ord>(&mut self, key: &Q) -> Self where K: Borrow<Q> {
+ pub fn split_off<Q: ?Sized + Ord>(&mut self, key: &Q) -> Self
+ where K: Borrow<Q>
+ {
if self.is_empty() {
return Self::new();
}
let mut split_edge = match search::search_node(left_node, key) {
// key is going to the right tree
Found(handle) => handle.left_edge(),
- GoDown(handle) => handle
+ GoDown(handle) => handle,
};
split_edge.move_suffix(&mut right_node);
left_node = edge.descend();
right_node = node.first_edge().descend();
}
- (Leaf(_), Leaf(_)) => { break; },
- _ => { unreachable!(); }
+ (Leaf(_), Leaf(_)) => {
+ break;
+ }
+ _ => {
+ unreachable!();
+ }
}
}
}
loop {
res += dfs(edge.reborrow().descend());
match edge.right_kv() {
- Ok(right_kv) => { edge = right_kv.right_edge(); },
- Err(_) => { break; }
+ Ok(right_kv) => {
+ edge = right_kv.right_edge();
+ }
+ Err(_) => {
+ break;
+ }
}
}
}
}
impl<'a, K: 'a, V: 'a> ExactSizeIterator for Iter<'a, K, V> {
- fn len(&self) -> usize { self.length }
+ fn len(&self) -> usize {
+ self.length
+ }
}
impl<'a, K, V> Clone for Iter<'a, K, V> {
fn clone(&self) -> Iter<'a, K, V> {
Iter {
range: self.range.clone(),
- length: self.length
+ length: self.length,
}
}
}
}
impl<'a, K: 'a, V: 'a> ExactSizeIterator for IterMut<'a, K, V> {
- fn len(&self) -> usize { self.length }
+ fn len(&self) -> usize {
+ self.length
+ }
}
impl<K, V> IntoIterator for BTreeMap<K, V> {
IntoIter {
front: first_leaf_edge(root1),
back: last_leaf_edge(root2),
- length: len
+ length: len,
}
}
}
impl<K, V> Drop for IntoIter<K, V> {
fn drop(&mut self) {
- for _ in &mut *self { }
+ for _ in &mut *self {
+ }
unsafe {
let leaf_node = ptr::read(&self.front).into_node();
if let Some(first_parent) = leaf_node.deallocate_and_ascend() {
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.front = kv.right_edge();
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
unwrap_unchecked(last_edge.into_node().deallocate_and_ascend())
- }
+ },
};
loop {
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.front = first_leaf_edge(kv.right_edge().descend());
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
cur_handle = unwrap_unchecked(last_edge.into_node().deallocate_and_ascend());
- }
+ },
}
}
}
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.back = kv.left_edge();
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
unwrap_unchecked(last_edge.into_node().deallocate_and_ascend())
- }
+ },
};
loop {
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.back = last_leaf_edge(kv.left_edge().descend());
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
cur_handle = unwrap_unchecked(last_edge.into_node().deallocate_and_ascend());
- }
+ },
}
}
}
}
impl<K, V> ExactSizeIterator for IntoIter<K, V> {
- fn len(&self) -> usize { self.length }
+ fn len(&self) -> usize {
+ self.length
+ }
}
impl<'a, K, V> Iterator for Keys<'a, K, V> {
impl<'a, K, V> Clone for Keys<'a, K, V> {
fn clone(&self) -> Keys<'a, K, V> {
- Keys {
- inner: self.inner.clone()
- }
+ Keys { inner: self.inner.clone() }
}
}
impl<'a, K, V> Clone for Values<'a, K, V> {
fn clone(&self) -> Values<'a, K, V> {
- Values {
- inner: self.inner.clone()
- }
+ Values { inner: self.inner.clone() }
}
}
let ret = kv.into_kv();
self.front = kv.right_edge();
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let ret = kv.into_kv();
self.front = first_leaf_edge(kv.right_edge().descend());
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
let ret = kv.into_kv();
self.back = kv.left_edge();
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let ret = kv.into_kv();
self.back = last_leaf_edge(kv.left_edge().descend());
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
fn clone(&self) -> Range<'a, K, V> {
Range {
front: self.front,
- back: self.back
+ back: self.back,
}
}
}
if self.front == self.back {
None
} else {
- unsafe { Some (self.next_unchecked()) }
+ unsafe { Some(self.next_unchecked()) }
}
}
}
let (k, v) = ptr::read(&kv).into_kv_mut();
self.front = kv.right_edge();
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let (k, v) = ptr::read(&kv).into_kv_mut();
self.front = first_leaf_edge(kv.right_edge().descend());
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
let (k, v) = ptr::read(&kv).into_kv_mut();
self.back = kv.left_edge();
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let (k, v) = ptr::read(&kv).into_kv_mut();
self.back = last_leaf_edge(kv.left_edge().descend());
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
}
impl<K: Ord, V> FromIterator<(K, V)> for BTreeMap<K, V> {
- fn from_iter<T: IntoIterator<Item=(K, V)>>(iter: T) -> BTreeMap<K, V> {
+ fn from_iter<T: IntoIterator<Item = (K, V)>>(iter: T) -> BTreeMap<K, V> {
let mut map = BTreeMap::new();
map.extend(iter);
map
impl<K: Ord, V> Extend<(K, V)> for BTreeMap<K, V> {
#[inline]
- fn extend<T: IntoIterator<Item=(K, V)>>(&mut self, iter: T) {
+ fn extend<T: IntoIterator<Item = (K, V)>>(&mut self, iter: T) {
for (k, v) in iter {
self.insert(k, v);
}
}
impl<'a, K: Ord + Copy, V: Copy> Extend<(&'a K, &'a V)> for BTreeMap<K, V> {
- fn extend<I: IntoIterator<Item=(&'a K, &'a V)>>(&mut self, iter: I) {
+ fn extend<I: IntoIterator<Item = (&'a K, &'a V)>>(&mut self, iter: I) {
self.extend(iter.into_iter().map(|(&key, &value)| (key, value)));
}
}
impl<K: PartialEq, V: PartialEq> PartialEq for BTreeMap<K, V> {
fn eq(&self, other: &BTreeMap<K, V>) -> bool {
- self.len() == other.len() &&
- self.iter().zip(other).all(|(a, b)| a == b)
+ self.len() == other.len() && self.iter().zip(other).all(|(a, b)| a == b)
}
}
}
impl<'a, K: Ord, Q: ?Sized, V> Index<&'a Q> for BTreeMap<K, V>
- where K: Borrow<Q>, Q: Ord
+ where K: Borrow<Q>,
+ Q: Ord
{
type Output = V;
}
}
-fn first_leaf_edge<BorrowType, K, V>(
- mut node: NodeRef<BorrowType,
- K, V,
- marker::LeafOrInternal>
- ) -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
+fn first_leaf_edge<BorrowType, K, V>
+ (mut node: NodeRef<BorrowType, K, V, marker::LeafOrInternal>)
+ -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
loop {
match node.force() {
Leaf(leaf) => return leaf.first_edge(),
}
}
-fn last_leaf_edge<BorrowType, K, V>(
- mut node: NodeRef<BorrowType,
- K, V,
- marker::LeafOrInternal>
- ) -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
+fn last_leaf_edge<BorrowType, K, V>
+ (mut node: NodeRef<BorrowType, K, V, marker::LeafOrInternal>)
+ -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
loop {
match node.force() {
Leaf(leaf) => return leaf.last_edge(),
Iter {
range: Range {
front: first_leaf_edge(self.root.as_ref()),
- back: last_leaf_edge(self.root.as_ref())
+ back: last_leaf_edge(self.root.as_ref()),
},
- length: self.length
+ length: self.length,
}
}
back: last_leaf_edge(root2),
_marker: PhantomData,
},
- length: self.length
+ length: self.length,
}
}
loop {
match cur_parent {
- Ok(parent) => match parent.insert(ins_k, ins_v, ins_edge) {
- Fit(_) => return unsafe { &mut *out_ptr },
- Split(left, k, v, right) => {
- ins_k = k;
- ins_v = v;
- ins_edge = right;
- cur_parent = left.ascend().map_err(|n| n.into_root_mut());
+ Ok(parent) => {
+ match parent.insert(ins_k, ins_v, ins_edge) {
+ Fit(_) => return unsafe { &mut *out_ptr },
+ Split(left, k, v, right) => {
+ ins_k = k;
+ ins_v = v;
+ ins_edge = right;
+ cur_parent = left.ascend().map_err(|n| n.into_root_mut());
+ }
}
- },
+ }
Err(root) => {
root.push_level().push(ins_k, ins_v, ins_edge);
return unsafe { &mut *out_ptr };
Leaf(leaf) => {
let (hole, old_key, old_val) = leaf.remove();
(hole.into_node(), old_key, old_val)
- },
+ }
Internal(mut internal) => {
let key_loc = internal.kv_mut().0 as *mut K;
let val_loc = internal.kv_mut().1 as *mut V;
let (hole, key, val) = to_remove.remove();
- let old_key = unsafe {
- mem::replace(&mut *key_loc, key)
- };
- let old_val = unsafe {
- mem::replace(&mut *val_loc, val)
- };
+ let old_key = unsafe { mem::replace(&mut *key_loc, key) };
+ let old_val = unsafe { mem::replace(&mut *val_loc, val) };
(hole.into_node(), old_key, old_val)
}
match handle_underfull_node(cur_node) {
AtRoot => break,
EmptyParent(_) => unreachable!(),
- Merged(parent) => if parent.len() == 0 {
- // We must be at the root
- parent.into_root_mut().pop_level();
- break;
- } else {
- cur_node = parent.forget_type();
- },
- Stole(_) => break
+ Merged(parent) => {
+ if parent.len() == 0 {
+ // We must be at the root
+ parent.into_root_mut().pop_level();
+ break;
+ } else {
+ cur_node = parent.forget_type();
+ }
+ }
+ Stole(_) => break,
}
}
AtRoot,
EmptyParent(NodeRef<marker::Mut<'a>, K, V, marker::Internal>),
Merged(NodeRef<marker::Mut<'a>, K, V, marker::Internal>),
- Stole(NodeRef<marker::Mut<'a>, K, V, marker::Internal>)
+ Stole(NodeRef<marker::Mut<'a>, K, V, marker::Internal>),
}
-fn handle_underfull_node<'a, K, V>(node: NodeRef<marker::Mut<'a>,
- K, V,
- marker::LeafOrInternal>)
- -> UnderflowResult<'a, K, V> {
+fn handle_underfull_node<'a, K, V>(node: NodeRef<marker::Mut<'a>, K, V, marker::LeafOrInternal>)
+ -> UnderflowResult<'a, K, V> {
let parent = if let Ok(parent) = node.ascend() {
parent
} else {
let (is_left, mut handle) = match parent.left_kv() {
Ok(left) => (true, left),
- Err(parent) => match parent.right_kv() {
- Ok(right) => (false, right),
- Err(parent) => {
- return EmptyParent(parent.into_node());
+ Err(parent) => {
+ match parent.right_kv() {
+ Ok(right) => (false, right),
+ Err(parent) => {
+ return EmptyParent(parent.into_node());
+ }
}
}
};
}
}
-impl<K: Ord, V, I: Iterator<Item=(K, V)>> Iterator for MergeIter<K, V, I> {
+impl<K: Ord, V, I: Iterator<Item = (K, V)>> Iterator for MergeIter<K, V, I> {
type Item = (K, V);
fn next(&mut self) -> Option<(K, V)> {
// Check which element comes first and only advance the corresponding iterator.
// If two keys are equal, take the value from `right`.
match res {
- Ordering::Less => {
- self.left.next()
- },
- Ordering::Greater => {
- self.right.next()
- },
+ Ordering::Less => self.left.next(),
+ Ordering::Greater => self.right.next(),
Ordering::Equal => {
self.left.next();
self.right.next()
- },
+ }
}
}
}
fn test_concat_for_different_types() {
test_concat!("ab", vec![s("a"), s("b")]);
test_concat!("ab", vec!["a", "b"]);
- test_concat!("ab", vec!["a", "b"]);
- test_concat!("ab", vec![s("a"), s("b")]);
}
#[test]
#[test]
fn test_starts_with() {
- assert!(("".starts_with("")));
- assert!(("abc".starts_with("")));
- assert!(("abc".starts_with("a")));
- assert!((!"a".starts_with("abc")));
- assert!((!"".starts_with("abc")));
- assert!((!"ödd".starts_with("-")));
- assert!(("ödd".starts_with("öd")));
+ assert!("".starts_with(""));
+ assert!("abc".starts_with(""));
+ assert!("abc".starts_with("a"));
+ assert!(!"a".starts_with("abc"));
+ assert!(!"".starts_with("abc"));
+ assert!(!"ödd".starts_with("-"));
+ assert!("ödd".starts_with("öd"));
}
#[test]
fn test_ends_with() {
- assert!(("".ends_with("")));
- assert!(("abc".ends_with("")));
- assert!(("abc".ends_with("c")));
- assert!((!"a".ends_with("abc")));
- assert!((!"".ends_with("abc")));
- assert!((!"ddö".ends_with("-")));
- assert!(("ddö".ends_with("dö")));
+ assert!("".ends_with(""));
+ assert!("abc".ends_with(""));
+ assert!("abc".ends_with("c"));
+ assert!(!"a".ends_with("abc"));
+ assert!(!"".ends_with("abc"));
+ assert!(!"ddö".ends_with("-"));
+ assert!("ddö".ends_with("dö"));
}
#[test]
/// A type to emulate dynamic typing.
///
-/// Every type with no non-`'static` references implements `Any`.
+/// Most types implement `Any`. However, any type that contains a non-`'static` reference does not.
/// See the [module-level documentation][mod] for more details.
///
/// [mod]: index.html
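The `'static` bound described above is what makes runtime downcasting sound: only values that borrow nothing non-`'static` can be inspected as `dyn Any`. A minimal sketch (the `describe` helper is hypothetical, not part of the library):

```rust
use std::any::Any;

// Any value with no non-'static references coerces to `&dyn Any`
// and can be downcast back to a concrete type at runtime.
fn describe(value: &dyn Any) -> String {
    if let Some(n) = value.downcast_ref::<i32>() {
        format!("i32: {}", n)
    } else if let Some(s) = value.downcast_ref::<String>() {
        format!("String: {}", s)
    } else {
        String::from("unknown type")
    }
}

fn main() {
    assert_eq!(describe(&5i32), "i32: 5");
    assert_eq!(describe(&String::from("hi")), "String: hi");
    // A type holding a non-'static reference (e.g. a struct borrowing a
    // local) does not implement `Any`, so it cannot be passed here.
}
```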
/// Stores a value into the `bool` if the current value is the same as the `current` value.
///
/// The return value is a result indicating whether the new value was written and containing
- /// the previous value. On success this value is guaranteed to be equal to `new`.
+ /// the previous value. On success this value is guaranteed to be equal to `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory ordering of this
/// operation. The first describes the required ordering if the operation succeeds while the
/// Stores a value into the pointer if the current value is the same as the `current` value.
///
/// The return value is a result indicating whether the new value was written and containing
- /// the previous value. On success this value is guaranteed to be equal to `new`.
+ /// the previous value. On success this value is guaranteed to be equal to `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory ordering of this
/// operation. The first describes the required ordering if the operation succeeds while the
///
/// The return value is a result indicating whether the new value was written and
/// containing the previous value. On success this value is guaranteed to be equal to
- /// `new`.
+ /// `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory ordering of
/// this operation. The first describes the required ordering if the operation succeeds
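The corrected doc comments above say the `Ok` value equals `current` on success; a short sketch of that contract on `AtomicUsize`, assuming the stabilized `compare_exchange` signature:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let a = AtomicUsize::new(5);

    // Succeeds: the stored value equals `current` (5), so 10 is written;
    // the Ok value is the previous value, which equals `current`.
    assert_eq!(a.compare_exchange(5, 10, Ordering::SeqCst, Ordering::SeqCst),
               Ok(5));
    assert_eq!(a.load(Ordering::SeqCst), 10);

    // Fails: the stored value (10) differs from `current` (6); nothing is
    // written and the Err value carries the actual previous value.
    assert_eq!(a.compare_exchange(6, 12, Ordering::SeqCst, Ordering::SeqCst),
               Err(10));
    assert_eq!(a.load(Ordering::SeqCst), 10);
}
```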
extern crate libc;
-use libc::{c_void, size_t, c_int};
+use libc::{c_int, c_void, size_t};
use std::fmt;
use std::ops::Deref;
use std::ptr::Unique;
#[link(name = "miniz", kind = "static")]
#[cfg(not(cargobuild))]
-extern {}
+extern "C" {}
-extern {
+extern "C" {
/// Raw miniz compression function.
fn tdefl_compress_mem_to_heap(psrc_buf: *const c_void,
src_buf_len: size_t,
#[cfg(test)]
mod tests {
#![allow(deprecated)]
- use super::{inflate_bytes, deflate_bytes};
- use std::__rand::{thread_rng, Rng};
+ use super::{deflate_bytes, inflate_bytes};
+ use std::__rand::{Rng, thread_rng};
#[test]
fn test_flate_round_trip() {
return *self.loop_scopes.last().unwrap();
}
- match self.tcx.def_map.borrow().get(&expr.id).map(|d| d.full_def()) {
- Some(Def::Label(loop_id)) => {
+ match self.tcx.expect_def(expr.id) {
+ Def::Label(loop_id) => {
for l in &self.loop_scopes {
if l.loop_id == loop_id {
return *l;
}
impl PathResolution {
+ pub fn new(def: Def) -> PathResolution {
+ PathResolution { base_def: def, depth: 0 }
+ }
+
/// Get the definition, if fully resolved, otherwise panic.
pub fn full_def(&self) -> Def {
if self.depth != 0 {
self.base_def
}
- /// Get the DefId, if fully resolved, otherwise panic.
- pub fn def_id(&self) -> DefId {
- self.full_def().def_id()
- }
-
- pub fn new(base_def: Def,
- depth: usize)
- -> PathResolution {
- PathResolution {
- base_def: base_def,
- depth: depth,
+ pub fn kind_name(&self) -> &'static str {
+ if self.depth != 0 {
+ "associated item"
+ } else {
+ self.base_def.kind_name()
}
}
}
Def::Struct(..) => "struct",
Def::Trait(..) => "trait",
Def::Method(..) => "method",
- Def::Const(..) => "const",
- Def::AssociatedConst(..) => "associated const",
+ Def::Const(..) => "constant",
+ Def::AssociatedConst(..) => "associated constant",
Def::TyParam(..) => "type parameter",
Def::PrimTy(..) => "builtin type",
Def::Local(..) => "local variable",
PatKind::Wild => hir::PatKind::Wild,
PatKind::Ident(ref binding_mode, pth1, ref sub) => {
self.with_parent_def(p.id, |this| {
- match this.resolver.get_resolution(p.id).map(|d| d.full_def()) {
+ match this.resolver.get_resolution(p.id).map(|d| d.base_def) {
// `None` can occur in body-less function signatures
None | Some(Def::Local(..)) => {
hir::PatKind::Binding(this.lower_binding_mode(binding_mode),
position: position,
}
});
- let rename = if path.segments.len() == 1 {
- // Only local variables are renamed
- match self.resolver.get_resolution(e.id).map(|d| d.full_def()) {
- Some(Def::Local(..)) | Some(Def::Upvar(..)) => true,
- _ => false,
- }
- } else {
- false
+ // Only local variables are renamed
+ let rename = match self.resolver.get_resolution(e.id).map(|d| d.base_def) {
+ Some(Def::Local(..)) | Some(Def::Upvar(..)) => true,
+ _ => false,
};
hir::ExprPath(hir_qself, self.lower_path_full(path, rename))
}
}
}
-// This is used because same-named variables in alternative patterns need to
-// use the NodeId of their namesake in the first pattern.
-pub fn pat_id_map(pat: &hir::Pat) -> PatIdMap {
- let mut map = FnvHashMap();
- pat_bindings(pat, |_bm, p_id, _s, path1| {
- map.insert(path1.node, p_id);
- });
- map
-}
-
pub fn pat_is_refutable(dm: &DefMap, pat: &hir::Pat) -> bool {
match pat.node {
PatKind::Lit(_) | PatKind::Range(_, _) | PatKind::QPath(..) => true,
ty_queue.push(&mut_ty.ty);
}
hir::TyPath(ref maybe_qself, ref path) => {
- let a_def = match self.tcx.def_map.borrow().get(&cur_ty.id) {
- None => {
- self.tcx
- .sess
- .fatal(&format!(
- "unbound path {}",
- pprust::path_to_string(path)))
- }
- Some(d) => d.full_def()
- };
- match a_def {
+ match self.tcx.expect_def(cur_ty.id) {
Def::Enum(did) | Def::TyAlias(did) | Def::Struct(did) => {
let generics = self.tcx.lookup_item_type(did).generics;
#![feature(box_syntax)]
#![feature(collections)]
#![feature(const_fn)]
+#![feature(core_intrinsics)]
#![feature(enumset)]
#![feature(iter_arith)]
#![feature(libc)]
}
pub mod mir {
+ mod cache;
pub mod repr;
pub mod tcx;
pub mod visit;
pub mod transform;
+ pub mod traversal;
pub mod mir_map;
}
/// to it.
pub fn ast_ty_to_prim_ty(self, ast_ty: &ast::Ty) -> Option<Ty<'tcx>> {
if let ast::TyPath(None, ref path) = ast_ty.node {
- let def = match self.def_map.borrow().get(&ast_ty.id) {
- None => {
- span_bug!(ast_ty.span, "unbound path {:?}", path)
- }
- Some(d) => d.full_def()
- };
- if let Def::PrimTy(nty) = def {
+ if let Def::PrimTy(nty) = self.expect_def(ast_ty.id) {
Some(self.prim_ty_to_ty(&path.segments, nty))
} else {
None
}
}
- fn lookup_and_handle_definition(&mut self, id: &ast::NodeId) {
+ fn lookup_and_handle_definition(&mut self, id: ast::NodeId) {
use ty::TypeVariants::{TyEnum, TyStruct};
// If `bar` is a trait item, make sure to mark Foo as alive in `Foo::bar`
- self.tcx.tables.borrow().item_substs.get(id)
+ self.tcx.tables.borrow().item_substs.get(&id)
.and_then(|substs| substs.substs.self_ty())
.map(|ty| match ty.sty {
TyEnum(tyid, _) | TyStruct(tyid, _) => self.check_def_id(tyid.did),
_ => (),
});
- self.tcx.def_map.borrow().get(id).map(|def| {
- match def.full_def() {
- Def::Const(_) | Def::AssociatedConst(..) => {
- self.check_def_id(def.def_id());
- }
- _ if self.ignore_non_const_paths => (),
- Def::PrimTy(_) => (),
- Def::SelfTy(..) => (),
- Def::Variant(enum_id, variant_id) => {
- self.check_def_id(enum_id);
- if !self.ignore_variant_stack.contains(&variant_id) {
- self.check_def_id(variant_id);
- }
- }
- _ => {
- self.check_def_id(def.def_id());
+ let def = self.tcx.expect_def(id);
+ match def {
+ Def::Const(_) | Def::AssociatedConst(..) => {
+ self.check_def_id(def.def_id());
+ }
+ _ if self.ignore_non_const_paths => (),
+ Def::PrimTy(_) => (),
+ Def::SelfTy(..) => (),
+ Def::Variant(enum_id, variant_id) => {
+ self.check_def_id(enum_id);
+ if !self.ignore_variant_stack.contains(&variant_id) {
+ self.check_def_id(variant_id);
}
}
- });
+ _ => {
+ self.check_def_id(def.def_id());
+ }
+ }
}
fn lookup_and_handle_method(&mut self, id: ast::NodeId) {
fn handle_field_pattern_match(&mut self, lhs: &hir::Pat,
pats: &[codemap::Spanned<hir::FieldPat>]) {
- let def = self.tcx.def_map.borrow().get(&lhs.id).unwrap().full_def();
- let pat_ty = self.tcx.node_id_to_type(lhs.id);
- let variant = match pat_ty.sty {
- ty::TyStruct(adt, _) | ty::TyEnum(adt, _) => adt.variant_of_def(def),
+ let variant = match self.tcx.node_id_to_type(lhs.id).sty {
+ ty::TyStruct(adt, _) | ty::TyEnum(adt, _) => {
+ adt.variant_of_def(self.tcx.expect_def(lhs.id))
+ }
_ => span_bug!(lhs.span, "non-ADT in struct pattern")
};
for pat in pats {
}
_ if pat_util::pat_is_const(&def_map.borrow(), pat) => {
// it might be the only use of a const
- self.lookup_and_handle_definition(&pat.id)
+ self.lookup_and_handle_definition(pat.id)
}
_ => ()
}
}
fn visit_path(&mut self, path: &hir::Path, id: ast::NodeId) {
- self.lookup_and_handle_definition(&id);
+ self.lookup_and_handle_definition(id);
intravisit::walk_path(self, path);
}
fn visit_path_list_item(&mut self, path: &hir::Path, item: &hir::PathListItem) {
- self.lookup_and_handle_definition(&item.node.id());
+ self.lookup_and_handle_definition(item.node.id());
intravisit::walk_path_list_item(self, path, item);
}
}
self.require_unsafe(expr.span, "use of inline assembly");
}
hir::ExprPath(..) => {
- if let Def::Static(_, true) = self.tcx.resolve_expr(expr) {
+ if let Def::Static(_, true) = self.tcx.expect_def(expr.id) {
self.require_unsafe(expr.span, "use of mutable static");
}
}
debug!("walk_pat cmt_discr={:?} pat={:?}", cmt_discr,
pat);
+ let tcx = &self.tcx();
let mc = &self.mc;
let infcx = self.mc.infcx;
- let def_map = &self.tcx().def_map;
let delegate = &mut self.delegate;
return_if_err!(mc.cat_pattern(cmt_discr.clone(), pat, |mc, cmt_pat, pat| {
match pat.node {
// Each match binding is effectively an assignment to the
// binding being produced.
- let def = def_map.borrow().get(&pat.id).unwrap().full_def();
- if let Ok(binding_cmt) = mc.cat_def(pat.id, pat.span, pat_ty, def) {
+ if let Ok(binding_cmt) = mc.cat_def(pat.id, pat.span, pat_ty,
+ tcx.expect_def(pat.id)) {
delegate.mutate(pat.id, pat.span, binding_cmt, MutateMode::Init);
}
}
}
}
- PatKind::Vec(_, Some(ref slice_pat), _) => {
- // The `slice_pat` here creates a slice into
- // the original vector. This is effectively a
- // borrow of the elements of the vector being
- // matched.
-
- let (slice_cmt, slice_mutbl, slice_r) =
- return_if_err!(mc.cat_slice_pattern(cmt_pat, &slice_pat));
-
- // Note: We declare here that the borrow
- // occurs upon entering the `[...]`
- // pattern. This implies that something like
- // `[a; b]` where `a` is a move is illegal,
- // because the borrow is already in effect.
- // In fact such a move would be safe-ish, but
- // it effectively *requires* that we use the
- // nulling out semantics to indicate when a
- // value has been moved, which we are trying
- // to move away from. Otherwise, how can we
- // indicate that the first element in the
- // vector has been moved? Eventually, we
- // could perhaps modify this rule to permit
- // `[..a, b]` where `b` is a move, because in
- // that case we can adjust the length of the
- // original vec accordingly, but we'd have to
- // make trans do the right thing, and it would
- // only work for `Box<[T]>`s. It seems simpler
- // to just require that people call
- // `vec.pop()` or `vec.unshift()`.
- let slice_bk = ty::BorrowKind::from_mutbl(slice_mutbl);
- delegate.borrow(pat.id, pat.span,
- slice_cmt, slice_r,
- slice_bk, RefBinding);
- }
_ => {}
}
}));
// to the above loop's visit of the bindings that form
// the leaves of the pattern tree structure.
return_if_err!(mc.cat_pattern(cmt_discr, pat, |mc, cmt_pat, pat| {
- let def_map = def_map.borrow();
- let tcx = infcx.tcx;
-
match pat.node {
PatKind::Struct(..) | PatKind::TupleStruct(..) |
PatKind::Path(..) | PatKind::QPath(..) => {
- match def_map.get(&pat.id).map(|d| d.full_def()) {
- Some(Def::Variant(enum_did, variant_did)) => {
+ match tcx.expect_def(pat.id) {
+ Def::Variant(enum_did, variant_did) => {
let downcast_cmt =
if tcx.lookup_adt_def(enum_did).is_univariant() {
cmt_pat
delegate.matched_pat(pat, downcast_cmt, match_mode);
}
- Some(Def::Struct(..)) | Some(Def::TyAlias(..)) => {
+ Def::Struct(..) | Def::TyAlias(..) => {
// A struct (in either the value or type
// namespace; we encounter the former on
// e.g. patterns for unit structs).
delegate.matched_pat(pat, cmt_pat, match_mode);
}
- Some(Def::Const(..)) |
- Some(Def::AssociatedConst(..)) => {
+ Def::Const(..) | Def::AssociatedConst(..) => {
// This is a leaf (i.e. identifier binding
// or constant value to match); thus no
// `matched_pat` call.
impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for ExprVisitor<'a, 'gcx, 'tcx> {
fn visit_expr(&mut self, expr: &hir::Expr) {
if let hir::ExprPath(..) = expr.node {
- match self.infcx.tcx.resolve_expr(expr) {
+ match self.infcx.tcx.expect_def(expr.id) {
Def::Fn(did) if self.def_id_is_transmute(did) => {
let typ = self.infcx.tcx.node_id_to_type(expr.id);
match typ.sty {
match expr.node {
// live nodes required for uses or definitions of variables:
hir::ExprPath(..) => {
- let def = ir.tcx.def_map.borrow().get(&expr.id).unwrap().full_def();
+ let def = ir.tcx.expect_def(expr.id);
debug!("expr {}: path that leads to {:?}", expr.id, def);
if let Def::Local(..) = def {
ir.add_live_node_for_node(expr.id, ExprNode(expr.span));
Some(_) => {
// Refers to a labeled loop. Use the results of resolve
// to find which one
- match self.ir.tcx.def_map.borrow().get(&id).map(|d| d.full_def()) {
- Some(Def::Label(loop_id)) => loop_id,
+ match self.ir.tcx.expect_def(id) {
+ Def::Label(loop_id) => loop_id,
_ => span_bug!(sp, "label on break/loop \
doesn't refer to a loop")
}
fn access_path(&mut self, expr: &Expr, succ: LiveNode, acc: u32)
-> LiveNode {
- match self.ir.tcx.def_map.borrow().get(&expr.id).unwrap().full_def() {
+ match self.ir.tcx.expect_def(expr.id) {
Def::Local(_, nid) => {
let ln = self.live_node(expr.id, expr.span);
if acc != 0 {
fn check_lvalue(&mut self, expr: &Expr) {
match expr.node {
hir::ExprPath(..) => {
- if let Def::Local(_, nid) = self.ir.tcx.def_map.borrow().get(&expr.id)
- .unwrap()
- .full_def() {
+ if let Def::Local(_, nid) = self.ir.tcx.expect_def(expr.id) {
// Assignment to an immutable variable or argument: only legal
// if there is no later assignment. If this local is actually
// mutable, then check for a reassignment to flag the mutability
Ok(deref_interior(InteriorField(PositionalField(0))))
}
- ty::TyArray(_, _) | ty::TySlice(_) | ty::TyStr => {
+ ty::TyArray(_, _) | ty::TySlice(_) => {
// no deref of indexed content without supplying InteriorOffsetKind
if let Some(context) = context {
- Ok(deref_interior(InteriorElement(context, element_kind(t))))
+ Ok(deref_interior(InteriorElement(context, ElementKind::VecElement)))
} else {
Err(())
}
}
hir::ExprPath(..) => {
- let def = self.tcx().def_map.borrow().get(&expr.id).unwrap().full_def();
- self.cat_def(expr.id, expr.span, expr_ty, def)
+ self.cat_def(expr.id, expr.span, expr_ty, self.tcx().expect_def(expr.id))
}
hir::ExprType(ref e, _) => {
let method_call = ty::MethodCall::expr(elt.id());
let method_ty = self.infcx.node_method_ty(method_call);
- let element_ty = match method_ty {
+ let (element_ty, element_kind) = match method_ty {
Some(method_ty) => {
let ref_ty = self.overloaded_method_return_ty(method_ty);
base_cmt = self.cat_rvalue_node(elt.id(), elt.span(), ref_ty);
// FIXME(#20649) -- why are we using the `self_ty` as the element type...?
let self_ty = method_ty.fn_sig().input(0);
- self.tcx().no_late_bound_regions(&self_ty).unwrap()
+ (self.tcx().no_late_bound_regions(&self_ty).unwrap(),
+ ElementKind::OtherElement)
}
None => {
match base_cmt.ty.builtin_index() {
- Some(ty) => ty,
+ Some(ty) => (ty, ElementKind::VecElement),
None => {
return Err(());
}
}
};
- let m = base_cmt.mutbl.inherit();
- let ret = interior(elt, base_cmt.clone(), base_cmt.ty,
- m, context, element_ty);
+ let interior_elem = InteriorElement(context, element_kind);
+ let ret =
+ self.cat_imm_interior(elt, base_cmt.clone(), element_ty, interior_elem);
debug!("cat_index ret {:?}", ret);
return Ok(ret);
-
- fn interior<'tcx, N: ast_node>(elt: &N,
- of_cmt: cmt<'tcx>,
- vec_ty: Ty<'tcx>,
- mutbl: MutabilityCategory,
- context: InteriorOffsetKind,
- element_ty: Ty<'tcx>) -> cmt<'tcx>
- {
- let interior_elem = InteriorElement(context, element_kind(vec_ty));
- Rc::new(cmt_ {
- id:elt.id(),
- span:elt.span(),
- cat:Categorization::Interior(of_cmt, interior_elem),
- mutbl:mutbl,
- ty:element_ty,
- note: NoteNone
- })
- }
- }
-
- // Takes either a vec or a reference to a vec and returns the cmt for the
- // underlying vec.
- fn deref_vec<N:ast_node>(&self,
- elt: &N,
- base_cmt: cmt<'tcx>,
- context: InteriorOffsetKind)
- -> McResult<cmt<'tcx>>
- {
- let ret = match deref_kind(base_cmt.ty, Some(context))? {
- deref_ptr(ptr) => {
- // for unique ptrs, we inherit mutability from the
- // owning reference.
- let m = MutabilityCategory::from_pointer_kind(base_cmt.mutbl, ptr);
-
- // the deref is explicit in the resulting cmt
- Rc::new(cmt_ {
- id:elt.id(),
- span:elt.span(),
- cat:Categorization::Deref(base_cmt.clone(), 0, ptr),
- mutbl:m,
- ty: match base_cmt.ty.builtin_deref(false, ty::NoPreference) {
- Some(mt) => mt.ty,
- None => bug!("Found non-derefable type")
- },
- note: NoteNone
- })
- }
-
- deref_interior(_) => {
- base_cmt
- }
- };
- debug!("deref_vec ret {:?}", ret);
- Ok(ret)
- }
-
- /// Given a pattern P like: `[_, ..Q, _]`, where `vec_cmt` is the cmt for `P`, `slice_pat` is
- /// the pattern `Q`, returns:
- ///
- /// * a cmt for `Q`
- /// * the mutability and region of the slice `Q`
- ///
- /// These last two bits of info happen to be things that borrowck needs.
- pub fn cat_slice_pattern(&self,
- vec_cmt: cmt<'tcx>,
- slice_pat: &hir::Pat)
- -> McResult<(cmt<'tcx>, hir::Mutability, ty::Region)> {
- let slice_ty = self.node_ty(slice_pat.id)?;
- let (slice_mutbl, slice_r) = vec_slice_info(slice_pat, slice_ty);
- let context = InteriorOffsetKind::Pattern;
- let cmt_vec = self.deref_vec(slice_pat, vec_cmt, context)?;
- let cmt_slice = self.cat_index(slice_pat, cmt_vec, context)?;
- return Ok((cmt_slice, slice_mutbl, slice_r));
-
- /// In a pattern like [a, b, ..c], normally `c` has slice type, but if you have [a, b,
- /// ..ref c], then the type of `ref c` will be `&&[]`, so to extract the slice details we
- /// have to recurse through rptrs.
- fn vec_slice_info(pat: &hir::Pat, slice_ty: Ty)
- -> (hir::Mutability, ty::Region) {
- match slice_ty.sty {
- ty::TyRef(r, ref mt) => match mt.ty.sty {
- ty::TySlice(_) => (mt.mutbl, *r),
- _ => vec_slice_info(pat, mt.ty),
- },
-
- _ => {
- span_bug!(pat.span,
- "type of slice pattern is not a slice");
- }
- }
- }
}
pub fn cat_imm_interior<N:ast_node>(&self,
(*op)(self, cmt.clone(), pat);
- let opt_def = if let Some(path_res) = self.tcx().def_map.borrow().get(&pat.id) {
- if path_res.depth != 0 || path_res.base_def == Def::Err {
- // Since patterns can be associated constants
- // which are resolved during typeck, we might have
- // some unresolved patterns reaching this stage
- // without aborting
- return Err(());
- }
- Some(path_res.full_def())
- } else {
- None
- };
+ let opt_def = self.tcx().expect_def_or_none(pat.id);
+ if opt_def == Some(Def::Err) {
+ return Err(());
+ }
// Note: This goes up here (rather than within the PatKind::TupleStruct arm
// alone) because struct patterns can refer to struct types or
PatKind::Vec(ref before, ref slice, ref after) => {
let context = InteriorOffsetKind::Pattern;
- let vec_cmt = self.deref_vec(pat, cmt, context)?;
- let elt_cmt = self.cat_index(pat, vec_cmt, context)?;
+ let elt_cmt = self.cat_index(pat, cmt, context)?;
for before_pat in before {
self.cat_pattern_(elt_cmt.clone(), &before_pat, op)?;
}
if let Some(ref slice_pat) = *slice {
- let slice_ty = self.pat_ty(&slice_pat)?;
- let slice_cmt = self.cat_rvalue_node(pat.id(), pat.span(), slice_ty);
- self.cat_pattern_(slice_cmt, &slice_pat, op)?;
+ self.cat_pattern_(elt_cmt.clone(), &slice_pat, op)?;
}
for after_pat in after {
self.cat_pattern_(elt_cmt.clone(), &after_pat, op)?;
}
}
-fn element_kind(t: Ty) -> ElementKind {
- match t.sty {
- ty::TyRef(_, ty::TypeAndMut{ty, ..}) |
- ty::TyBox(ty) => match ty.sty {
- ty::TySlice(_) => VecElement,
- _ => OtherElement
- },
- ty::TyArray(..) | ty::TySlice(_) => VecElement,
- _ => OtherElement
- }
-}
-
impl fmt::Debug for Upvar {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{:?}/{:?}", self.id, self.kind)
fn visit_expr(&mut self, expr: &hir::Expr) {
match expr.node {
hir::ExprPath(..) => {
- let def = match self.tcx.def_map.borrow().get(&expr.id) {
- Some(d) => d.full_def(),
- None => {
- span_bug!(expr.span, "def ID not in def map?!")
- }
- };
-
+ let def = self.tcx.expect_def(expr.id);
let def_id = def.def_id();
if let Some(node_id) = self.tcx.map.as_local_node_id(def_id) {
if self.def_id_represents_local_inlined_item(def_id) {
// individually as it's possible to have a stable trait with unstable
// items.
hir::ItemImpl(_, _, _, Some(ref t), _, ref impl_items) => {
- let trait_did = tcx.def_map.borrow().get(&t.ref_id).unwrap().def_id();
+ let trait_did = tcx.expect_def(t.ref_id).def_id();
let trait_items = tcx.trait_items(trait_did);
for impl_item in impl_items {
cb: &mut FnMut(DefId, Span,
&Option<&Stability>,
&Option<Deprecation>)) {
- match tcx.def_map.borrow().get(&id).map(|d| d.full_def()) {
+ // Paths in import prefixes may have no resolution.
+ match tcx.expect_def_or_none(id) {
Some(Def::PrimTy(..)) => {}
Some(Def::SelfTy(..)) => {}
Some(def) => {
cb: &mut FnMut(DefId, Span,
&Option<&Stability>,
&Option<Deprecation>)) {
- match tcx.def_map.borrow().get(&item.node.id()).map(|d| d.full_def()) {
- Some(Def::PrimTy(..)) => {}
- Some(def) => {
+ match tcx.expect_def(item.node.id()) {
+ Def::PrimTy(..) => {}
+ def => {
maybe_do_stability_check(tcx, def.def_id(), item.span, cb);
}
- None => {}
}
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use std::cell::{Ref, RefCell};
+use rustc_data_structures::indexed_vec::IndexVec;
+
+use mir::repr::{Mir, BasicBlock};
+
+use rustc_serialize as serialize;
+
+#[derive(Clone)]
+pub struct Cache {
+ predecessors: RefCell<Option<IndexVec<BasicBlock, Vec<BasicBlock>>>>
+}
+
+
+impl serialize::Encodable for Cache {
+ fn encode<S: serialize::Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
+ serialize::Encodable::encode(&(), s)
+ }
+}
+
+impl serialize::Decodable for Cache {
+ fn decode<D: serialize::Decoder>(d: &mut D) -> Result<Self, D::Error> {
+ serialize::Decodable::decode(d).map(|_v: ()| Self::new())
+ }
+}
+
+
+impl Cache {
+ pub fn new() -> Self {
+ Cache {
+ predecessors: RefCell::new(None)
+ }
+ }
+
+ pub fn invalidate(&self) {
+ // FIXME: consider being more fine-grained
+ *self.predecessors.borrow_mut() = None;
+ }
+
+ pub fn predecessors(&self, mir: &Mir) -> Ref<IndexVec<BasicBlock, Vec<BasicBlock>>> {
+ if self.predecessors.borrow().is_none() {
+ *self.predecessors.borrow_mut() = Some(calculate_predecessors(mir));
+ }
+
+ Ref::map(self.predecessors.borrow(), |p| p.as_ref().unwrap())
+ }
+}
+
+fn calculate_predecessors(mir: &Mir) -> IndexVec<BasicBlock, Vec<BasicBlock>> {
+ let mut result = IndexVec::from_elem(vec![], mir.basic_blocks());
+ for (bb, data) in mir.basic_blocks().iter_enumerated() {
+ if let Some(ref term) = data.terminator {
+ for &tgt in term.successors().iter() {
+ result[tgt].push(bb);
+ }
+ }
+ }
+
+ result
+}
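The new `Cache` type above follows a common interior-mutability pattern: a `RefCell<Option<T>>` that is computed lazily on first access and cleared whenever the underlying data is mutably borrowed. A minimal standalone sketch of the same idea, using a cached sum instead of predecessor lists (the names here are illustrative, not compiler API):

```rust
use std::cell::RefCell;

// A lazily computed sum over a vector, invalidated on mutation,
// mirroring the RefCell<Option<...>> pattern in `Cache`.
struct Summed {
    data: Vec<i32>,
    cached_sum: RefCell<Option<i32>>,
}

impl Summed {
    fn new(data: Vec<i32>) -> Self {
        Summed { data: data, cached_sum: RefCell::new(None) }
    }

    // Mutable access must invalidate the cache, as `basic_blocks_mut` does.
    fn data_mut(&mut self) -> &mut Vec<i32> {
        *self.cached_sum.borrow_mut() = None;
        &mut self.data
    }

    // Compute on first use, then serve the cached value afterwards,
    // as `Cache::predecessors` does.
    fn sum(&self) -> i32 {
        if self.cached_sum.borrow().is_none() {
            *self.cached_sum.borrow_mut() = Some(self.data.iter().sum());
        }
        self.cached_sum.borrow().unwrap()
    }
}

fn main() {
    let mut s = Summed::new(vec![1, 2, 3]);
    assert_eq!(s.sum(), 6);   // computed once
    s.data_mut().push(4);     // invalidates the cache
    assert_eq!(s.sum(), 10);  // recomputed
    println!("ok");
}
```

Funneling all mutable access through one method (`basic_blocks_mut` in the real code) is what makes the invalidation reliable.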
use graphviz::IntoCow;
use middle::const_val::ConstVal;
use rustc_const_math::{ConstUsize, ConstInt, ConstMathErr};
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
use hir::def_id::DefId;
use ty::subst::Substs;
use ty::{self, AdtDef, ClosureSubsts, FnOutput, Region, Ty};
use hir::InlineAsm;
use std::ascii;
use std::borrow::{Cow};
+use std::cell::Ref;
use std::fmt::{self, Debug, Formatter, Write};
use std::{iter, u32};
use std::ops::{Index, IndexMut};
use syntax::ast::{self, Name};
use syntax::codemap::Span;
+use super::cache::Cache;
+
+macro_rules! newtype_index {
+ ($name:ident, $debug_name:expr) => (
+ #[derive(Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord,
+ RustcEncodable, RustcDecodable)]
+ pub struct $name(u32);
+
+ impl Idx for $name {
+ fn new(value: usize) -> Self {
+ assert!(value < (u32::MAX) as usize);
+ $name(value as u32)
+ }
+ fn index(self) -> usize {
+ self.0 as usize
+ }
+ }
+
+ impl Debug for $name {
+ fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
+ write!(fmt, "{}{}", $debug_name, self.0)
+ }
+ }
+ )
+}
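The `newtype_index!` macro replaces several hand-written index newtypes (`BasicBlock`, `Field`, `VisibilityScope`, and so on) with one definition. Its essence can be sketched standalone, without the compiler's `Idx` trait or the encoding derives:

```rust
use std::fmt;

// A u32-backed index newtype, as generated by `newtype_index!`.
// Simplified sketch: the real macro also implements the `Idx` trait
// and derives RustcEncodable/RustcDecodable.
macro_rules! newtype_index {
    ($name:ident, $debug_name:expr) => {
        #[derive(Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]
        pub struct $name(u32);

        impl $name {
            pub fn new(value: usize) -> Self {
                // u32 storage halves the size of index-heavy structures.
                assert!(value < u32::MAX as usize);
                $name(value as u32)
            }
            pub fn index(self) -> usize {
                self.0 as usize
            }
        }

        impl fmt::Debug for $name {
            fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
                write!(fmt, "{}{}", $debug_name, self.0)
            }
        }
    };
}

newtype_index!(BasicBlock, "bb");

fn main() {
    let bb = BasicBlock::new(3);
    assert_eq!(bb.index(), 3);
    assert_eq!(format!("{:?}", bb), "bb3");
    println!("ok");
}
```

Paired with `IndexVec<BasicBlock, ...>`, the newtype makes it a type error to index one kind of vector with another kind of index.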
+
/// Lowered representation of a single function.
#[derive(Clone, RustcEncodable, RustcDecodable)]
pub struct Mir<'tcx> {
/// List of basic blocks. References to basic block use a newtyped index type `BasicBlock`
/// that indexes into this vector.
- pub basic_blocks: Vec<BasicBlockData<'tcx>>,
+ basic_blocks: IndexVec<BasicBlock, BasicBlockData<'tcx>>,
/// List of visibility (lexical) scopes; these are referenced by statements
/// and used (eventually) for debuginfo. Indexed by a `VisibilityScope`.
- pub visibility_scopes: Vec<VisibilityScopeData>,
+ pub visibility_scopes: IndexVec<VisibilityScope, VisibilityScopeData>,
/// Rvalues promoted from this function, such as borrows of constants.
/// Each of them is the Mir of a constant with the fn's type parameters
/// in scope, but no vars or args and a separate set of temps.
- pub promoted: Vec<Mir<'tcx>>,
+ pub promoted: IndexVec<Promoted, Mir<'tcx>>,
/// Return type of the function.
pub return_ty: FnOutput<'tcx>,
/// Variables: these are stack slots corresponding to user variables. They may be
/// assigned many times.
- pub var_decls: Vec<VarDecl<'tcx>>,
+ pub var_decls: IndexVec<Var, VarDecl<'tcx>>,
/// Args: these are stack slots corresponding to the input arguments.
- pub arg_decls: Vec<ArgDecl<'tcx>>,
+ pub arg_decls: IndexVec<Arg, ArgDecl<'tcx>>,
/// Temp declarations: stack slots that for temporaries created by
/// the compiler. These are assigned once, but they are not SSA
/// values in that it is possible to borrow them and mutate them
/// through the resulting reference.
- pub temp_decls: Vec<TempDecl<'tcx>>,
+ pub temp_decls: IndexVec<Temp, TempDecl<'tcx>>,
/// Names and capture modes of all the closure upvars, assuming
/// the first argument is either the closure or a reference to it.
/// A span representing this MIR, for error reporting
pub span: Span,
+
+ /// A cache for various calculations
+ cache: Cache
}
/// where execution begins
pub const START_BLOCK: BasicBlock = BasicBlock(0);
impl<'tcx> Mir<'tcx> {
- pub fn all_basic_blocks(&self) -> Vec<BasicBlock> {
- (0..self.basic_blocks.len())
- .map(|i| BasicBlock::new(i))
- .collect()
+ pub fn new(basic_blocks: IndexVec<BasicBlock, BasicBlockData<'tcx>>,
+ visibility_scopes: IndexVec<VisibilityScope, VisibilityScopeData>,
+ promoted: IndexVec<Promoted, Mir<'tcx>>,
+ return_ty: FnOutput<'tcx>,
+ var_decls: IndexVec<Var, VarDecl<'tcx>>,
+ arg_decls: IndexVec<Arg, ArgDecl<'tcx>>,
+ temp_decls: IndexVec<Temp, TempDecl<'tcx>>,
+ upvar_decls: Vec<UpvarDecl>,
+ span: Span) -> Self
+ {
+ Mir {
+ basic_blocks: basic_blocks,
+ visibility_scopes: visibility_scopes,
+ promoted: promoted,
+ return_ty: return_ty,
+ var_decls: var_decls,
+ arg_decls: arg_decls,
+ temp_decls: temp_decls,
+ upvar_decls: upvar_decls,
+ span: span,
+ cache: Cache::new()
+ }
}
- pub fn basic_block_data(&self, bb: BasicBlock) -> &BasicBlockData<'tcx> {
- &self.basic_blocks[bb.index()]
+ #[inline]
+ pub fn basic_blocks(&self) -> &IndexVec<BasicBlock, BasicBlockData<'tcx>> {
+ &self.basic_blocks
}
- pub fn basic_block_data_mut(&mut self, bb: BasicBlock) -> &mut BasicBlockData<'tcx> {
- &mut self.basic_blocks[bb.index()]
+ #[inline]
+ pub fn basic_blocks_mut(&mut self) -> &mut IndexVec<BasicBlock, BasicBlockData<'tcx>> {
+ self.cache.invalidate();
+ &mut self.basic_blocks
+ }
+
+ #[inline]
+ pub fn predecessors(&self) -> Ref<IndexVec<BasicBlock, Vec<BasicBlock>>> {
+ self.cache.predecessors(self)
+ }
+
+ #[inline]
+ pub fn predecessors_for(&self, bb: BasicBlock) -> Ref<Vec<BasicBlock>> {
+ Ref::map(self.predecessors(), |p| &p[bb])
}
}
#[inline]
fn index(&self, index: BasicBlock) -> &BasicBlockData<'tcx> {
- self.basic_block_data(index)
+ &self.basic_blocks()[index]
}
}
impl<'tcx> IndexMut<BasicBlock> for Mir<'tcx> {
#[inline]
fn index_mut(&mut self, index: BasicBlock) -> &mut BasicBlockData<'tcx> {
- self.basic_block_data_mut(index)
+ &mut self.basic_blocks_mut()[index]
}
}
///////////////////////////////////////////////////////////////////////////
// BasicBlock
-/// The index of a particular basic block. The index is into the `basic_blocks`
-/// list of the `Mir`.
-///
-/// (We use a `u32` internally just to save memory.)
-#[derive(Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord,
- RustcEncodable, RustcDecodable)]
-pub struct BasicBlock(u32);
-
-impl BasicBlock {
- pub fn new(index: usize) -> BasicBlock {
- assert!(index < (u32::MAX as usize));
- BasicBlock(index as u32)
- }
-
- /// Extract the index.
- pub fn index(self) -> usize {
- self.0 as usize
- }
-}
-
-impl Debug for BasicBlock {
- fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
- write!(fmt, "bb{}", self.0)
- }
-}
+newtype_index!(BasicBlock, "bb");
///////////////////////////////////////////////////////////////////////////
// BasicBlockData and Terminator
/// have been filled in by now. This should occur at most once.
Return,
+ /// Indicates a terminator that can never be reached.
+ Unreachable,
+
/// Drop the Lvalue
Drop {
location: Lvalue<'tcx>,
SwitchInt { targets: ref b, .. } => b[..].into_cow(),
Resume => (&[]).into_cow(),
Return => (&[]).into_cow(),
+ Unreachable => (&[]).into_cow(),
Call { destination: Some((_, t)), cleanup: Some(c), .. } => vec![t, c].into_cow(),
Call { destination: Some((_, ref t)), cleanup: None, .. } =>
slice::ref_slice(t).into_cow(),
SwitchInt { targets: ref mut b, .. } => b.iter_mut().collect(),
Resume => Vec::new(),
Return => Vec::new(),
+ Unreachable => Vec::new(),
Call { destination: Some((_, ref mut t)), cleanup: Some(ref mut c), .. } => vec![t, c],
Call { destination: Some((_, ref mut t)), cleanup: None, .. } => vec![t],
Call { destination: None, cleanup: Some(ref mut c), .. } => vec![c],
SwitchInt { discr: ref lv, .. } => write!(fmt, "switchInt({:?})", lv),
Return => write!(fmt, "return"),
Resume => write!(fmt, "resume"),
+ Unreachable => write!(fmt, "unreachable"),
Drop { ref location, .. } => write!(fmt, "drop({:?})", location),
DropAndReplace { ref location, ref value, .. } =>
write!(fmt, "replace({:?} <- {:?})", location, value),
pub fn fmt_successor_labels(&self) -> Vec<Cow<'static, str>> {
use self::TerminatorKind::*;
match *self {
- Return | Resume => vec![],
+ Return | Resume | Unreachable => vec![],
Goto { .. } => vec!["".into()],
If { .. } => vec!["true".into(), "false".into()],
Switch { ref adt_def, .. } => {
///////////////////////////////////////////////////////////////////////////
// Lvalues
+newtype_index!(Var, "var");
+newtype_index!(Temp, "tmp");
+newtype_index!(Arg, "arg");
+
/// A path to a value; something that can be evaluated without
/// changing or disturbing program state.
#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable)]
pub enum Lvalue<'tcx> {
/// local variable declared by the user
- Var(u32),
+ Var(Var),
/// temporary introduced during lowering into MIR
- Temp(u32),
+ Temp(Temp),
/// formal parameter of the function; note that these are NOT the
/// bindings that the user declares, which are vars
- Arg(u32),
+ Arg(Arg),
/// static or static mut variable
Static(DefId),
from_end: bool,
},
+ /// These indices are generated by slice patterns.
+ ///
+ /// slice[from:-to] in Python terms.
+ Subslice {
+ from: u32,
+ to: u32,
+ },
+
/// "Downcast" to a variant of an ADT. Currently, we only introduce
/// this for ADTs with more than one variant. It may be better to
/// just introduce it always, or always for enums.
/// and the index is an operand.
pub type LvalueElem<'tcx> = ProjectionElem<'tcx, Operand<'tcx>>;
-/// Index into the list of fields found in a `VariantDef`
-#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)]
-pub struct Field(u32);
-
-impl Field {
- pub fn new(value: usize) -> Field {
- assert!(value < (u32::MAX) as usize);
- Field(value as u32)
- }
-
- pub fn index(self) -> usize {
- self.0 as usize
- }
-}
+newtype_index!(Field, "field");
impl<'tcx> Lvalue<'tcx> {
pub fn field(self, f: Field, ty: Ty<'tcx>) -> Lvalue<'tcx> {
use self::Lvalue::*;
match *self {
- Var(id) =>
- write!(fmt, "var{:?}", id),
- Arg(id) =>
- write!(fmt, "arg{:?}", id),
- Temp(id) =>
- write!(fmt, "tmp{:?}", id),
+ Var(id) => write!(fmt, "{:?}", id),
+ Arg(id) => write!(fmt, "{:?}", id),
+ Temp(id) => write!(fmt, "{:?}", id),
Static(def_id) =>
write!(fmt, "{}", ty::tls::with(|tcx| tcx.item_path_str(def_id))),
ReturnPointer =>
write!(fmt, "{:?}[{:?} of {:?}]", data.base, offset, min_length),
ProjectionElem::ConstantIndex { offset, min_length, from_end: true } =>
write!(fmt, "{:?}[-{:?} of {:?}]", data.base, offset, min_length),
+            ProjectionElem::Subslice { from, to } if to == 0 =>
+                write!(fmt, "{:?}[{:?}:]", data.base, from),
+ ProjectionElem::Subslice { from, to } if from == 0 =>
+ write!(fmt, "{:?}[:-{:?}]", data.base, to),
+ ProjectionElem::Subslice { from, to } =>
+ write!(fmt, "{:?}[{:?}:-{:?}]", data.base,
+ from, to),
+
},
}
}
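`Subslice { from, to }` names the range `from .. len - to` of the base sequence, i.e. `slice[from:-to]` in Python's negative-index notation. A quick sketch of that arithmetic on an ordinary slice (the helper name is illustrative, not compiler API):

```rust
// Interpret a MIR-style Subslice projection on a plain slice:
// `from` elements dropped from the front, `to` from the back.
fn subslice<T>(base: &[T], from: usize, to: usize) -> &[T] {
    &base[from..base.len() - to]
}

fn main() {
    let xs = [10, 20, 30, 40, 50];
    // A slice pattern like [a, b, rest.., z] yields from = 2, to = 1.
    assert_eq!(subslice(&xs, 2, 1), &[30, 40]);
    // from = 0, to = 0 is the whole slice.
    assert_eq!(subslice(&xs, 0, 0), &xs[..]);
    println!("ok");
}
```

This also matches the `LvalueTy` computation later in the patch, where an array of length `size` projects to one of length `size - from - to`.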
///////////////////////////////////////////////////////////////////////////
// Scopes
-impl Index<VisibilityScope> for Vec<VisibilityScopeData> {
- type Output = VisibilityScopeData;
-
- #[inline]
- fn index(&self, index: VisibilityScope) -> &VisibilityScopeData {
- &self[index.index()]
- }
-}
-
-impl IndexMut<VisibilityScope> for Vec<VisibilityScopeData> {
- #[inline]
- fn index_mut(&mut self, index: VisibilityScope) -> &mut VisibilityScopeData {
- &mut self[index.index()]
- }
-}
-
-#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, RustcEncodable, RustcDecodable)]
-pub struct VisibilityScope(u32);
-
-/// The visibility scope all arguments go into.
-pub const ARGUMENT_VISIBILITY_SCOPE: VisibilityScope = VisibilityScope(0);
-
-impl VisibilityScope {
- pub fn new(index: usize) -> VisibilityScope {
- assert!(index < (u32::MAX as usize));
- VisibilityScope(index as u32)
- }
-
- pub fn index(self) -> usize {
- self.0 as usize
- }
-}
+newtype_index!(VisibilityScope, "scope");
+
+/// The visibility scope all arguments go into.
+pub const ARGUMENT_VISIBILITY_SCOPE: VisibilityScope = VisibilityScope(0);
#[derive(Clone, Debug, RustcEncodable, RustcDecodable)]
pub struct VisibilityScopeData {
/// away after type-checking and before lowering.
Aggregate(AggregateKind<'tcx>, Vec<Operand<'tcx>>),
- /// Generates a slice of the form `&input[from_start..L-from_end]`
- /// where `L` is the length of the slice. This is only created by
- /// slice pattern matching, so e.g. a pattern of the form `[x, y,
- /// .., z]` might create a slice with `from_start=2` and
- /// `from_end=1`.
- Slice {
- input: Lvalue<'tcx>,
- from_start: usize,
- from_end: usize,
- },
-
InlineAsm {
asm: InlineAsm,
outputs: Vec<Lvalue<'tcx>>,
InlineAsm { ref asm, ref outputs, ref inputs } => {
write!(fmt, "asm!({:?} : {:?} : {:?})", asm, outputs, inputs)
}
- Slice { ref input, from_start, from_end } =>
- write!(fmt, "{:?}[{:?}..-{:?}]", input, from_start, from_end),
Ref(_, borrow_kind, ref lv) => {
let kind_str = match borrow_kind {
}
}
+newtype_index!(Promoted, "promoted");
+
#[derive(Clone, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)]
pub enum Literal<'tcx> {
Item {
},
Promoted {
// Index into the `promoted` vector of `Mir`.
- index: usize
+ index: Promoted
},
}
fmt_const_val(fmt, value)
}
Promoted { index } => {
- write!(fmt, "promoted{}", index)
+ write!(fmt, "{:?}", index)
}
}
}
LvalueTy::Ty {
ty: self.to_ty(tcx).builtin_index().unwrap()
},
+ ProjectionElem::Subslice { from, to } => {
+ let ty = self.to_ty(tcx);
+ LvalueTy::Ty {
+ ty: match ty.sty {
+ ty::TyArray(inner, size) => {
+ tcx.mk_array(inner, size-(from as usize)-(to as usize))
+ }
+ ty::TySlice(..) => ty,
+ _ => {
+ bug!("cannot subslice non-array type: `{:?}`", self)
+ }
+ }
+ }
+ }
ProjectionElem::Downcast(adt_def1, index) =>
match self.to_ty(tcx).sty {
ty::TyEnum(adt_def, substs) => {
{
match *lvalue {
Lvalue::Var(index) =>
- LvalueTy::Ty { ty: self.var_decls[index as usize].ty },
+ LvalueTy::Ty { ty: self.var_decls[index].ty },
Lvalue::Temp(index) =>
- LvalueTy::Ty { ty: self.temp_decls[index as usize].ty },
+ LvalueTy::Ty { ty: self.temp_decls[index].ty },
Lvalue::Arg(index) =>
- LvalueTy::Ty { ty: self.arg_decls[index as usize].ty },
+ LvalueTy::Ty { ty: self.arg_decls[index].ty },
Lvalue::Static(def_id) =>
LvalueTy::Ty { ty: tcx.lookup_item_type(def_id).ty },
Lvalue::ReturnPointer =>
}
}
}
- Rvalue::Slice { .. } => None,
Rvalue::InlineAsm { .. } => None
}
}
use ty::TyCtxt;
use syntax::ast::NodeId;
+use std::fmt;
+
/// Where a specific Mir comes from.
#[derive(Debug, Copy, Clone)]
pub enum MirSource {
/// Various information about pass.
pub trait Pass {
- // fn name() for printouts of various sorts?
// fn should_run(Session) to check if pass should run?
fn dep_node(&self, def_id: DefId) -> DepNode<DefId> {
DepNode::MirPass(def_id)
}
+ fn name(&self) -> &str {
+ unsafe { ::std::intrinsics::type_name::<Self>() }
+ }
+ fn disambiguator<'a>(&'a self) -> Option<Box<fmt::Display+'a>> { None }
}
/// A pass which inspects the whole MirMap.
pub trait MirMapPass<'tcx>: Pass {
- fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>, map: &mut MirMap<'tcx>);
+ fn run_pass<'a>(
+ &mut self,
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ map: &mut MirMap<'tcx>,
+ hooks: &mut [Box<for<'s> MirPassHook<'s>>]);
+}
+
+pub trait MirPassHook<'tcx>: Pass {
+ fn on_mir_pass<'a>(
+ &mut self,
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ src: MirSource,
+ mir: &Mir<'tcx>,
+ pass: &Pass,
+ is_after: bool
+ );
}
/// A pass which inspects Mir of functions in isolation.
}
impl<'tcx, T: MirPass<'tcx>> MirMapPass<'tcx> for T {
- fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>, map: &mut MirMap<'tcx>) {
+ fn run_pass<'a>(&mut self,
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ map: &mut MirMap<'tcx>,
+ hooks: &mut [Box<for<'s> MirPassHook<'s>>])
+ {
for (&id, mir) in &mut map.map {
let def_id = tcx.map.local_def_id(id);
let _task = tcx.dep_graph.in_task(self.dep_node(def_id));
let src = MirSource::from_node(tcx, id);
+
+ for hook in &mut *hooks {
+ hook.on_mir_pass(tcx, src, mir, self, false);
+ }
MirPass::run_pass(self, tcx, src, mir);
+ for hook in &mut *hooks {
+ hook.on_mir_pass(tcx, src, mir, self, true);
+ }
for (i, mir) in mir.promoted.iter_mut().enumerate() {
+ for hook in &mut *hooks {
+ hook.on_mir_pass(tcx, src, mir, self, false);
+ }
self.run_pass_on_promoted(tcx, id, i, mir);
+ for hook in &mut *hooks {
+ hook.on_mir_pass(tcx, src, mir, self, true);
+ }
}
}
}
/// A manager for MIR passes.
pub struct Passes {
passes: Vec<Box<for<'tcx> MirMapPass<'tcx>>>,
+ pass_hooks: Vec<Box<for<'tcx> MirPassHook<'tcx>>>,
plugin_passes: Vec<Box<for<'tcx> MirMapPass<'tcx>>>
}
pub fn new() -> Passes {
let passes = Passes {
passes: Vec::new(),
+ pass_hooks: Vec::new(),
plugin_passes: Vec::new()
};
passes
pub fn run_passes(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>, map: &mut MirMap<'tcx>) {
for pass in &mut self.plugin_passes {
- pass.run_pass(tcx, map);
+ pass.run_pass(tcx, map, &mut self.pass_hooks);
}
for pass in &mut self.passes {
- pass.run_pass(tcx, map);
+ pass.run_pass(tcx, map, &mut self.pass_hooks);
}
}
pub fn push_pass(&mut self, pass: Box<for<'b> MirMapPass<'b>>) {
self.passes.push(pass);
}
+
+ /// Pushes a pass hook.
+ pub fn push_hook(&mut self, hook: Box<for<'b> MirPassHook<'b>>) {
+ self.pass_hooks.push(hook);
+ }
}
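The hook mechanism brackets every pass with an `is_after = false` call before the work and an `is_after = true` call afterwards, which is what lets a driver dump MIR before and after each transformation. The calling pattern reduces to a small sketch (names here are illustrative, not the compiler's types):

```rust
// The bracketing pattern used by `run_pass`: notify every hook
// before and after the actual pass body.
trait Hook {
    fn on_pass(&mut self, pass_name: &str, is_after: bool);
}

struct Logger {
    events: Vec<String>,
}

impl Hook for Logger {
    fn on_pass(&mut self, pass_name: &str, is_after: bool) {
        let when = if is_after { "after" } else { "before" };
        self.events.push(format!("{} {}", when, pass_name));
    }
}

fn run_pass(name: &str, body: impl FnOnce(), hooks: &mut [&mut dyn Hook]) {
    for h in hooks.iter_mut() {
        h.on_pass(name, false);
    }
    body(); // the actual transformation would run here
    for h in hooks.iter_mut() {
        h.on_pass(name, true);
    }
}

fn main() {
    let mut log = Logger { events: vec![] };
    {
        let mut hooks: Vec<&mut dyn Hook> = vec![&mut log];
        run_pass("simplify-cfg", || {}, &mut hooks);
    }
    assert_eq!(log.events, vec!["before simplify-cfg", "after simplify-cfg"]);
    println!("ok");
}
```

Keeping the hooks in a slice of trait objects, as the patch does, means passes never need to know which observers are attached.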
/// Copies the plugin passes.
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use std::vec;
+
+use rustc_data_structures::bitvec::BitVector;
+use rustc_data_structures::indexed_vec::Idx;
+
+use super::repr::*;
+
+/// Preorder traversal of a graph.
+///
+/// Preorder traversal is when each node is visited before any of its
+/// successors.
+///
+/// ```text
+///
+/// A
+/// / \
+/// / \
+/// B C
+/// \ /
+/// \ /
+/// D
+/// ```
+///
+/// A preorder traversal of this graph is either `A B D C` or `A C D B`.
+#[derive(Clone)]
+pub struct Preorder<'a, 'tcx: 'a> {
+ mir: &'a Mir<'tcx>,
+ visited: BitVector,
+ worklist: Vec<BasicBlock>,
+}
+
+impl<'a, 'tcx> Preorder<'a, 'tcx> {
+ pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> Preorder<'a, 'tcx> {
+ let worklist = vec![root];
+
+ Preorder {
+ mir: mir,
+ visited: BitVector::new(mir.basic_blocks().len()),
+ worklist: worklist
+ }
+ }
+}
+
+pub fn preorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> Preorder<'a, 'tcx> {
+ Preorder::new(mir, START_BLOCK)
+}
+
+impl<'a, 'tcx> Iterator for Preorder<'a, 'tcx> {
+ type Item = (BasicBlock, &'a BasicBlockData<'tcx>);
+
+ fn next(&mut self) -> Option<(BasicBlock, &'a BasicBlockData<'tcx>)> {
+ while let Some(idx) = self.worklist.pop() {
+ if !self.visited.insert(idx.index()) {
+ continue;
+ }
+
+ let data = &self.mir[idx];
+
+ if let Some(ref term) = data.terminator {
+ for &succ in term.successors().iter() {
+ self.worklist.push(succ);
+ }
+ }
+
+ return Some((idx, data));
+ }
+
+ None
+ }
+}
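The iterator above is the standard worklist formulation of iterative DFS preorder: pop a node, skip it if already seen, otherwise yield it and push its successors. The same algorithm on the diamond graph from the doc comment, assuming a plain adjacency-list representation:

```rust
// Iterative preorder over an adjacency list, mirroring `Preorder::next`.
fn preorder(succ: &[Vec<usize>], root: usize) -> Vec<usize> {
    let mut visited = vec![false; succ.len()];
    let mut worklist = vec![root];
    let mut order = vec![];
    while let Some(n) = worklist.pop() {
        if visited[n] {
            continue; // like the `BitVector::insert` check above
        }
        visited[n] = true;
        order.push(n);
        for &s in &succ[n] {
            worklist.push(s);
        }
    }
    order
}

fn main() {
    // The diamond: A=0 -> {B=1, C=2}, both -> D=3.
    let g = vec![vec![1, 2], vec![3], vec![3], vec![]];
    let order = preorder(&g, 0);
    assert_eq!(order[0], 0); // the root always comes first
    // With a LIFO worklist this yields `A C D B`, one of the two
    // valid preorders from the doc comment.
    assert_eq!(order, vec![0, 2, 3, 1]);
    println!("ok");
}
```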
+
+/// Postorder traversal of a graph.
+///
+/// Postorder traversal is when each node is visited after all of its
+/// successors, except when a successor is only reachable via a back-edge.
+///
+///
+/// ```text
+///
+/// A
+/// / \
+/// / \
+/// B C
+/// \ /
+/// \ /
+/// D
+/// ```
+///
+/// A postorder traversal of this graph is `D B C A` or `D C B A`.
+pub struct Postorder<'a, 'tcx: 'a> {
+ mir: &'a Mir<'tcx>,
+ visited: BitVector,
+ visit_stack: Vec<(BasicBlock, vec::IntoIter<BasicBlock>)>
+}
+
+impl<'a, 'tcx> Postorder<'a, 'tcx> {
+ pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> Postorder<'a, 'tcx> {
+ let mut po = Postorder {
+ mir: mir,
+ visited: BitVector::new(mir.basic_blocks().len()),
+ visit_stack: Vec::new()
+ };
+
+
+ let data = &po.mir[root];
+
+ if let Some(ref term) = data.terminator {
+ po.visited.insert(root.index());
+
+ let succs = term.successors().into_owned().into_iter();
+
+ po.visit_stack.push((root, succs));
+ po.traverse_successor();
+ }
+
+ po
+ }
+
+ fn traverse_successor(&mut self) {
+        // This is quite a complex loop, for two reasons: the borrow checker
+        // rules out the more obvious formulations, and what is actually going
+        // on deserves a careful walkthrough.
+ //
+ // It does the actual traversal of the graph, while the `next` method on the iterator
+ // just pops off of the stack. `visit_stack` is a stack containing pairs of nodes and
+        // iterators over the successors of those nodes. Each iteration attempts to get the next
+ // node from the top of the stack, then pushes that node and an iterator over the
+ // successors to the top of the stack. This loop only grows `visit_stack`, stopping when
+ // we reach a child that has no children that we haven't already visited.
+ //
+ // For a graph that looks like this:
+ //
+ // A
+ // / \
+ // / \
+ // B C
+ // | |
+ // | |
+ // D |
+ // \ /
+ // \ /
+ // E
+ //
+ // The state of the stack starts out with just the root node (`A` in this case);
+ // [(A, [B, C])]
+ //
+        // When the first call to `traverse_successor` happens, the following happens:
+ //
+ // [(B, [D]), // `B` taken from the successors of `A`, pushed to the
+ // // top of the stack along with the successors of `B`
+ // (A, [C])]
+ //
+ // [(D, [E]), // `D` taken from successors of `B`, pushed to stack
+ // (B, []),
+ // (A, [C])]
+ //
+ // [(E, []), // `E` taken from successors of `D`, pushed to stack
+ // (D, []),
+ // (B, []),
+ // (A, [C])]
+ //
+ // Now that the top of the stack has no successors we can traverse, each item will
+        // be popped off during iteration until we get back to `A`. This yields [E, D, B].
+ //
+ // When we yield `B` and call `traverse_successor`, we push `C` to the stack, but
+ // since we've already visited `E`, that child isn't added to the stack. The last
+ // two iterations yield `C` and finally `A` for a final traversal of [E, D, B, C, A]
+ loop {
+ let bb = if let Some(&mut (_, ref mut iter)) = self.visit_stack.last_mut() {
+ if let Some(bb) = iter.next() {
+ bb
+ } else {
+ break;
+ }
+ } else {
+ break;
+ };
+
+ if self.visited.insert(bb.index()) {
+ if let Some(ref term) = self.mir[bb].terminator {
+ let succs = term.successors().into_owned().into_iter();
+ self.visit_stack.push((bb, succs));
+ }
+ }
+ }
+ }
+}
+
+pub fn postorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> Postorder<'a, 'tcx> {
+ Postorder::new(mir, START_BLOCK)
+}
+
+impl<'a, 'tcx> Iterator for Postorder<'a, 'tcx> {
+ type Item = (BasicBlock, &'a BasicBlockData<'tcx>);
+
+ fn next(&mut self) -> Option<(BasicBlock, &'a BasicBlockData<'tcx>)> {
+ let next = self.visit_stack.pop();
+ if next.is_some() {
+ self.traverse_successor();
+ }
+
+ next.map(|(bb, _)| (bb, &self.mir[bb]))
+ }
+}
+
+/// Reverse postorder traversal of a graph
+///
+/// Reverse postorder is the reverse order of a postorder traversal.
+/// This is different from a preorder traversal and represents a natural
+/// linearisation of control-flow.
+///
+/// ```text
+///
+/// A
+/// / \
+/// / \
+/// B C
+/// \ /
+/// \ /
+/// D
+/// ```
+///
+/// A reverse postorder traversal of this graph is either `A B C D` or `A C B D`.
+/// Note that for a graph containing no loops (i.e. a DAG), this is equivalent to
+/// a topological sort.
+///
+/// Construction of a `ReversePostorder` traversal requires doing a full
+/// postorder traversal of the graph, therefore this traversal should be
+/// constructed as few times as possible. Use the `reset` method to be able
+/// to re-use the traversal.
+#[derive(Clone)]
+pub struct ReversePostorder<'a, 'tcx: 'a> {
+ mir: &'a Mir<'tcx>,
+ blocks: Vec<BasicBlock>,
+ idx: usize
+}
+
+impl<'a, 'tcx> ReversePostorder<'a, 'tcx> {
+ pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> ReversePostorder<'a, 'tcx> {
+        let blocks: Vec<_> = Postorder::new(mir, root).map(|(bb, _)| bb).collect();
+
+ let len = blocks.len();
+
+ ReversePostorder {
+ mir: mir,
+ blocks: blocks,
+ idx: len
+ }
+ }
+
+ pub fn reset(&mut self) {
+ self.idx = self.blocks.len();
+ }
+}
+
+
+pub fn reverse_postorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> ReversePostorder<'a, 'tcx> {
+ ReversePostorder::new(mir, START_BLOCK)
+}
+
+impl<'a, 'tcx> Iterator for ReversePostorder<'a, 'tcx> {
+ type Item = (BasicBlock, &'a BasicBlockData<'tcx>);
+
+ fn next(&mut self) -> Option<(BasicBlock, &'a BasicBlockData<'tcx>)> {
+ if self.idx == 0 { return None; }
+ self.idx -= 1;
+
+ self.blocks.get(self.idx).map(|&bb| (bb, &self.mir[bb]))
+ }
+}
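`ReversePostorder` does a full postorder walk up front and then simply iterates the collected vector backwards; for a loop-free graph the result is a topological order, which is why RPO is the natural visit order for forward dataflow. The core idea in a compact recursive sketch (the compiler version uses an explicit stack instead, to avoid recursion depth limits):

```rust
// Recursive postorder; reversing it gives reverse postorder (RPO).
fn postorder(succ: &[Vec<usize>], n: usize,
             visited: &mut Vec<bool>, out: &mut Vec<usize>) {
    visited[n] = true;
    for &s in &succ[n] {
        if !visited[s] {
            postorder(succ, s, visited, out);
        }
    }
    out.push(n); // emitted only after all reachable successors
}

fn main() {
    // The diamond: A=0 -> {B=1, C=2}, both -> D=3.
    let g = vec![vec![1, 2], vec![3], vec![3], vec![]];
    let mut visited = vec![false; g.len()];
    let mut po = vec![];
    postorder(&g, 0, &mut visited, &mut po);
    assert_eq!(po, vec![3, 1, 2, 0]); // D B C A
    let rpo: Vec<_> = po.iter().rev().cloned().collect();
    assert_eq!(rpo, vec![0, 2, 1, 3]); // A C B D: a topological order
    println!("ok");
}
```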
use mir::repr::*;
use rustc_const_math::ConstUsize;
use rustc_data_structures::tuple_slice::TupleSlice;
+use rustc_data_structures::indexed_vec::Idx;
use syntax::codemap::Span;
// # The MIR Visitor
fn super_mir(&mut self,
mir: & $($mutability)* Mir<'tcx>) {
- let Mir {
- ref $($mutability)* basic_blocks,
- ref $($mutability)* visibility_scopes,
- promoted: _, // Visited by passes separately.
- ref $($mutability)* return_ty,
- ref $($mutability)* var_decls,
- ref $($mutability)* arg_decls,
- ref $($mutability)* temp_decls,
- upvar_decls: _,
- ref $($mutability)* span,
- } = *mir;
-
- for (index, data) in basic_blocks.into_iter().enumerate() {
+ for index in 0..mir.basic_blocks().len() {
let block = BasicBlock::new(index);
- self.visit_basic_block_data(block, data);
+ self.visit_basic_block_data(block, &$($mutability)* mir[block]);
}
- for scope in visibility_scopes {
+ for scope in &$($mutability)* mir.visibility_scopes {
self.visit_visibility_scope_data(scope);
}
- self.visit_fn_output(return_ty);
+ self.visit_fn_output(&$($mutability)* mir.return_ty);
- for var_decl in var_decls {
+ for var_decl in &$($mutability)* mir.var_decls {
self.visit_var_decl(var_decl);
}
- for arg_decl in arg_decls {
+ for arg_decl in &$($mutability)* mir.arg_decls {
self.visit_arg_decl(arg_decl);
}
- for temp_decl in temp_decls {
+ for temp_decl in &$($mutability)* mir.temp_decls {
self.visit_temp_decl(temp_decl);
}
- self.visit_span(span);
+ self.visit_span(&$($mutability)* mir.span);
}
fn super_basic_block_data(&mut self,
}
TerminatorKind::Resume |
- TerminatorKind::Return => {
+ TerminatorKind::Return |
+ TerminatorKind::Unreachable => {
}
TerminatorKind::Drop { ref $($mutability)* location,
}
}
- Rvalue::Slice { ref $($mutability)* input,
- from_start,
- from_end } => {
- self.visit_lvalue(input, LvalueContext::Slice {
- from_start: from_start,
- from_end: from_end,
- });
- }
-
Rvalue::InlineAsm { ref $($mutability)* outputs,
ref $($mutability)* inputs,
asm: _ } => {
match *proj {
ProjectionElem::Deref => {
}
+ ProjectionElem::Subslice { from: _, to: _ } => {
+ }
ProjectionElem::Field(_field, ref $($mutability)* ty) => {
self.visit_ty(ty);
}
use traits;
use ty::{self, Ty, TyCtxt, TypeFoldable};
+use util::common::slice_pat;
+
use syntax::ast::{FloatTy, IntTy, UintTy};
use syntax::attr;
use syntax::codemap::DUMMY_SP;
let mut dl = TargetDataLayout::default();
for spec in sess.target.target.data_layout.split("-") {
- match &spec.split(":").collect::<Vec<_>>()[..] {
- ["e"] => dl.endian = Endian::Little,
- ["E"] => dl.endian = Endian::Big,
- ["a", a..] => dl.aggregate_align = align(a, "a"),
- ["f32", a..] => dl.f32_align = align(a, "f32"),
- ["f64", a..] => dl.f64_align = align(a, "f64"),
- [p @ "p", s, a..] | [p @ "p0", s, a..] => {
+ match slice_pat(&&spec.split(":").collect::<Vec<_>>()[..]) {
+ &["e"] => dl.endian = Endian::Little,
+ &["E"] => dl.endian = Endian::Big,
+ &["a", ref a..] => dl.aggregate_align = align(a, "a"),
+ &["f32", ref a..] => dl.f32_align = align(a, "f32"),
+ &["f64", ref a..] => dl.f64_align = align(a, "f64"),
+ &[p @ "p", s, ref a..] | &[p @ "p0", s, ref a..] => {
dl.pointer_size = size(s, p);
dl.pointer_align = align(a, p);
}
- [s, a..] if s.starts_with("i") => {
+ &[s, ref a..] if s.starts_with("i") => {
let ty_align = match s[1..].parse::<u64>() {
Ok(1) => &mut dl.i8_align,
Ok(8) => &mut dl.i8_align,
};
*ty_align = align(a, s);
}
- [s, a..] if s.starts_with("v") => {
+ &[s, ref a..] if s.starts_with("v") => {
let v_size = size(&s[1..], "v");
let a = align(a, s);
if let Some(v) = dl.vector_align.iter_mut().find(|v| v.0 == v_size) {
use hir::map as ast_map;
use middle;
use middle::cstore::{self, LOCAL_CRATE};
-use hir::def::{self, Def, ExportMap};
+use hir::def::{Def, PathResolution, ExportMap};
use hir::def_id::DefId;
use middle::lang_items::{FnTraitLangItem, FnMutTraitLangItem, FnOnceTraitLangItem};
use middle::region::{CodeExtent, ROOT_CODE_EXTENT};
match *visibility {
hir::Public => Visibility::Public,
hir::Visibility::Crate => Visibility::Restricted(ast::CRATE_NODE_ID),
- hir::Visibility::Restricted { id, .. } => match tcx.def_map.borrow().get(&id) {
- Some(resolution) => Visibility::Restricted({
- tcx.map.as_local_node_id(resolution.base_def.def_id()).unwrap()
- }),
+ hir::Visibility::Restricted { id, .. } => match tcx.expect_def(id) {
// If there is no resolution, `resolve` will have already reported an error, so
// assume that the visibility is public to avoid reporting more privacy errors.
- None => Visibility::Public,
+ Def::Err => Visibility::Public,
+ def => Visibility::Restricted(tcx.map.as_local_node_id(def.def_id()).unwrap()),
},
hir::Inherited => Visibility::Restricted(tcx.map.get_module_parent(id)),
}
}
}
- pub fn resolve_expr(self, expr: &hir::Expr) -> Def {
- match self.def_map.borrow().get(&expr.id) {
- Some(def) => def.full_def(),
- None => {
- span_bug!(expr.span, "no def-map entry for expr {}", expr.id);
- }
- }
- }
-
pub fn expr_is_lval(self, expr: &hir::Expr) -> bool {
match expr.node {
hir::ExprPath(..) => {
- // We can't use resolve_expr here, as this needs to run on broken
- // programs. We don't need to through - associated items are all
- // rvalues.
- match self.def_map.borrow().get(&expr.id) {
- Some(&def::PathResolution {
- base_def: Def::Static(..), ..
- }) | Some(&def::PathResolution {
- base_def: Def::Upvar(..), ..
- }) | Some(&def::PathResolution {
- base_def: Def::Local(..), ..
- }) => {
- true
- }
- Some(&def::PathResolution { base_def: Def::Err, .. })=> true,
- Some(..) => false,
- None => span_bug!(expr.span, "no def for path {}", expr.id)
+ // This function can be used during type checking when not all paths are
+ // fully resolved. Partially resolved paths in expressions can only legally
+ // refer to associated items, which are always rvalues.
+ match self.expect_resolution(expr.id).base_def {
+ Def::Local(..) | Def::Upvar(..) | Def::Static(..) | Def::Err => true,
+ _ => false,
}
}
}
}
- pub fn trait_ref_to_def_id(self, tr: &hir::TraitRef) -> DefId {
- self.def_map.borrow().get(&tr.ref_id).expect("no def-map entry for trait").def_id()
+ /// Returns a path resolution for node id if it exists; panics otherwise.
+ pub fn expect_resolution(self, id: NodeId) -> PathResolution {
+ *self.def_map.borrow().get(&id).expect("no def-map entry for node id")
+ }
+
+ /// Returns a fully resolved definition for node id if it exists; panics otherwise.
+ pub fn expect_def(self, id: NodeId) -> Def {
+ self.expect_resolution(id).full_def()
+ }
+
+ /// Returns a fully resolved definition for node id if it exists, or `None` if no
+ /// definition exists; panics on partial resolutions to catch errors.
+ pub fn expect_def_or_none(self, id: NodeId) -> Option<Def> {
+ self.def_map.borrow().get(&id).map(|resolution| resolution.full_def())
}
pub fn def_key(self, id: DefId) -> ast_map::DefKey {
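The `expect_resolution`/`expect_def` accessors above centralize the "look up or panic" pattern that was previously open-coded at each call site. A generic sketch of that pattern, assuming a plain `HashMap` and string definitions in place of the real def-map types:

```rust
use std::collections::HashMap;

struct DefMap {
    map: HashMap<u32, &'static str>,
}

impl DefMap {
    /// Returns the definition for `id` if it exists; panics otherwise —
    /// mirroring the `expect_*` naming convention used above.
    fn expect_def(&self, id: u32) -> &'static str {
        *self.map.get(&id).expect("no def-map entry for node id")
    }

    /// Returns the definition, or `None` if no entry exists.
    fn def_or_none(&self, id: u32) -> Option<&'static str> {
        self.map.get(&id).copied()
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(1, "Def::Local");
    let defs = DefMap { map };
    assert_eq!(defs.expect_def(1), "Def::Local");
    assert_eq!(defs.def_or_none(2), None);
}
```

Callers that can tolerate a missing entry use the `_or_none` variant; everything else fails fast with a uniform message instead of an ad-hoc `span_bug!` at each site.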
pub fn path2cstr(p: &Path) -> CString {
CString::new(p.to_str().unwrap()).unwrap()
}
+
+// FIXME(stage0): remove this
+// HACK: this is needed because the interpretation of slice
+// patterns changed between stage0 and now.
+#[cfg(stage0)]
+pub fn slice_pat<'a, 'b, T>(t: &'a &'b [T]) -> &'a &'b [T] {
+ t
+}
+#[cfg(not(stage0))]
+pub fn slice_pat<'a, 'b, T>(t: &'a &'b [T]) -> &'b [T] {
+ *t
+}
#[cfg(test)]
#[allow(non_upper_case_globals)]
mod tests {
- use std::hash::{Hasher, Hash, SipHasher};
- use std::option::Option::{Some, None};
+ use std::hash::{Hash, Hasher, SipHasher};
+ use std::option::Option::{None, Some};
bitflags! {
#[doc = "> The first principle is that you must not fool yourself — and"]
syntax = { path = "../libsyntax" }
graphviz = { path = "../libgraphviz" }
rustc = { path = "../librustc" }
+rustc_data_structures = { path = "../librustc_data_structures" }
rustc_mir = { path = "../librustc_mir" }
ProjectionElem::Field(f.clone(), ty.clone()),
ProjectionElem::Index(ref i) =>
ProjectionElem::Index(i.lift()),
+ ProjectionElem::Subslice {from, to} =>
+ ProjectionElem::Subslice { from: from, to: to },
ProjectionElem::ConstantIndex {offset,min_length,from_end} =>
ProjectionElem::ConstantIndex {
offset: offset,
use syntax::ast::NodeId;
use rustc::mir::repr::{BasicBlock, Mir};
+use rustc_data_structures::indexed_vec::Idx;
use dot;
use dot::IntoCow;
use super::super::MoveDataParamEnv;
use super::super::MirBorrowckCtxtPreDataflow;
use bitslice::bits_to_string;
-use indexed_set::{Idx, IdxSet};
+use indexed_set::IdxSet;
use super::{BitDenotation, DataflowState};
impl<O: BitDenotation> DataflowState<O> {
pub struct Edge { source: BasicBlock, index: usize }
fn outgoing(mir: &Mir, bb: BasicBlock) -> Vec<Edge> {
- let succ_len = mir.basic_block_data(bb).terminator().successors().len();
+ let succ_len = mir[bb].terminator().successors().len();
(0..succ_len).map(|index| Edge { source: bb, index: index}).collect()
}
type Node = Node;
type Edge = Edge;
fn nodes(&self) -> dot::Nodes<Node> {
- self.mbcx.mir().all_basic_blocks().into_cow()
+ self.mbcx.mir()
+ .basic_blocks()
+ .indices()
+ .collect::<Vec<_>>()
+ .into_cow()
}
fn edges(&self) -> dot::Edges<Edge> {
let mir = self.mbcx.mir();
- let blocks = mir.all_basic_blocks();
// base initial capacity on assumption every block has at
// least one outgoing edge (Which should be true for all
// blocks but one, the exit-block).
- let mut edges = Vec::with_capacity(blocks.len());
- for bb in blocks {
+ let mut edges = Vec::with_capacity(mir.basic_blocks().len());
+ for bb in mir.basic_blocks().indices() {
let outgoing = outgoing(mir, bb);
edges.extend(outgoing.into_iter());
}
fn target(&self, edge: &Edge) -> Node {
let mir = self.mbcx.mir();
- mir.basic_block_data(edge.source).terminator().successors()[edge.index]
+ mir[edge.source].terminator().successors()[edge.index]
}
}
use rustc::ty::TyCtxt;
use rustc::mir::repr::{self, Mir};
+use rustc_data_structures::indexed_vec::Idx;
use super::super::gather_moves::{Location};
use super::super::gather_moves::{MoveOutIndex, MovePathIndex};
use bitslice::BitSlice; // adds set_bit/get_bit to &[usize] bitvector rep.
use bitslice::{BitwiseOperator};
-use indexed_set::{Idx, IdxSet};
+use indexed_set::IdxSet;
// Dataflow analyses are built upon some interpretation of the
// bitvectors attached to each basic block, represented via a
bb: repr::BasicBlock,
idx: usize) {
let (tcx, mir, move_data) = (self.tcx, self.mir, &ctxt.move_data);
- let stmt = &mir.basic_block_data(bb).statements[idx];
+ let stmt = &mir[bb].statements[idx];
let loc_map = &move_data.loc_map;
let path_map = &move_data.path_map;
let rev_lookup = &move_data.rev_lookup;
move_data,
move_path_index,
|mpi| for moi in &path_map[mpi] {
- assert!(moi.idx() < bits_per_block);
+ assert!(moi.index() < bits_per_block);
sets.kill_set.add(&moi);
});
}
statements_len: usize)
{
let (mir, move_data) = (self.mir, &ctxt.move_data);
- let term = mir.basic_block_data(bb).terminator.as_ref().unwrap();
+ let term = mir[bb].terminator();
let loc_map = &move_data.loc_map;
let loc = Location { block: bb, index: statements_len };
debug!("terminator {:?} at loc {:?} moves out of move_indexes {:?}",
term, loc, &loc_map[loc]);
let bits_per_block = self.bits_per_block(ctxt);
for move_index in &loc_map[loc] {
- assert!(move_index.idx() < bits_per_block);
+ assert!(move_index.index() < bits_per_block);
zero_to_one(sets.gen_set.words_mut(), *move_index);
}
}
move_data,
move_path_index,
|mpi| for moi in &path_map[mpi] {
- assert!(moi.idx() < bits_per_block);
+ assert!(moi.index() < bits_per_block);
in_out.remove(&moi);
});
}
}
fn zero_to_one(bitvec: &mut [usize], move_index: MoveOutIndex) {
- let retval = bitvec.set_bit(move_index.idx());
+ let retval = bitvec.set_bit(move_index.index());
assert!(retval);
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+use rustc_data_structures::indexed_vec::Idx;
+
use rustc::ty::TyCtxt;
use rustc::mir::repr::{self, Mir};
use super::MoveDataParamEnv;
use bitslice::{bitwise, BitwiseOperator};
-use indexed_set::{Idx, IdxSet, IdxSetBuf};
+use indexed_set::{IdxSet, IdxSetBuf};
pub use self::sanity_check::sanity_check_via_rustc_peek;
pub use self::impls::{MaybeInitializedLvals, MaybeUninitializedLvals};
self.flow_state.operator.start_block_effect(&self.ctxt, sets);
}
- for bb in self.mir.all_basic_blocks() {
+ for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
let &repr::BasicBlockData { ref statements,
ref terminator,
- is_cleanup: _ } =
- self.mir.basic_block_data(bb);
+ is_cleanup: _ } = data;
let sets = &mut self.flow_state.sets.for_block(bb.index());
for j_stmt in 0..statements.len() {
fn walk_cfg(&mut self, in_out: &mut IdxSet<BD::Idx>) {
let mir = self.builder.mir;
- for (bb_idx, bb_data) in mir.basic_blocks.iter().enumerate() {
+ for (bb_idx, bb_data) in mir.basic_blocks().iter().enumerate() {
let builder = &mut self.builder;
{
let sets = builder.flow_state.sets.for_block(bb_idx);
// (now rounded up to multiple of word size)
let bits_per_block = words_per_block * usize_bits;
- let num_blocks = mir.basic_blocks.len();
+ let num_blocks = mir.basic_blocks().len();
let num_overall = num_blocks * bits_per_block;
let zeroes = Bits::new(IdxSetBuf::new_empty(num_overall));
{
match bb_data.terminator().kind {
repr::TerminatorKind::Return |
- repr::TerminatorKind::Resume => {}
+ repr::TerminatorKind::Resume |
+ repr::TerminatorKind::Unreachable => {}
repr::TerminatorKind::Goto { ref target } |
repr::TerminatorKind::Assert { ref target, cleanup: None, .. } |
repr::TerminatorKind::Drop { ref target, location: _, unwind: None } |
use rustc::ty::{self, TyCtxt};
use rustc::mir::repr::{self, Mir};
+use rustc_data_structures::indexed_vec::Idx;
use super::super::gather_moves::{MovePathIndex};
use super::super::MoveDataParamEnv;
// `dataflow::build_sets`. (But note it is doing non-standard
// stuff, so such generalization may not be realistic.)
- let blocks = mir.all_basic_blocks();
- 'next_block: for bb in blocks {
+ for bb in mir.basic_blocks().indices() {
each_block(tcx, mir, flow_ctxt, results, bb);
}
}
O: BitDenotation<Ctxt=MoveDataParamEnv<'tcx>, Idx=MovePathIndex>
{
let move_data = &ctxt.move_data;
- let bb_data = mir.basic_block_data(bb);
- let &repr::BasicBlockData { ref statements,
- ref terminator,
- is_cleanup: _ } = bb_data;
+ let repr::BasicBlockData { ref statements,
+ ref terminator,
+ is_cleanup: _ } = mir[bb];
let (args, span) = match is_rustc_peek(tcx, terminator) {
Some(args_and_span) => args_and_span,
use rustc::middle::const_val::ConstVal;
use rustc::middle::lang_items;
use rustc::util::nodemap::FnvHashMap;
-use rustc_mir::pretty;
+use rustc_data_structures::indexed_vec::Idx;
use syntax::codemap::Span;
use std::fmt;
patch: MirPatch::new(mir),
}.elaborate()
};
- pretty::dump_mir(tcx, "elaborate_drops", &0, src, mir, None);
elaborate_patch.apply(mir);
- pretty::dump_mir(tcx, "elaborate_drops", &1, src, mir, None);
}
}
env: &'a MoveDataParamEnv<'tcx>,
flow_inits: DataflowResults<MaybeInitializedLvals<'a, 'tcx>>,
flow_uninits: DataflowResults<MaybeUninitializedLvals<'a, 'tcx>>,
- drop_flags: FnvHashMap<MovePathIndex, u32>,
+ drop_flags: FnvHashMap<MovePathIndex, Temp>,
patch: MirPatch<'tcx>,
}
}
/// Returns whether this lvalue is tracked by drop elaboration. This
- /// includes all lvalues, except these behind references or arrays.
- ///
- /// Lvalues behind references or arrays are not tracked by elaboration
- /// and are always assumed to be initialized when accessible. As
- /// references and indexes can be reseated, trying to track them
- /// can only lead to trouble.
+ /// includes all lvalues, except those (1) behind references or arrays,
+ /// or (2) behind ADTs with a Drop impl.
fn lvalue_is_tracked(&self, lv: &Lvalue<'tcx>) -> bool
{
+ // `lvalue_contents_drop_state_cannot_differ` only compares
+ // the `lv` to its immediate contents, while this recursively
+ // follows the parent chain formed by `base` of each projection.
if let &Lvalue::Projection(ref data) = lv {
- self.lvalue_contents_are_tracked(&data.base)
+ !super::lvalue_contents_drop_state_cannot_differ(self.tcx, self.mir, &data.base) &&
+ self.lvalue_is_tracked(&data.base)
} else {
true
}
}
- fn lvalue_contents_are_tracked(&self, lv: &Lvalue<'tcx>) -> bool {
- let ty = self.mir.lvalue_ty(self.tcx, lv).to_ty(self.tcx);
- match ty.sty {
- ty::TyArray(..) | ty::TySlice(..) | ty::TyRef(..) | ty::TyRawPtr(..) => {
- false
- }
- _ => self.lvalue_is_tracked(lv)
- }
- }
-
fn collect_drop_flags(&mut self)
{
- for bb in self.mir.all_basic_blocks() {
- let data = self.mir.basic_block_data(bb);
+ for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
let terminator = data.terminator();
let location = match terminator.kind {
TerminatorKind::Drop { ref location, .. } |
fn elaborate_drops(&mut self)
{
- for bb in self.mir.all_basic_blocks() {
- let data = self.mir.basic_block_data(bb);
+ for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
let loc = Location { block: bb, index: data.statements.len() };
let terminator = data.terminator();
unwind: Option<BasicBlock>)
{
let bb = loc.block;
- let data = self.mir.basic_block_data(bb);
+ let data = &self.mir[bb];
let terminator = data.terminator();
let assign = Statement {
}
fn drop_flags_for_fn_rets(&mut self) {
- for bb in self.mir.all_basic_blocks() {
- let data = self.mir.basic_block_data(bb);
+ for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
if let TerminatorKind::Call {
destination: Some((ref lv, tgt)), cleanup: Some(_), ..
} = data.terminator().kind {
// drop flags by themselves, to avoid the drop flags being
// clobbered before they are read.
- for bb in self.mir.all_basic_blocks() {
- let data = self.mir.basic_block_data(bb);
+ for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
debug!("drop_flags_for_locs({:?})", data);
for i in 0..(data.statements.len()+1) {
debug!("drop_flag_for_locs: stmt {}", i);
use rustc::ty::{FnOutput, TyCtxt};
use rustc::mir::repr::*;
use rustc::util::nodemap::FnvHashMap;
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use std::cell::{Cell};
use std::collections::hash_map::Entry;
use std::ops::Index;
use super::abs_domain::{AbstractElem, Lift};
-use indexed_set::{Idx};
// This submodule holds some newtype'd Index wrappers that are using
// NonZero to ensure that Option<Index> occupies only a single word.
// (which is likely to yield a subtle off-by-one error).
mod indexes {
use core::nonzero::NonZero;
- use indexed_set::Idx;
+ use rustc_data_structures::indexed_vec::Idx;
macro_rules! new_index {
($Index:ident) => {
fn new(idx: usize) -> Self {
unsafe { $Index(NonZero::new(idx + 1)) }
}
- fn idx(&self) -> usize {
+ fn index(self) -> usize {
*self.0 - 1
}
}
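The `NonZero` trick noted in the comment above — storing `idx + 1` so that `Option<Index>` occupies a single word — can be sketched on stable Rust with `NonZeroUsize` (the diff itself uses the unstable `core::nonzero`; this is an illustrative stand-in):

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct MoveOutIndex(NonZeroUsize);

impl MoveOutIndex {
    fn new(idx: usize) -> Self {
        // Shift by one so the all-zeroes bit pattern stays free;
        // Option<MoveOutIndex> can then use it as the None niche.
        MoveOutIndex(NonZeroUsize::new(idx + 1).unwrap())
    }
    fn index(self) -> usize {
        self.0.get() - 1
    }
}

fn main() {
    let i = MoveOutIndex::new(0);
    assert_eq!(i.index(), 0);
    // The niche optimization: no extra discriminant word is needed.
    assert_eq!(size_of::<Option<MoveOutIndex>>(), size_of::<usize>());
}
```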
impl self::indexes::MoveOutIndex {
pub fn move_path_index(&self, move_data: &MoveData) -> MovePathIndex {
- move_data.moves[self.idx()].path
+ move_data.moves[self.index()].path
}
}
impl Index<MovePathIndex> for PathMap {
type Output = [MoveOutIndex];
fn index(&self, index: MovePathIndex) -> &Self::Output {
- &self.map[index.idx()]
+ &self.map[index.index()]
}
}
impl fmt::Debug for MoveOut {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
- write!(fmt, "p{}@{:?}", self.path.idx(), self.source)
+ write!(fmt, "p{}@{:?}", self.path.index(), self.source)
}
}
impl<'tcx> Index<MovePathIndex> for MovePathData<'tcx> {
type Output = MovePath<'tcx>;
fn index(&self, i: MovePathIndex) -> &MovePath<'tcx> {
- &self.move_paths[i.idx()]
+ &self.move_paths[i.index()]
}
}
-/// MovePathInverseMap maps from a uint in an lvalue-category to the
-/// MovePathIndex for the MovePath for that lvalue.
-type MovePathInverseMap = Vec<Option<MovePathIndex>>;
-
struct MovePathDataBuilder<'a, 'tcx: 'a> {
mir: &'a Mir<'tcx>,
pre_move_paths: Vec<PreMovePath<'tcx>>,
/// Tables mapping from an l-value to its MovePathIndex.
#[derive(Debug)]
pub struct MovePathLookup<'tcx> {
- vars: MovePathInverseMap,
- temps: MovePathInverseMap,
- args: MovePathInverseMap,
+ vars: IndexVec<Var, Option<MovePathIndex>>,
+ temps: IndexVec<Temp, Option<MovePathIndex>>,
+ args: IndexVec<Arg, Option<MovePathIndex>>,
/// The move path representing the return value is constructed
/// lazily when we first encounter it in the input MIR.
struct Lookup<T>(LookupKind, T);
impl Lookup<MovePathIndex> {
- fn idx(&self) -> usize { (self.1).idx() }
+ fn index(&self) -> usize { (self.1).index() }
}
impl<'tcx> MovePathLookup<'tcx> {
- fn new() -> Self {
+ fn new(mir: &Mir) -> Self {
MovePathLookup {
- vars: vec![],
- temps: vec![],
- args: vec![],
+ vars: IndexVec::from_elem(None, &mir.var_decls),
+ temps: IndexVec::from_elem(None, &mir.temp_decls),
+ args: IndexVec::from_elem(None, &mir.arg_decls),
statics: None,
return_ptr: None,
projections: vec![],
fn next_index(next: &mut MovePathIndex) -> MovePathIndex {
let i = *next;
- *next = MovePathIndex::new(i.idx() + 1);
+ *next = MovePathIndex::new(i.index() + 1);
i
}
- fn lookup_or_generate(vec: &mut Vec<Option<MovePathIndex>>,
- idx: u32,
- next_index: &mut MovePathIndex) -> Lookup<MovePathIndex> {
- let idx = idx as usize;
- vec.fill_to_with(idx, None);
+ fn lookup_or_generate<I: Idx>(vec: &mut IndexVec<I, Option<MovePathIndex>>,
+ idx: I,
+ next_index: &mut MovePathIndex)
+ -> Lookup<MovePathIndex> {
let entry = &mut vec[idx];
match *entry {
None => {
}
}
- fn lookup_var(&mut self, var_idx: u32) -> Lookup<MovePathIndex> {
+ fn lookup_var(&mut self, var_idx: Var) -> Lookup<MovePathIndex> {
Self::lookup_or_generate(&mut self.vars,
var_idx,
&mut self.next_index)
}
- fn lookup_temp(&mut self, temp_idx: u32) -> Lookup<MovePathIndex> {
+ fn lookup_temp(&mut self, temp_idx: Temp) -> Lookup<MovePathIndex> {
Self::lookup_or_generate(&mut self.temps,
temp_idx,
&mut self.next_index)
}
- fn lookup_arg(&mut self, arg_idx: u32) -> Lookup<MovePathIndex> {
+ fn lookup_arg(&mut self, arg_idx: Arg) -> Lookup<MovePathIndex> {
Self::lookup_or_generate(&mut self.args,
arg_idx,
&mut self.next_index)
base: MovePathIndex) -> Lookup<MovePathIndex> {
let MovePathLookup { ref mut projections,
ref mut next_index, .. } = *self;
- projections.fill_to(base.idx());
- match projections[base.idx()].entry(proj.elem.lift()) {
+ projections.fill_to(base.index());
+ match projections[base.index()].entry(proj.elem.lift()) {
Entry::Occupied(ent) => {
Lookup(LookupKind::Reuse, *ent.get())
}
// unknown l-value; it will simply panic.
pub fn find(&self, lval: &Lvalue<'tcx>) -> MovePathIndex {
match *lval {
- Lvalue::Var(var_idx) => self.vars[var_idx as usize].unwrap(),
- Lvalue::Temp(temp_idx) => self.temps[temp_idx as usize].unwrap(),
- Lvalue::Arg(arg_idx) => self.args[arg_idx as usize].unwrap(),
+ Lvalue::Var(var) => self.vars[var].unwrap(),
+ Lvalue::Temp(temp) => self.temps[temp].unwrap(),
+ Lvalue::Arg(arg) => self.args[arg].unwrap(),
Lvalue::Static(ref _def_id) => self.statics.unwrap(),
Lvalue::ReturnPointer => self.return_ptr.unwrap(),
Lvalue::Projection(ref proj) => {
let base_index = self.find(&proj.base);
- self.projections[base_index.idx()][&proj.elem.lift()]
+ self.projections[base_index.index()][&proj.elem.lift()]
}
}
}
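Switching the lookup tables to `IndexVec` means each table can only be indexed by its own typed index (`Var`, `Temp`, `Arg`), turning a whole class of off-by-category bugs into compile errors. A minimal sketch of such a typed-index vector, assuming an `Idx`-like trait — the names are illustrative, not the `rustc_data_structures::indexed_vec` API:

```rust
use std::marker::PhantomData;
use std::ops::{Index, IndexMut};

trait Idx: Copy {
    fn new(i: usize) -> Self;
    fn index(self) -> usize;
}

struct IndexVec<I: Idx, T> {
    raw: Vec<T>,
    _marker: PhantomData<I>,
}

impl<I: Idx, T: Clone> IndexVec<I, T> {
    fn from_elem_n(elem: T, n: usize) -> Self {
        IndexVec { raw: vec![elem; n], _marker: PhantomData }
    }
}

impl<I: Idx, T> Index<I> for IndexVec<I, T> {
    type Output = T;
    fn index(&self, i: I) -> &T { &self.raw[i.index()] }
}

impl<I: Idx, T> IndexMut<I> for IndexVec<I, T> {
    fn index_mut(&mut self, i: I) -> &mut T { &mut self.raw[i.index()] }
}

#[derive(Copy, Clone)]
struct Temp(usize);
impl Idx for Temp {
    fn new(i: usize) -> Self { Temp(i) }
    fn index(self) -> usize { self.0 }
}

fn main() {
    // A table keyed by Temp; indexing with any other index type will not compile.
    let mut temps: IndexVec<Temp, Option<u32>> = IndexVec::from_elem_n(None, 3);
    temps[Temp::new(1)] = Some(42);
    assert_eq!(temps[Temp::new(1)], Some(42));
    assert_eq!(temps[Temp::new(0)], None);
}
```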
// `lookup` is either the previously assigned index or a
// newly-allocated one.
- debug_assert!(lookup.idx() <= self.pre_move_paths.len());
+ debug_assert!(lookup.index() <= self.pre_move_paths.len());
if let Lookup(LookupKind::Generate, mpi) = lookup {
let parent;
let idx = self.move_path_for(&proj.base);
parent = Some(idx);
- let parent_move_path = &mut self.pre_move_paths[idx.idx()];
+ let parent_move_path = &mut self.pre_move_paths[idx.index()];
// At last: Swap in the new first_child.
sibling = parent_move_path.first_child.get();
fn gather_moves<'a, 'tcx>(mir: &Mir<'tcx>, tcx: TyCtxt<'a, 'tcx, 'tcx>) -> MoveData<'tcx> {
use self::StmtKind as SK;
- let bbs = mir.all_basic_blocks();
- let mut moves = Vec::with_capacity(bbs.len());
- let mut loc_map: Vec<_> = iter::repeat(Vec::new()).take(bbs.len()).collect();
+ let bb_count = mir.basic_blocks().len();
+ let mut moves = vec![];
+ let mut loc_map: Vec<_> = iter::repeat(Vec::new()).take(bb_count).collect();
let mut path_map = Vec::new();
// this is mutable only because we will move it to and fro' the
let mut builder = MovePathDataBuilder {
mir: mir,
pre_move_paths: Vec::new(),
- rev_lookup: MovePathLookup::new(),
+ rev_lookup: MovePathLookup::new(mir),
};
// Before we analyze the program text, we create the MovePath's
assert!(mir.var_decls.len() <= ::std::u32::MAX as usize);
assert!(mir.arg_decls.len() <= ::std::u32::MAX as usize);
assert!(mir.temp_decls.len() <= ::std::u32::MAX as usize);
- for var_idx in 0..mir.var_decls.len() {
- let path_idx = builder.move_path_for(&Lvalue::Var(var_idx as u32));
- path_map.fill_to(path_idx.idx());
+ for var in mir.var_decls.indices() {
+ let path_idx = builder.move_path_for(&Lvalue::Var(var));
+ path_map.fill_to(path_idx.index());
}
- for arg_idx in 0..mir.arg_decls.len() {
- let path_idx = builder.move_path_for(&Lvalue::Arg(arg_idx as u32));
- path_map.fill_to(path_idx.idx());
+ for arg in mir.arg_decls.indices() {
+ let path_idx = builder.move_path_for(&Lvalue::Arg(arg));
+ path_map.fill_to(path_idx.index());
}
- for temp_idx in 0..mir.temp_decls.len() {
- let path_idx = builder.move_path_for(&Lvalue::Temp(temp_idx as u32));
- path_map.fill_to(path_idx.idx());
+ for temp in mir.temp_decls.indices() {
+ let path_idx = builder.move_path_for(&Lvalue::Temp(temp));
+ path_map.fill_to(path_idx.index());
}
- for bb in bbs {
+ for (bb, bb_data) in mir.basic_blocks().iter_enumerated() {
let loc_map_bb = &mut loc_map[bb.index()];
- let bb_data = mir.basic_block_data(bb);
debug_assert!(loc_map_bb.len() == 0);
let len = bb_data.statements.len();
// Ensure that the path_map contains entries even
// if the lvalue is assigned and never read.
let assigned_path = bb_ctxt.builder.move_path_for(lval);
- bb_ctxt.path_map.fill_to(assigned_path.idx());
+ bb_ctxt.path_map.fill_to(assigned_path.index());
match *rval {
Rvalue::Use(ref operand) => {
Rvalue::Ref(..) |
Rvalue::Len(..) |
Rvalue::InlineAsm { .. } => {}
-
- Rvalue::Slice {..} => {
- // A slice pattern `x..` binds `x` to a
- // reference; thus no move occurs.
- //
- // FIXME: I recall arielb1 questioning
- // whether this is even a legal thing to
- // have as an R-value. The particular
- // example where I am seeing this arise is
- // `TargetDataLayout::parse(&Session)` in
- // `rustc::ty::layout`.
- //
- // this should be removed soon.
- debug!("encountered Rvalue::Slice as RHS of Assign, source: {:?}",
- source);
- }
}
}
}
debug!("gather_moves({:?})", bb_data.terminator());
match bb_data.terminator().kind {
- TerminatorKind::Goto { target: _ } | TerminatorKind::Resume => { }
+ TerminatorKind::Goto { target: _ } |
+ TerminatorKind::Resume |
+ TerminatorKind::Unreachable => { }
TerminatorKind::Return => {
let source = Location { block: bb,
}
TerminatorKind::DropAndReplace { ref location, ref value, .. } => {
let assigned_path = bb_ctxt.builder.move_path_for(location);
- bb_ctxt.path_map.fill_to(assigned_path.idx());
+ bb_ctxt.path_map.fill_to(assigned_path.index());
let source = Location { block: bb,
index: bb_data.statements.len() };
// Ensure that the path_map contains entries even
// if the lvalue is assigned and never read.
let assigned_path = bb_ctxt.builder.move_path_for(destination);
- bb_ctxt.path_map.fill_to(assigned_path.idx());
+ bb_ctxt.path_map.fill_to(assigned_path.index());
bb_ctxt.builder.create_move_path(destination);
}
let mut seen: Vec<_> = move_paths.iter().map(|_| false).collect();
for (j, &MoveOut { ref path, ref source }) in moves.iter().enumerate() {
debug!("MovePathData moves[{}]: MoveOut {{ path: {:?} = {:?}, source: {:?} }}",
- j, path, move_paths[path.idx()], source);
- seen[path.idx()] = true;
+ j, path, move_paths[path.index()], source);
+ seen[path.index()] = true;
}
for (j, path) in move_paths.iter().enumerate() {
if !seen[j] {
let path = self.builder.move_path_for(lval);
self.moves.push(MoveOut { path: path, source: source.clone() });
- self.path_map.fill_to(path.idx());
+ self.path_map.fill_to(path.index());
debug!("ctxt: {:?} add consume of lval: {:?} \
at index: {:?} \
to loc_map for loc: {:?}",
stmt_kind, lval, index, path, source);
- debug_assert!(path.idx() < self.path_map.len());
+ debug_assert!(path.index() < self.path_map.len());
// this is actually a questionable assert; at the very
// least, incorrect input code can probably cause it to
// fire.
- assert!(self.path_map[path.idx()].iter().find(|idx| **idx == index).is_none());
- self.path_map[path.idx()].push(index);
+ assert!(self.path_map[path.index()].iter().find(|idx| **idx == index).is_none());
+ self.path_map[path.index()].push(index);
debug_assert!(i < self.loc_map_bb.len());
debug_assert!(self.loc_map_bb[i].iter().find(|idx| **idx == index).is_none());
flow_uninits: flow_uninits,
};
- for bb in mir.all_basic_blocks() {
+ for bb in mir.basic_blocks().indices() {
mbcx.process_basic_block(bb);
}
impl<'b, 'a: 'b, 'tcx: 'a> MirBorrowckCtxt<'b, 'a, 'tcx> {
fn process_basic_block(&mut self, bb: BasicBlock) {
- let &BasicBlockData { ref statements, ref terminator, is_cleanup: _ } =
- self.mir.basic_block_data(bb);
+ let BasicBlockData { ref statements, ref terminator, is_cleanup: _ } =
+ self.mir[bb];
for stmt in statements {
self.process_statement(bb, stmt);
}
None
}
+/// When enumerating the child fragments of a path, don't recurse into
+/// paths (1) past arrays, slices, and pointers, nor (2) into a type
+/// that implements `Drop`.
+///
+/// Lvalues behind references or arrays are not tracked by elaboration
+/// and are always assumed to be initialized when accessible. As
+/// references and indexes can be reseated, trying to track them can
+/// only lead to trouble.
+///
+/// Lvalues behind ADT's with a Drop impl are not tracked by
+/// elaboration since they can never have a drop-flag state that
+/// differs from that of the parent with the Drop impl.
+///
+/// In both cases, the contents can be accessed only if their parents
+/// are initialized. This implies, for example, that there is no need to
+/// maintain separate drop flags to track such state.
+///
+/// FIXME: we have to do something for moving slice patterns.
+fn lvalue_contents_drop_state_cannot_differ<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ mir: &Mir<'tcx>,
+ lv: &repr::Lvalue<'tcx>) -> bool {
+ let ty = mir.lvalue_ty(tcx, lv).to_ty(tcx);
+ match ty.sty {
+ ty::TyArray(..) | ty::TySlice(..) | ty::TyRef(..) | ty::TyRawPtr(..) => {
+ debug!("lvalue_contents_drop_state_cannot_differ lv: {:?} ty: {:?} refd => true",
+ lv, ty);
+ true
+ }
+ ty::TyStruct(def, _) | ty::TyEnum(def, _) if def.has_dtor() => {
+ debug!("lvalue_contents_drop_state_cannot_differ lv: {:?} ty: {:?} Drop => true",
+ lv, ty);
+ true
+ }
+ _ => {
+ false
+ }
+ }
+}
+
fn on_all_children_bits<'a, 'tcx, F>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
mir: &Mir<'tcx>,
{
match move_data.move_paths[path].content {
MovePathContent::Lvalue(ref lvalue) => {
- match mir.lvalue_ty(tcx, lvalue).to_ty(tcx).sty {
- // don't trace paths past arrays, slices, and
- // pointers. They can only be accessed while
- // their parents are initialized.
- //
- // FIXME: we have to do something for moving
- // slice patterns.
- ty::TyArray(..) | ty::TySlice(..) |
- ty::TyRef(..) | ty::TyRawPtr(..) => true,
- _ => false
- }
+ lvalue_contents_drop_state_cannot_differ(tcx, mir, lvalue)
}
_ => true
}
where F: FnMut(MovePathIndex, DropFlagState)
{
let move_data = &ctxt.move_data;
- for i in 0..(mir.arg_decls.len() as u32) {
- let lvalue = repr::Lvalue::Arg(i);
+ for (arg, _) in mir.arg_decls.iter_enumerated() {
+ let lvalue = repr::Lvalue::Arg(arg);
let move_path_index = move_data.rev_lookup.find(&lvalue);
on_all_children_bits(tcx, mir, move_data,
move_path_index,
|moi| callback(moi, DropFlagState::Absent))
}
- let bb = mir.basic_block_data(loc.block);
- match bb.statements.get(loc.index) {
+ let block = &mir[loc.block];
+ match block.statements.get(loc.index) {
Some(stmt) => match stmt.kind {
repr::StatementKind::Assign(ref lvalue, _) => {
debug!("drop_flag_effects: assignment {:?}", stmt);
}
},
None => {
- debug!("drop_flag_effects: replace {:?}", bb.terminator());
- match bb.terminator().kind {
+ debug!("drop_flag_effects: replace {:?}", block.terminator());
+ match block.terminator().kind {
repr::TerminatorKind::DropAndReplace { ref location, .. } => {
on_all_children_bits(tcx, mir, move_data,
move_data.rev_lookup.find(location),
use super::gather_moves::Location;
use rustc::ty::Ty;
use rustc::mir::repr::*;
-
-use std::iter;
-use std::u32;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
/// This struct represents a patch to MIR, which can add
/// new statements and basic blocks and patch over block
/// terminators.
pub struct MirPatch<'tcx> {
- patch_map: Vec<Option<TerminatorKind<'tcx>>>,
+ patch_map: IndexVec<BasicBlock, Option<TerminatorKind<'tcx>>>,
new_blocks: Vec<BasicBlockData<'tcx>>,
new_statements: Vec<(Location, StatementKind<'tcx>)>,
new_temps: Vec<TempDecl<'tcx>>,
resume_block: BasicBlock,
- next_temp: u32,
+ next_temp: usize,
}
impl<'tcx> MirPatch<'tcx> {
pub fn new(mir: &Mir<'tcx>) -> Self {
let mut result = MirPatch {
- patch_map: iter::repeat(None)
- .take(mir.basic_blocks.len()).collect(),
+ patch_map: IndexVec::from_elem(None, mir.basic_blocks()),
new_blocks: vec![],
new_temps: vec![],
new_statements: vec![],
- next_temp: mir.temp_decls.len() as u32,
+ next_temp: mir.temp_decls.len(),
resume_block: START_BLOCK
};
let mut resume_block = None;
let mut resume_stmt_block = None;
- for block in mir.all_basic_blocks() {
- let data = mir.basic_block_data(block);
- if let TerminatorKind::Resume = data.terminator().kind {
- if data.statements.len() > 0 {
- resume_stmt_block = Some(block);
+ for (bb, block) in mir.basic_blocks().iter_enumerated() {
+ if let TerminatorKind::Resume = block.terminator().kind {
+ if block.statements.len() > 0 {
+ resume_stmt_block = Some(bb);
} else {
- resume_block = Some(block);
+ resume_block = Some(bb);
}
break
}
}
pub fn is_patched(&self, bb: BasicBlock) -> bool {
- self.patch_map[bb.index()].is_some()
+ self.patch_map[bb].is_some()
}
pub fn terminator_loc(&self, mir: &Mir<'tcx>, bb: BasicBlock) -> Location {
- let offset = match bb.index().checked_sub(mir.basic_blocks.len()) {
+ let offset = match bb.index().checked_sub(mir.basic_blocks().len()) {
Some(index) => self.new_blocks[index].statements.len(),
- None => mir.basic_block_data(bb).statements.len()
+ None => mir[bb].statements.len()
};
Location {
block: bb,
}
}
- pub fn new_temp(&mut self, ty: Ty<'tcx>) -> u32 {
+ pub fn new_temp(&mut self, ty: Ty<'tcx>) -> Temp {
let index = self.next_temp;
- assert!(self.next_temp < u32::MAX);
self.next_temp += 1;
self.new_temps.push(TempDecl { ty: ty });
- index
+ Temp::new(index)
}
pub fn new_block(&mut self, data: BasicBlockData<'tcx>) -> BasicBlock {
}
pub fn patch_terminator(&mut self, block: BasicBlock, new: TerminatorKind<'tcx>) {
- assert!(self.patch_map[block.index()].is_none());
+ assert!(self.patch_map[block].is_none());
debug!("MirPatch: patch_terminator({:?}, {:?})", block, new);
- self.patch_map[block.index()] = Some(new);
+ self.patch_map[block] = Some(new);
}
pub fn add_statement(&mut self, loc: Location, stmt: StatementKind<'tcx>) {
debug!("MirPatch: {:?} new temps, starting from index {}: {:?}",
self.new_temps.len(), mir.temp_decls.len(), self.new_temps);
debug!("MirPatch: {} new blocks, starting from index {}",
- self.new_blocks.len(), mir.basic_blocks.len());
- mir.basic_blocks.extend(self.new_blocks);
+ self.new_blocks.len(), mir.basic_blocks().len());
+ mir.basic_blocks_mut().extend(self.new_blocks);
mir.temp_decls.extend(self.new_temps);
- for (src, patch) in self.patch_map.into_iter().enumerate() {
+ for (src, patch) in self.patch_map.into_iter_enumerated() {
if let Some(patch) = patch {
debug!("MirPatch: patching block {:?}", src);
- mir.basic_blocks[src].terminator_mut().kind = patch;
+ mir[src].terminator_mut().kind = patch;
}
}
stmt, loc, delta);
loc.index += delta;
let source_info = Self::source_info_for_index(
- mir.basic_block_data(loc.block), loc
+ &mir[loc.block], loc
);
- mir.basic_block_data_mut(loc.block).statements.insert(
+ mir[loc.block].statements.insert(
loc.index, Statement {
source_info: source_info,
kind: stmt
}
pub fn source_info_for_location(&self, mir: &Mir, loc: Location) -> SourceInfo {
- let data = match loc.block.index().checked_sub(mir.basic_blocks.len()) {
+ let data = match loc.block.index().checked_sub(mir.basic_blocks().len()) {
Some(new) => &self.new_blocks[new],
- None => mir.basic_block_data(loc.block)
+ None => &mir[loc.block]
};
Self::source_info_for_index(data, loc)
}
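The `MirPatch` changes above switch it over to typed block indices, but the underlying pattern is unchanged: edits are recorded while the MIR is being read, then applied in one separate pass. A minimal, self-contained sketch of that deferred-mutation pattern (all names here are illustrative, not the compiler's API):

```rust
// Record terminator replacements and new blocks while a CFG is
// borrowed, then apply everything at once. Mirrors the shape of
// MirPatch, with strings standing in for real terminators.
struct Block {
    term: &'static str,
}

struct Patch {
    // One optional replacement terminator per block (old or new).
    patch_map: Vec<Option<&'static str>>,
    new_blocks: Vec<Block>,
}

impl Patch {
    fn new(n_blocks: usize) -> Self {
        Patch { patch_map: vec![None; n_blocks], new_blocks: Vec::new() }
    }

    fn patch_terminator(&mut self, block: usize, new: &'static str) {
        // Each block may be patched at most once, as in MirPatch.
        assert!(self.patch_map[block].is_none());
        self.patch_map[block] = Some(new);
    }

    fn new_block(&mut self, data: Block) -> usize {
        self.new_blocks.push(data);
        self.patch_map.push(None);
        self.patch_map.len() - 1
    }

    fn apply(self, blocks: &mut Vec<Block>) {
        // New blocks are appended first, so patch_map indices past the
        // original length line up with the freshly added blocks.
        blocks.extend(self.new_blocks);
        for (src, patch) in self.patch_map.into_iter().enumerate() {
            if let Some(term) = patch {
                blocks[src].term = term;
            }
        }
    }
}

fn main() {
    let mut blocks = vec![Block { term: "return" }];
    let mut patch = Patch::new(blocks.len());
    let tgt = patch.new_block(Block { term: "return" });
    patch.patch_terminator(0, "goto");
    patch.apply(&mut blocks);
    assert_eq!(blocks.len(), 2);
    assert_eq!(blocks[0].term, "goto");
    assert_eq!(blocks[tgt].term, "return");
    println!("ok");
}
```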
use bitslice::{BitSlice, Word};
use bitslice::{bitwise, Union, Subtract};
-/// Represents some newtyped `usize` wrapper.
-///
-/// (purpose: avoid mixing indexes for different bitvector domains.)
-pub trait Idx: 'static {
- fn new(usize) -> Self;
- fn idx(&self) -> usize;
-}
+use rustc_data_structures::indexed_vec::Idx;
/// Represents a set (or packed family of sets), of some element type
/// E, where each E is identified by some unique index type `T`.
/// Removes `elem` from the set `self`; returns true iff this changed `self`.
pub fn remove(&mut self, elem: &T) -> bool {
- self.bits.clear_bit(elem.idx())
+ self.bits.clear_bit(elem.index())
}
/// Adds `elem` to the set `self`; returns true iff this changed `self`.
pub fn add(&mut self, elem: &T) -> bool {
- self.bits.set_bit(elem.idx())
+ self.bits.set_bit(elem.index())
}
pub fn range(&self, elems: &Range<T>) -> &Self {
- let elems = elems.start.idx()..elems.end.idx();
+ let elems = elems.start.index()..elems.end.index();
unsafe { Self::from_slice(&self.bits[elems]) }
}
pub fn range_mut(&mut self, elems: &Range<T>) -> &mut Self {
- let elems = elems.start.idx()..elems.end.idx();
+ let elems = elems.start.index()..elems.end.index();
unsafe { Self::from_slice_mut(&mut self.bits[elems]) }
}
/// Returns true iff set `self` contains `elem`.
pub fn contains(&self, elem: &T) -> bool {
- self.bits.get_bit(elem.idx())
+ self.bits.get_bit(elem.index())
}
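The `idx()`-to-`index()` renames above migrate the bitvector sets onto the shared `Idx` trait. As a rough illustration of why keying a set by a newtyped index pays off, here is a minimal sketch of such a bitset (the toy `Var` newtype and the 64-bit word layout are assumptions of the sketch, not the real `bitslice` internals):

```rust
use std::marker::PhantomData;

// Newtyped usize wrapper, shaped like the shared Idx trait.
trait Idx: Copy {
    fn new(i: usize) -> Self;
    fn index(self) -> usize;
}

#[derive(Copy, Clone, Debug, PartialEq)]
struct Var(usize);

impl Idx for Var {
    fn new(i: usize) -> Self { Var(i) }
    fn index(self) -> usize { self.0 }
}

// A bitset that only accepts one index type, so indices from
// different dataflow domains cannot be mixed up at compile time.
struct IdxSet<T: Idx> {
    bits: Vec<u64>,
    _marker: PhantomData<T>,
}

impl<T: Idx> IdxSet<T> {
    fn new(universe: usize) -> Self {
        IdxSet { bits: vec![0; (universe + 63) / 64], _marker: PhantomData }
    }

    /// Adds `elem`; returns true iff this changed the set.
    fn add(&mut self, elem: T) -> bool {
        let (w, b) = (elem.index() / 64, elem.index() % 64);
        let old = self.bits[w];
        self.bits[w] |= 1u64 << b;
        self.bits[w] != old
    }

    /// Removes `elem`; returns true iff this changed the set.
    fn remove(&mut self, elem: T) -> bool {
        let (w, b) = (elem.index() / 64, elem.index() % 64);
        let old = self.bits[w];
        self.bits[w] &= !(1u64 << b);
        self.bits[w] != old
    }

    fn contains(&self, elem: T) -> bool {
        self.bits[elem.index() / 64] & (1u64 << (elem.index() % 64)) != 0
    }
}

fn main() {
    let mut set = IdxSet::<Var>::new(128);
    assert!(set.add(Var::new(3)));
    assert!(!set.add(Var::new(3))); // already present: unchanged
    assert!(set.contains(Var::new(3)));
    assert!(set.remove(Var::new(3)));
    assert!(!set.contains(Var::new(3)));
    println!("ok");
}
```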
pub fn words(&self) -> &[Word] {
extern crate graphviz as dot;
#[macro_use]
extern crate rustc;
+extern crate rustc_data_structures;
extern crate rustc_mir;
extern crate core; // for NonZero
use rustc::hir::print::pat_to_string;
use syntax::ptr::P;
use rustc::util::nodemap::FnvHashMap;
+use rustc::util::common::slice_pat;
pub const DUMMY_WILD_PAT: &'static Pat = &Pat {
id: DUMMY_NODE_ID,
span: DUMMY_SP
};
-struct Matrix<'a>(Vec<Vec<&'a Pat>>);
+struct Matrix<'a, 'tcx>(Vec<Vec<(&'a Pat, Option<Ty<'tcx>>)>>);
/// Pretty-printer for matrices of patterns, example:
/// ++++++++++++++++++++++++++
/// + _     + [_, _, ..tail] +
/// ++++++++++++++++++++++++++
-impl<'a> fmt::Debug for Matrix<'a> {
+impl<'a, 'tcx> fmt::Debug for Matrix<'a, 'tcx> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "\n")?;
let &Matrix(ref m) = self;
let pretty_printed_matrix: Vec<Vec<String>> = m.iter().map(|row| {
row.iter()
- .map(|&pat| pat_to_string(&pat))
+ .map(|&(pat,ty)| format!("{}: {:?}", pat_to_string(&pat), ty))
.collect::<Vec<String>>()
}).collect();
}
}
-impl<'a> FromIterator<Vec<&'a Pat>> for Matrix<'a> {
- fn from_iter<T: IntoIterator<Item=Vec<&'a Pat>>>(iter: T) -> Matrix<'a> {
+impl<'a, 'tcx> FromIterator<Vec<(&'a Pat, Option<Ty<'tcx>>)>> for Matrix<'a, 'tcx> {
+ fn from_iter<T: IntoIterator<Item=Vec<(&'a Pat, Option<Ty<'tcx>>)>>>(iter: T)
+ -> Self
+ {
Matrix(iter.into_iter().collect())
}
}
pub param_env: ParameterEnvironment<'tcx>,
}
-#[derive(Clone, PartialEq)]
+#[derive(Clone, Debug, PartialEq)]
pub enum Constructor {
/// The constructor of all patterns that don't vary by constructor,
/// e.g. struct patterns and fixed-length arrays.
.iter()
.filter(|&&(_, guard)| guard.is_none())
.flat_map(|arm| &arm.0)
- .map(|pat| vec![&**pat])
+ .map(|pat| vec![wrap_pat(cx, &pat)])
.collect();
check_exhaustive(cx, ex.span, &matrix, source);
},
if let PatKind::Binding(hir::BindByValue(hir::MutImmutable), name, None) = p.node {
let pat_ty = cx.tcx.pat_ty(p);
if let ty::TyEnum(edef, _) = pat_ty.sty {
- let def = cx.tcx.def_map.borrow().get(&p.id).map(|d| d.full_def());
- if let Some(Def::Local(..)) = def {
+ if let Def::Local(..) = cx.tcx.expect_def(p.id) {
if edef.variants.iter().any(|variant|
variant.name == name.node.unhygienize()
&& variant.kind() == VariantKind::Unit
let mut printed_if_let_err = false;
for &(ref pats, guard) in arms {
for pat in pats {
- let v = vec![&**pat];
+ let v = vec![wrap_pat(cx, &pat)];
match is_useful(cx, &seen, &v[..], LeaveOutWitness) {
NotUseful => {
"unreachable pattern");
// if we had a catchall pattern, hint at that
for row in &seen.0 {
- if pat_is_catchall(&cx.tcx.def_map.borrow(), row[0]) {
- span_note!(err, row[0].span, "this pattern matches any value");
+ if pat_is_catchall(&cx.tcx.def_map.borrow(), row[0].0) {
+ span_note!(err, row[0].0.span,
+ "this pattern matches any value");
}
}
err.emit();
}
}
-fn check_exhaustive(cx: &MatchCheckCtxt, sp: Span, matrix: &Matrix, source: hir::MatchSource) {
- match is_useful(cx, matrix, &[DUMMY_WILD_PAT], ConstructWitness) {
+fn check_exhaustive<'a, 'tcx>(cx: &MatchCheckCtxt<'a, 'tcx>,
+ sp: Span,
+ matrix: &Matrix<'a, 'tcx>,
+ source: hir::MatchSource) {
+ match is_useful(cx, matrix, &[(DUMMY_WILD_PAT, None)], ConstructWitness) {
UsefulWithWitness(pats) => {
let witnesses = if pats.is_empty() {
vec![DUMMY_WILD_PAT]
} else {
- pats.iter().map(|w| &**w ).collect()
+ pats.iter().map(|w| &**w).collect()
};
match source {
hir::MatchSource::ForLoopDesugar => {
// `witnesses[0]` has the form `Some(<head>)`, peel off the `Some`
let witness = match witnesses[0].node {
- PatKind::TupleStruct(_, ref pats, _) => match &pats[..] {
- [ref pat] => &**pat,
+ PatKind::TupleStruct(_, ref pats, _) => match slice_pat(&&pats[..]) {
+ &[ref pat] => &**pat,
_ => bug!(),
},
_ => bug!(),
fn fold_pat(&mut self, pat: P<Pat>) -> P<Pat> {
return match pat.node {
PatKind::Path(..) | PatKind::QPath(..) => {
- let def = self.tcx.def_map.borrow().get(&pat.id).map(|d| d.full_def());
- match def {
- Some(Def::AssociatedConst(did)) | Some(Def::Const(did)) => {
+ match self.tcx.expect_def(pat.id) {
+ Def::AssociatedConst(did) | Def::Const(did) => {
let substs = Some(self.tcx.node_id_item_substs(pat.id).substs);
if let Some((const_expr, _)) = lookup_const_by_id(self.tcx, did, substs) {
match const_expr_to_pat(self.tcx, const_expr, pat.id, pat.span) {
}
}
- ty::TyRef(_, ty::TypeAndMut { ty, mutbl }) => {
- match ty.sty {
- ty::TyArray(_, n) => match ctor {
- &Single => {
- assert_eq!(pats_len, n);
- PatKind::Vec(pats.collect(), None, hir::HirVec::new())
- },
- _ => bug!()
- },
- ty::TySlice(_) => match ctor {
- &Slice(n) => {
- assert_eq!(pats_len, n);
- PatKind::Vec(pats.collect(), None, hir::HirVec::new())
- },
- _ => bug!()
- },
- ty::TyStr => PatKind::Wild,
-
- _ => {
- assert_eq!(pats_len, 1);
- PatKind::Ref(pats.nth(0).unwrap(), mutbl)
- }
- }
+ ty::TyRef(_, ty::TypeAndMut { mutbl, .. }) => {
+ assert_eq!(pats_len, 1);
+ PatKind::Ref(pats.nth(0).unwrap(), mutbl)
}
+ ty::TySlice(_) => match ctor {
+ &Slice(n) => {
+ assert_eq!(pats_len, n);
+ PatKind::Vec(pats.collect(), None, hir::HirVec::new())
+ },
+ _ => unreachable!()
+ },
+
ty::TyArray(_, len) => {
assert_eq!(pats_len, len);
PatKind::Vec(pats.collect(), None, hir::HirVec::new())
fn missing_constructors(cx: &MatchCheckCtxt, &Matrix(ref rows): &Matrix,
left_ty: Ty, max_slice_length: usize) -> Vec<Constructor> {
let used_constructors: Vec<Constructor> = rows.iter()
- .flat_map(|row| pat_constructors(cx, row[0], left_ty, max_slice_length))
+ .flat_map(|row| pat_constructors(cx, row[0].0, left_ty, max_slice_length))
.collect();
all_constructors(cx, left_ty, max_slice_length)
.into_iter()
match left_ty.sty {
ty::TyBool =>
[true, false].iter().map(|b| ConstantValue(ConstVal::Bool(*b))).collect(),
-
- ty::TyRef(_, ty::TypeAndMut { ty, .. }) => match ty.sty {
- ty::TySlice(_) =>
- (0..max_slice_length+1).map(|length| Slice(length)).collect(),
- _ => vec![Single]
- },
-
+ ty::TySlice(_) =>
+ (0..max_slice_length+1).map(|length| Slice(length)).collect(),
ty::TyEnum(def, _) => def.variants.iter().map(|v| Variant(v.did)).collect(),
_ => vec![Single]
}
// Note: is_useful doesn't work on empty types, as the paper notes.
// So it assumes that v is non-empty.
-fn is_useful(cx: &MatchCheckCtxt,
- matrix: &Matrix,
- v: &[&Pat],
- witness: WitnessPreference)
- -> Usefulness {
+fn is_useful<'a, 'tcx>(cx: &MatchCheckCtxt<'a, 'tcx>,
+ matrix: &Matrix<'a, 'tcx>,
+ v: &[(&Pat, Option<Ty<'tcx>>)],
+ witness: WitnessPreference)
+ -> Usefulness {
let &Matrix(ref rows) = matrix;
- debug!("{:?}", matrix);
+ debug!("is_useful({:?}, {:?})", matrix, v);
if rows.is_empty() {
return match witness {
ConstructWitness => UsefulWithWitness(vec!()),
return NotUseful;
}
assert!(rows.iter().all(|r| r.len() == v.len()));
- let real_pat = match rows.iter().find(|r| (*r)[0].id != DUMMY_NODE_ID) {
- Some(r) => raw_pat(r[0]),
- None if v.is_empty() => return NotUseful,
- None => v[0]
- };
- let left_ty = if real_pat.id == DUMMY_NODE_ID {
- cx.tcx.mk_nil()
- } else {
- let left_ty = cx.tcx.pat_ty(&real_pat);
-
- match real_pat.node {
- PatKind::Binding(hir::BindByRef(..), _, _) => {
- left_ty.builtin_deref(false, NoPreference).unwrap().ty
- }
- _ => left_ty,
+ let left_ty = match rows.iter().filter_map(|r| r[0].1).next().or_else(|| v[0].1) {
+ Some(ty) => ty,
+ None => {
+ // all patterns are wildcards - we can pick any type we want
+ cx.tcx.types.bool
}
};
- let max_slice_length = rows.iter().filter_map(|row| match row[0].node {
+ let max_slice_length = rows.iter().filter_map(|row| match row[0].0.node {
PatKind::Vec(ref before, _, ref after) => Some(before.len() + after.len()),
_ => None
}).max().map_or(0, |v| v + 1);
- let constructors = pat_constructors(cx, v[0], left_ty, max_slice_length);
+ let constructors = pat_constructors(cx, v[0].0, left_ty, max_slice_length);
+ debug!("is_useful - pat_constructors = {:?} left_ty = {:?}", constructors,
+ left_ty);
if constructors.is_empty() {
let constructors = missing_constructors(cx, matrix, left_ty, max_slice_length);
+ debug!("is_useful - missing_constructors = {:?}", constructors);
if constructors.is_empty() {
all_constructors(cx, left_ty, max_slice_length).into_iter().map(|c| {
match is_useful_specialized(cx, matrix, v, c.clone(), left_ty, witness) {
}).find(|result| result != &NotUseful).unwrap_or(NotUseful)
} else {
let matrix = rows.iter().filter_map(|r| {
- match raw_pat(r[0]).node {
+ match raw_pat(r[0].0).node {
PatKind::Binding(..) | PatKind::Wild => Some(r[1..].to_vec()),
_ => None,
}
}
}
-fn is_useful_specialized(cx: &MatchCheckCtxt, &Matrix(ref m): &Matrix,
- v: &[&Pat], ctor: Constructor, lty: Ty,
- witness: WitnessPreference) -> Usefulness {
+fn is_useful_specialized<'a, 'tcx>(
+ cx: &MatchCheckCtxt<'a, 'tcx>,
+ &Matrix(ref m): &Matrix<'a, 'tcx>,
+ v: &[(&Pat, Option<Ty<'tcx>>)],
+ ctor: Constructor,
+ lty: Ty<'tcx>,
+ witness: WitnessPreference) -> Usefulness
+{
let arity = constructor_arity(cx, &ctor, lty);
let matrix = Matrix(m.iter().filter_map(|r| {
specialize(cx, &r[..], &ctor, 0, arity)
let pat = raw_pat(p);
match pat.node {
PatKind::Struct(..) | PatKind::TupleStruct(..) | PatKind::Path(..) =>
- match cx.tcx.def_map.borrow().get(&pat.id).unwrap().full_def() {
+ match cx.tcx.expect_def(pat.id) {
Def::Const(..) | Def::AssociatedConst(..) =>
span_bug!(pat.span, "const pattern should've \
been rewritten"),
PatKind::Vec(ref before, ref slice, ref after) =>
match left_ty.sty {
ty::TyArray(_, _) => vec![Single],
- _ => if slice.is_some() {
+ ty::TySlice(_) if slice.is_some() => {
(before.len() + after.len()..max_slice_length+1)
.map(|length| Slice(length))
.collect()
- } else {
- vec![Slice(before.len() + after.len())]
}
+ ty::TySlice(_) => vec!(Slice(before.len() + after.len())),
+ _ => span_bug!(pat.span, "pat_constructors: unexpected \
+ slice pattern type {:?}", left_ty)
},
PatKind::Box(..) | PatKind::Tuple(..) | PatKind::Ref(..) =>
vec![Single],
/// For instance, a tuple pattern (_, 42, Some([])) has the arity of 3.
/// A struct pattern's arity is the number of fields it contains, etc.
pub fn constructor_arity(_cx: &MatchCheckCtxt, ctor: &Constructor, ty: Ty) -> usize {
+ debug!("constructor_arity({:?}, {:?})", ctor, ty);
match ty.sty {
ty::TyTuple(ref fs) => fs.len(),
ty::TyBox(_) => 1,
- ty::TyRef(_, ty::TypeAndMut { ty, .. }) => match ty.sty {
- ty::TySlice(_) => match *ctor {
- Slice(length) => length,
- ConstantValue(_) => 0,
- _ => bug!()
- },
- ty::TyStr => 0,
- _ => 1
+ ty::TySlice(_) => match *ctor {
+ Slice(length) => length,
+ ConstantValue(_) => 0,
+ _ => bug!()
},
+ ty::TyRef(..) => 1,
ty::TyEnum(adt, _) | ty::TyStruct(adt, _) => {
ctor.variant_for_adt(adt).fields.len()
}
}
}
+fn wrap_pat<'a, 'b, 'tcx>(cx: &MatchCheckCtxt<'b, 'tcx>,
+ pat: &'a Pat)
+ -> (&'a Pat, Option<Ty<'tcx>>)
+{
+ let pat_ty = cx.tcx.pat_ty(pat);
+ (pat, Some(match pat.node {
+ PatKind::Binding(hir::BindByRef(..), _, _) => {
+ pat_ty.builtin_deref(false, NoPreference).unwrap().ty
+ }
+ _ => pat_ty
+ }))
+}
+
/// This is the main specialization step. It expands the first pattern in the given row
/// into `arity` patterns based on the constructor. For most patterns, the step is trivial,
/// for instance tuple patterns are flattened and box patterns expand into their inner pattern.
/// different patterns.
/// Structure patterns with a partial wild pattern (Foo { a: 42, .. }) have their missing
/// fields filled with wild patterns.
-pub fn specialize<'a>(cx: &MatchCheckCtxt, r: &[&'a Pat],
- constructor: &Constructor, col: usize, arity: usize) -> Option<Vec<&'a Pat>> {
+pub fn specialize<'a, 'b, 'tcx>(
+ cx: &MatchCheckCtxt<'b, 'tcx>,
+ r: &[(&'a Pat, Option<Ty<'tcx>>)],
+ constructor: &Constructor, col: usize, arity: usize)
+ -> Option<Vec<(&'a Pat, Option<Ty<'tcx>>)>>
+{
+ let pat = raw_pat(r[col].0);
let &Pat {
id: pat_id, ref node, span: pat_span
- } = raw_pat(r[col]);
- let head: Option<Vec<&Pat>> = match *node {
+ } = pat;
+ let wpat = |pat: &'a Pat| wrap_pat(cx, pat);
+ let dummy_pat = (DUMMY_WILD_PAT, None);
+
+ let head: Option<Vec<(&Pat, Option<Ty>)>> = match *node {
PatKind::Binding(..) | PatKind::Wild =>
- Some(vec![DUMMY_WILD_PAT; arity]),
+ Some(vec![dummy_pat; arity]),
PatKind::Path(..) => {
- let def = cx.tcx.def_map.borrow().get(&pat_id).unwrap().full_def();
- match def {
+ match cx.tcx.expect_def(pat_id) {
Def::Const(..) | Def::AssociatedConst(..) =>
span_bug!(pat_span, "const pattern should've \
been rewritten"),
Def::Variant(_, id) if *constructor != Variant(id) => None,
Def::Variant(..) | Def::Struct(..) => Some(Vec::new()),
- _ => span_bug!(pat_span, "specialize: unexpected \
+ def => span_bug!(pat_span, "specialize: unexpected \
definition {:?}", def),
}
}
PatKind::TupleStruct(_, ref args, ddpos) => {
- let def = cx.tcx.def_map.borrow().get(&pat_id).unwrap().full_def();
- match def {
+ match cx.tcx.expect_def(pat_id) {
Def::Const(..) | Def::AssociatedConst(..) =>
span_bug!(pat_span, "const pattern should've \
been rewritten"),
Def::Variant(..) | Def::Struct(..) => {
match ddpos {
Some(ddpos) => {
- let mut pats: Vec<_> = args[..ddpos].iter().map(|p| &**p).collect();
- pats.extend(repeat(DUMMY_WILD_PAT).take(arity - args.len()));
- pats.extend(args[ddpos..].iter().map(|p| &**p));
+ let mut pats: Vec<_> = args[..ddpos].iter().map(|p| {
+ wpat(p)
+ }).collect();
+ pats.extend(repeat((DUMMY_WILD_PAT, None)).take(arity - args.len()));
+ pats.extend(args[ddpos..].iter().map(|p| wpat(p)));
Some(pats)
}
- None => Some(args.iter().map(|p| &**p).collect())
+ None => Some(args.iter().map(|p| wpat(p)).collect())
}
}
_ => None
}
PatKind::Struct(_, ref pattern_fields, _) => {
- let def = cx.tcx.def_map.borrow().get(&pat_id).unwrap().full_def();
let adt = cx.tcx.node_id_to_type(pat_id).ty_adt_def().unwrap();
let variant = constructor.variant_for_adt(adt);
- let def_variant = adt.variant_of_def(def);
+ let def_variant = adt.variant_of_def(cx.tcx.expect_def(pat_id));
if variant.did == def_variant.did {
Some(variant.fields.iter().map(|sf| {
match pattern_fields.iter().find(|f| f.node.name == sf.name) {
- Some(ref f) => &*f.node.pat,
- _ => DUMMY_WILD_PAT
+ Some(ref f) => wpat(&f.node.pat),
+ _ => dummy_pat
}
}).collect())
} else {
}
PatKind::Tuple(ref args, Some(ddpos)) => {
- let mut pats: Vec<_> = args[..ddpos].iter().map(|p| &**p).collect();
- pats.extend(repeat(DUMMY_WILD_PAT).take(arity - args.len()));
- pats.extend(args[ddpos..].iter().map(|p| &**p));
+ let mut pats: Vec<_> = args[..ddpos].iter().map(|p| wpat(p)).collect();
+ pats.extend(repeat(dummy_pat).take(arity - args.len()));
+ pats.extend(args[ddpos..].iter().map(|p| wpat(p)));
Some(pats)
}
PatKind::Tuple(ref args, None) =>
- Some(args.iter().map(|p| &**p).collect()),
+ Some(args.iter().map(|p| wpat(&**p)).collect()),
PatKind::Box(ref inner) | PatKind::Ref(ref inner, _) =>
- Some(vec![&**inner]),
+ Some(vec![wpat(&**inner)]),
PatKind::Lit(ref expr) => {
- let expr_value = eval_const_expr(cx.tcx, &expr);
- match range_covered_by_constructor(constructor, &expr_value, &expr_value) {
- Some(true) => Some(vec![]),
- Some(false) => None,
- None => {
- span_err!(cx.tcx.sess, pat_span, E0298, "mismatched types between arms");
- None
+ if let Some(&ty::TyS { sty: ty::TyRef(_, mt), .. }) = r[col].1 {
+ // HACK: handle string literals. A string literal pattern
+ // serves both as a unary reference pattern and as a
+ // nullary value pattern, depending on the type.
+ Some(vec![(pat, Some(mt.ty))])
+ } else {
+ let expr_value = eval_const_expr(cx.tcx, &expr);
+ match range_covered_by_constructor(constructor, &expr_value, &expr_value) {
+ Some(true) => Some(vec![]),
+ Some(false) => None,
+ None => {
+ span_err!(cx.tcx.sess, pat_span, E0298, "mismatched types between arms");
+ None
+ }
}
}
}
}
PatKind::Vec(ref before, ref slice, ref after) => {
+ let pat_len = before.len() + after.len();
match *constructor {
- // Fixed-length vectors.
Single => {
- let mut pats: Vec<&Pat> = before.iter().map(|p| &**p).collect();
- pats.extend(repeat(DUMMY_WILD_PAT).take(arity - before.len() - after.len()));
- pats.extend(after.iter().map(|p| &**p));
- Some(pats)
- },
- Slice(length) if before.len() + after.len() <= length && slice.is_some() => {
- let mut pats: Vec<&Pat> = before.iter().map(|p| &**p).collect();
- pats.extend(repeat(DUMMY_WILD_PAT).take(arity - before.len() - after.len()));
- pats.extend(after.iter().map(|p| &**p));
- Some(pats)
- },
- Slice(length) if before.len() + after.len() == length => {
- let mut pats: Vec<&Pat> = before.iter().map(|p| &**p).collect();
- pats.extend(after.iter().map(|p| &**p));
- Some(pats)
+ // Fixed-length vectors.
+ Some(
+ before.iter().map(|p| wpat(p)).chain(
+ repeat(dummy_pat).take(arity - pat_len).chain(
+ after.iter().map(|p| wpat(p))
+ )).collect())
},
+ Slice(length) if pat_len <= length && slice.is_some() => {
+ Some(
+ before.iter().map(|p| wpat(p)).chain(
+ repeat(dummy_pat).take(arity - pat_len).chain(
+ after.iter().map(|p| wpat(p))
+ )).collect())
+ }
+ Slice(length) if pat_len == length => {
+ Some(
+ before.iter().map(|p| wpat(p)).chain(
+ after.iter().map(|p| wpat(p))
+ ).collect())
+ }
SliceWithSubslice(prefix, suffix)
if before.len() == prefix
&& after.len() == suffix
&& slice.is_some() => {
- let mut pats: Vec<&Pat> = before.iter().map(|p| &**p).collect();
- pats.extend(after.iter().map(|p| &**p));
+ // this is used by trans::_match only
+ let mut pats: Vec<_> = before.iter()
+ .map(|p| (&**p, None)).collect();
+ pats.extend(after.iter().map(|p| (&**p, None)));
Some(pats)
}
_ => None
}
}
};
+ debug!("specialize({:?}, {:?}) = {:?}", r[col], arity, head);
+
head.map(|mut head| {
head.extend_from_slice(&r[..col]);
head.extend_from_slice(&r[col + 1..]);
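The `specialize` rewrite above threads an optional type through every pattern, but its core step is unchanged and is the classic one from the usefulness algorithm: a row either fails to match the head constructor and is discarded, or its first pattern is flattened into the constructor's sub-patterns, with wildcards padded out to the constructor's arity. A toy sketch over a simplified pattern type (the `Pat` enum and `specialize_pair` helper are hypothetical, for illustration only):

```rust
#[derive(Clone, Debug, PartialEq)]
enum Pat {
    Wild,
    Bool(bool),
    Pair(Box<Pat>, Box<Pat>),
}

// Specialize `row` on the `Pair` constructor (arity 2).
fn specialize_pair(row: &[Pat]) -> Option<Vec<Pat>> {
    let mut out = match row[0] {
        // Wildcards match any constructor: pad with `arity` wildcards.
        Pat::Wild => vec![Pat::Wild, Pat::Wild],
        // A pair pattern is flattened into its two sub-patterns.
        Pat::Pair(ref a, ref b) => vec![(**a).clone(), (**b).clone()],
        // Any other head constructor does not match: drop the row.
        _ => return None,
    };
    // The remaining columns of the row are kept as-is.
    out.extend_from_slice(&row[1..]);
    Some(out)
}

fn main() {
    let row = vec![
        Pat::Pair(Box::new(Pat::Bool(true)), Box::new(Pat::Wild)),
        Pat::Bool(false),
    ];
    // The pair is flattened; the trailing column survives unchanged.
    assert_eq!(
        specialize_pair(&row),
        Some(vec![Pat::Bool(true), Pat::Wild, Pat::Bool(false)])
    );
    // A non-pair head constructor eliminates the row.
    assert_eq!(specialize_pair(&[Pat::Bool(true)]), None);
    println!("ok");
}
```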
fn is_refutable<A, F>(cx: &MatchCheckCtxt, pat: &Pat, refutable: F) -> Option<A> where
F: FnOnce(&Pat) -> A,
{
- let pats = Matrix(vec!(vec!(pat)));
- match is_useful(cx, &pats, &[DUMMY_WILD_PAT], ConstructWitness) {
+ let pats = Matrix(vec!(vec!(wrap_pat(cx, pat))));
+ match is_useful(cx, &pats, &[(DUMMY_WILD_PAT, None)], ConstructWitness) {
UsefulWithWitness(pats) => Some(refutable(&pats[0])),
NotUseful => None,
Useful => bug!()
use rustc::hir::map::blocks::FnLikeNode;
use rustc::middle::cstore::{self, InlinedItem};
use rustc::traits;
-use rustc::hir::def::Def;
+use rustc::hir::def::{Def, PathResolution};
use rustc::hir::def_id::DefId;
use rustc::hir::pat_util::def_to_path;
use rustc::ty::{self, Ty, TyCtxt, subst};
.collect()), None),
hir::ExprCall(ref callee, ref args) => {
- let def = *tcx.def_map.borrow().get(&callee.id).unwrap();
+ let def = tcx.expect_def(callee.id);
if let Vacant(entry) = tcx.def_map.borrow_mut().entry(expr.id) {
- entry.insert(def);
+ entry.insert(PathResolution::new(def));
}
- let path = match def.full_def() {
+ let path = match def {
Def::Struct(def_id) => def_to_path(tcx, def_id),
Def::Variant(_, variant_did) => def_to_path(tcx, variant_did),
Def::Fn(..) | Def::Method(..) => return Ok(P(hir::Pat {
}
hir::ExprPath(_, ref path) => {
- let opt_def = tcx.def_map.borrow().get(&expr.id).map(|d| d.full_def());
- match opt_def {
- Some(Def::Struct(..)) | Some(Def::Variant(..)) =>
- PatKind::Path(path.clone()),
- Some(Def::Const(def_id)) |
- Some(Def::AssociatedConst(def_id)) => {
+ match tcx.expect_def(expr.id) {
+ Def::Struct(..) | Def::Variant(..) => PatKind::Path(path.clone()),
+ Def::Const(def_id) | Def::AssociatedConst(def_id) => {
let substs = Some(tcx.node_id_item_substs(expr.id).substs);
let (expr, _ty) = lookup_const_by_id(tcx, def_id, substs).unwrap();
return const_expr_to_pat(tcx, expr, pat_id, span);
}
}
hir::ExprPath(..) => {
- let opt_def = if let Some(def) = tcx.def_map.borrow().get(&e.id) {
- // After type-checking, def_map contains definition of the
- // item referred to by the path. During type-checking, it
- // can contain the raw output of path resolution, which
- // might be a partially resolved path.
- // FIXME: There's probably a better way to make sure we don't
- // panic here.
- if def.depth != 0 {
- signal!(e, UnresolvedPath);
- }
- def.full_def()
- } else {
- signal!(e, NonConstPath);
- };
- match opt_def {
+ // This function can be used before type checking when not all paths are fully resolved.
+ // FIXME: There's probably a better way to make sure we don't panic here.
+ let resolution = tcx.expect_resolution(e.id);
+ if resolution.depth != 0 {
+ signal!(e, UnresolvedPath);
+ }
+ match resolution.base_def {
Def::Const(def_id) |
Def::AssociatedConst(def_id) => {
let substs = if let ExprTypeChecked = ty_hint {
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use std::iter::{self, FromIterator};
+use std::slice;
+use std::marker::PhantomData;
+use std::ops::{Index, IndexMut, Range};
+use std::fmt;
+use std::vec;
+
+use rustc_serialize as serialize;
+
+/// Represents some newtyped `usize` wrapper.
+///
+/// (purpose: avoid mixing indexes for different bitvector domains.)
+pub trait Idx: Copy + 'static {
+ fn new(usize) -> Self;
+ fn index(self) -> usize;
+}
+
+impl Idx for usize {
+ fn new(idx: usize) -> Self { idx }
+ fn index(self) -> usize { self }
+}
+
+#[derive(Clone)]
+pub struct IndexVec<I: Idx, T> {
+ pub raw: Vec<T>,
+ _marker: PhantomData<Fn(&I)>
+}
+
+impl<I: Idx, T: serialize::Encodable> serialize::Encodable for IndexVec<I, T> {
+ fn encode<S: serialize::Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
+ serialize::Encodable::encode(&self.raw, s)
+ }
+}
+
+impl<I: Idx, T: serialize::Decodable> serialize::Decodable for IndexVec<I, T> {
+ fn decode<D: serialize::Decoder>(d: &mut D) -> Result<Self, D::Error> {
+ serialize::Decodable::decode(d).map(|v| {
+ IndexVec { raw: v, _marker: PhantomData }
+ })
+ }
+}
+
+impl<I: Idx, T: fmt::Debug> fmt::Debug for IndexVec<I, T> {
+ fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
+ fmt::Debug::fmt(&self.raw, fmt)
+ }
+}
+
+pub type Enumerated<I, J> = iter::Map<iter::Enumerate<J>, IntoIdx<I>>;
+
+impl<I: Idx, T> IndexVec<I, T> {
+ #[inline]
+ pub fn new() -> Self {
+ IndexVec { raw: Vec::new(), _marker: PhantomData }
+ }
+
+ #[inline]
+ pub fn with_capacity(capacity: usize) -> Self {
+ IndexVec { raw: Vec::with_capacity(capacity), _marker: PhantomData }
+ }
+
+ #[inline]
+ pub fn from_elem<S>(elem: T, universe: &IndexVec<I, S>) -> Self
+ where T: Clone
+ {
+ IndexVec { raw: vec![elem; universe.len()], _marker: PhantomData }
+ }
+
+ #[inline]
+ pub fn push(&mut self, d: T) -> I {
+ let idx = I::new(self.len());
+ self.raw.push(d);
+ idx
+ }
+
+ #[inline]
+ pub fn len(&self) -> usize {
+ self.raw.len()
+ }
+
+ #[inline]
+ pub fn is_empty(&self) -> bool {
+ self.raw.is_empty()
+ }
+
+ #[inline]
+ pub fn into_iter(self) -> vec::IntoIter<T> {
+ self.raw.into_iter()
+ }
+
+ #[inline]
+ pub fn into_iter_enumerated(self) -> Enumerated<I, vec::IntoIter<T>>
+ {
+ self.raw.into_iter().enumerate().map(IntoIdx { _marker: PhantomData })
+ }
+
+ #[inline]
+ pub fn iter(&self) -> slice::Iter<T> {
+ self.raw.iter()
+ }
+
+ #[inline]
+ pub fn iter_enumerated(&self) -> Enumerated<I, slice::Iter<T>>
+ {
+ self.raw.iter().enumerate().map(IntoIdx { _marker: PhantomData })
+ }
+
+ #[inline]
+ pub fn indices(&self) -> iter::Map<Range<usize>, IntoIdx<I>> {
+ (0..self.len()).map(IntoIdx { _marker: PhantomData })
+ }
+
+ #[inline]
+ pub fn iter_mut(&mut self) -> slice::IterMut<T> {
+ self.raw.iter_mut()
+ }
+
+ #[inline]
+ pub fn iter_enumerated_mut(&mut self) -> Enumerated<I, slice::IterMut<T>>
+ {
+ self.raw.iter_mut().enumerate().map(IntoIdx { _marker: PhantomData })
+ }
+
+ #[inline]
+ pub fn last(&self) -> Option<I> {
+ self.len().checked_sub(1).map(I::new)
+ }
+}
+
+impl<I: Idx, T> Index<I> for IndexVec<I, T> {
+ type Output = T;
+
+ #[inline]
+ fn index(&self, index: I) -> &T {
+ &self.raw[index.index()]
+ }
+}
+
+impl<I: Idx, T> IndexMut<I> for IndexVec<I, T> {
+ #[inline]
+ fn index_mut(&mut self, index: I) -> &mut T {
+ &mut self.raw[index.index()]
+ }
+}
+
+impl<I: Idx, T> Extend<T> for IndexVec<I, T> {
+ #[inline]
+ fn extend<J: IntoIterator<Item = T>>(&mut self, iter: J) {
+ self.raw.extend(iter);
+ }
+}
+
+impl<I: Idx, T> FromIterator<T> for IndexVec<I, T> {
+ #[inline]
+ fn from_iter<J>(iter: J) -> Self where J: IntoIterator<Item=T> {
+ IndexVec { raw: FromIterator::from_iter(iter), _marker: PhantomData }
+ }
+}
+
+impl<I: Idx, T> IntoIterator for IndexVec<I, T> {
+ type Item = T;
+ type IntoIter = vec::IntoIter<T>;
+
+ #[inline]
+ fn into_iter(self) -> vec::IntoIter<T> {
+ self.raw.into_iter()
+ }
+
+}
+
+impl<'a, I: Idx, T> IntoIterator for &'a IndexVec<I, T> {
+ type Item = &'a T;
+ type IntoIter = slice::Iter<'a, T>;
+
+ #[inline]
+ fn into_iter(self) -> slice::Iter<'a, T> {
+ self.raw.iter()
+ }
+}
+
+impl<'a, I: Idx, T> IntoIterator for &'a mut IndexVec<I, T> {
+ type Item = &'a mut T;
+ type IntoIter = slice::IterMut<'a, T>;
+
+ #[inline]
+ fn into_iter(mut self) -> slice::IterMut<'a, T> {
+ self.raw.iter_mut()
+ }
+}
+
+pub struct IntoIdx<I: Idx> { _marker: PhantomData<fn(&I)> }
+impl<I: Idx, T> FnOnce<((usize, T),)> for IntoIdx<I> {
+ type Output = (I, T);
+
+ extern "rust-call" fn call_once(self, ((n, t),): ((usize, T),)) -> Self::Output {
+ (I::new(n), t)
+ }
+}
+
+impl<I: Idx, T> FnMut<((usize, T),)> for IntoIdx<I> {
+ extern "rust-call" fn call_mut(&mut self, ((n, t),): ((usize, T),)) -> Self::Output {
+ (I::new(n), t)
+ }
+}
+
+impl<I: Idx> FnOnce<(usize,)> for IntoIdx<I> {
+ type Output = I;
+
+ extern "rust-call" fn call_once(self, (n,): (usize,)) -> Self::Output {
+ I::new(n)
+ }
+}
+
+impl<I: Idx> FnMut<(usize,)> for IntoIdx<I> {
+ extern "rust-call" fn call_mut(&mut self, (n,): (usize,)) -> Self::Output {
+ I::new(n)
+ }
+}
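The new `indexed_vec.rs` above centers on `IndexVec<I, T>`: a `Vec` that can only be indexed by one newtyped index type, so block indices, temp indices, and so on cannot be mixed up. A cut-down, self-contained sketch of the same shape (minus the unstable `Fn`-trait plumbing; the `BasicBlock` newtype is illustrative):

```rust
use std::marker::PhantomData;
use std::ops::{Index, IndexMut};

trait Idx: Copy + 'static {
    fn new(idx: usize) -> Self;
    fn index(self) -> usize;
}

#[derive(Copy, Clone, Debug, PartialEq)]
struct BasicBlock(usize);
impl Idx for BasicBlock {
    fn new(idx: usize) -> Self { BasicBlock(idx) }
    fn index(self) -> usize { self.0 }
}

struct IndexVec<I: Idx, T> {
    raw: Vec<T>,
    // Ties the container to `I` without owning one.
    _marker: PhantomData<fn(&I)>,
}

impl<I: Idx, T> IndexVec<I, T> {
    fn new() -> Self {
        IndexVec { raw: Vec::new(), _marker: PhantomData }
    }

    /// Pushing hands back the typed index of the new element.
    fn push(&mut self, d: T) -> I {
        let idx = I::new(self.raw.len());
        self.raw.push(d);
        idx
    }

    fn len(&self) -> usize { self.raw.len() }
}

// Indexing takes the typed index, not a bare usize.
impl<I: Idx, T> Index<I> for IndexVec<I, T> {
    type Output = T;
    fn index(&self, index: I) -> &T { &self.raw[index.index()] }
}

impl<I: Idx, T> IndexMut<I> for IndexVec<I, T> {
    fn index_mut(&mut self, index: I) -> &mut T { &mut self.raw[index.index()] }
}

fn main() {
    let mut blocks: IndexVec<BasicBlock, &str> = IndexVec::new();
    let entry = blocks.push("entry");
    let exit = blocks.push("exit");
    assert_eq!(entry, BasicBlock(0));
    assert_eq!(blocks[exit], "exit");
    blocks[entry] = "start";
    assert_eq!(blocks[entry], "start");
    assert_eq!(blocks.len(), 2);
    println!("ok");
}
```

This is why the MirPatch hunks earlier can write `self.patch_map[block]` instead of `self.patch_map[block.index()]`: the `Index`/`IndexMut` impls absorb the conversion, and a wrong index type becomes a compile error.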
pub mod bitvec;
pub mod graph;
pub mod ivar;
+pub mod indexed_vec;
pub mod obligation_forest;
pub mod snapshot_map;
pub mod snapshot_vec;
})?;
krate = time(time_passes, "crate injection", || {
- syntax::std_inject::maybe_inject_crates_ref(krate, sess.opts.alt_std_name.clone())
+ let alt_std_name = sess.opts.alt_std_name.clone();
+ syntax::std_inject::maybe_inject_crates_ref(&sess.parse_sess, krate, alt_std_name)
});
- let macros = time(time_passes,
- "macro loading",
- || macro_import::read_macro_defs(sess, &cstore, &krate, crate_name));
-
let mut addl_plugins = Some(addl_plugins);
let registrars = time(time_passes, "plugin loading", || {
plugin::load::load_plugins(sess,
recursion_limit: sess.recursion_limit.get(),
trace_mac: sess.opts.debugging_opts.trace_macros,
};
+ let mut loader = macro_import::MacroLoader::new(sess, &cstore, crate_name);
let mut ecx = syntax::ext::base::ExtCtxt::new(&sess.parse_sess,
krate.config.clone(),
cfg,
- &mut feature_gated_cfgs);
+ &mut feature_gated_cfgs,
+ &mut loader);
syntax_ext::register_builtins(&mut ecx.syntax_env);
let (ret, macro_names) = syntax::ext::expand::expand_crate(ecx,
- macros,
syntax_exts,
krate);
if cfg!(windows) {
sess.diagnostic())
});
- krate = time(time_passes,
- "prelude injection",
- || syntax::std_inject::maybe_inject_prelude(&sess.parse_sess, krate));
-
time(time_passes,
"checking for inline asm in case the target doesn't support it",
|| no_asm::check_crate(sess, &krate));
time(time_passes, "MIR passes", || {
let mut passes = sess.mir_passes.borrow_mut();
// Push all the built-in passes.
- passes.push_pass(box mir::transform::remove_dead_blocks::RemoveDeadBlocks);
+ passes.push_hook(box mir::transform::dump_mir::DumpMir);
+ passes.push_pass(box mir::transform::simplify_cfg::SimplifyCfg::new("initial"));
passes.push_pass(box mir::transform::qualify_consts::QualifyAndPromoteConstants);
passes.push_pass(box mir::transform::type_check::TypeckMir);
- passes.push_pass(box mir::transform::simplify_cfg::SimplifyCfg);
- passes.push_pass(box mir::transform::remove_dead_blocks::RemoveDeadBlocks);
+ passes.push_pass(
+ box mir::transform::simplify_branches::SimplifyBranches::new("initial"));
+ passes.push_pass(box mir::transform::simplify_cfg::SimplifyCfg::new("qualify-consts"));
// And run everything.
passes.run_passes(tcx, &mut mir_map);
});
// to LLVM code.
time(time_passes, "Prepare MIR codegen passes", || {
let mut passes = ::rustc::mir::transform::Passes::new();
+ passes.push_hook(box mir::transform::dump_mir::DumpMir);
passes.push_pass(box mir::transform::no_landing_pads::NoLandingPads);
- passes.push_pass(box mir::transform::remove_dead_blocks::RemoveDeadBlocks);
+ passes.push_pass(box mir::transform::simplify_cfg::SimplifyCfg::new("no-landing-pads"));
+
passes.push_pass(box mir::transform::erase_regions::EraseRegions);
+
passes.push_pass(box mir::transform::add_call_guards::AddCallGuards);
passes.push_pass(box borrowck::ElaborateDrops);
passes.push_pass(box mir::transform::no_landing_pads::NoLandingPads);
- passes.push_pass(box mir::transform::simplify_cfg::SimplifyCfg);
+ passes.push_pass(box mir::transform::simplify_cfg::SimplifyCfg::new("elaborate-drops"));
+
passes.push_pass(box mir::transform::add_call_guards::AddCallGuards);
- passes.push_pass(box mir::transform::dump_mir::DumpMir("pre_trans"));
+ passes.push_pass(box mir::transform::dump_mir::Marker("PreTrans"));
+
passes.run_passes(tcx, &mut mir_map);
});
fn check_pat(&mut self, cx: &LateContext, p: &hir::Pat) {
if let &PatKind::Binding(_, ref path1, _) = &p.node {
// Exclude parameter names from foreign functions (they have no `Def`)
- if cx.tcx.def_map.borrow().get(&p.id).map(|d| d.full_def()).is_some() {
+ if cx.tcx.expect_def_or_none(p.id).is_some() {
self.check_snake_case(cx, "variable", &path1.node.as_str(), Some(p.span));
}
}
// Lint for constants that look like binding identifiers (#7526)
if let PatKind::Path(ref path) = p.node {
if !path.global && path.segments.len() == 1 && path.segments[0].parameters.is_empty() {
- if let Some(Def::Const(..)) = cx.tcx.def_map.borrow().get(&p.id)
- .map(|d| d.full_def()) {
+ if let Def::Const(..) = cx.tcx.expect_def(p.id) {
NonUpperCaseGlobals::check_upper_case(cx, "constant in pattern",
path.segments[0].name, path.span);
}
impl LateLintPass for NonShorthandFieldPatterns {
fn check_pat(&mut self, cx: &LateContext, pat: &hir::Pat) {
- let def_map = cx.tcx.def_map.borrow();
- if let PatKind::Struct(_, ref v, _) = pat.node {
- let field_pats = v.iter().filter(|fieldpat| {
+ if let PatKind::Struct(_, ref field_pats, _) = pat.node {
+ for fieldpat in field_pats {
if fieldpat.node.is_shorthand {
- return false;
- }
- let def = def_map.get(&fieldpat.node.pat.id).map(|d| d.full_def());
- if let Some(def_id) = cx.tcx.map.opt_local_def_id(fieldpat.node.pat.id) {
- def == Some(Def::Local(def_id, fieldpat.node.pat.id))
- } else {
- false
+ continue;
}
- });
- for fieldpat in field_pats {
if let PatKind::Binding(_, ident, None) = fieldpat.node.pat.node {
if ident.node.unhygienize() == fieldpat.node.name {
cx.span_lint(NON_SHORTHAND_FIELD_PATTERNS, fieldpat.span,
hir::ItemImpl(_, _, _, Some(ref trait_ref), _, ref impl_items) => {
// If the trait is private, add the impl items to private_traits so they don't get
// reported for missing docs.
- let real_trait = cx.tcx.trait_ref_to_def_id(trait_ref);
+ let real_trait = cx.tcx.expect_def(trait_ref.ref_id).def_id();
if let Some(node_id) = cx.tcx.map.as_local_node_id(real_trait) {
match cx.tcx.map.find(node_id) {
Some(hir_map::NodeItem(item)) => if item.vis == hir::Visibility::Inherited {
id: ast::NodeId) -> bool {
match tcx.map.get(id) {
hir_map::NodeExpr(&hir::Expr { node: hir::ExprCall(ref callee, _), .. }) => {
- tcx.def_map
- .borrow()
- .get(&callee.id)
- .map_or(false,
- |def| def.def_id() == tcx.map.local_def_id(fn_id))
+ tcx.expect_def_or_none(callee.id).map_or(false, |def| {
+ def.def_id() == tcx.map.local_def_id(fn_id)
+ })
}
_ => false
}
// Check for calls to methods via explicit paths (e.g. `T::method()`).
match tcx.map.get(id) {
hir_map::NodeExpr(&hir::Expr { node: hir::ExprCall(ref callee, _), .. }) => {
- match tcx.def_map.borrow().get(&callee.id).map(|d| d.full_def()) {
+ // The callee is an arbitrary expression;
+ // it doesn't necessarily have a definition.
+ match tcx.expect_def_or_none(callee.id) {
Some(Def::Method(def_id)) => {
let item_substs = tcx.node_id_item_substs(callee.id);
method_call_refers_to_method(
hir::ExprPath(..) => (),
_ => return None
}
- if let Def::Fn(did) = cx.tcx.resolve_expr(expr) {
+ if let Def::Fn(did) = cx.tcx.expect_def(expr.id) {
if !def_id_is_transmute(cx, did) {
return None;
}
use middle::const_val::ConstVal;
use rustc_const_eval::eval_const_expr_partial;
use rustc_const_eval::EvalHint::ExprTypeChecked;
+use util::common::slice_pat;
use util::nodemap::{FnvHashSet};
use lint::{LateContext, LintContext, LintArray};
use lint::{LintPass, LateLintPass};
// Check for a repr() attribute to specify the size of the
// discriminant.
let repr_hints = cx.lookup_repr_hints(def.did);
- match &**repr_hints {
- [] => {
+ match slice_pat(&&**repr_hints) {
+ &[] => {
// Special-case types like `Option<extern fn()>`.
if !is_repr_nullable_ptr(cx, def, substs) {
return FfiUnsafe(
the type")
}
}
- [ref hint] => {
+ &[ref hint] => {
if !hint.is_ffi_safe() {
// FIXME: This shouldn't be reachable: we should check
// this earlier.
debug!("Encoding side tables for id {}", id);
- if let Some(def) = tcx.def_map.borrow().get(&id).map(|d| d.full_def()) {
+ if let Some(def) = tcx.expect_def_or_none(id) {
rbml_w.tag(c::tag_table_def, |rbml_w| {
rbml_w.id(id);
def.encode(rbml_w).unwrap();
match value {
c::tag_table_def => {
let def = decode_def(dcx, val_dsr);
- dcx.tcx.def_map.borrow_mut().insert(id, def::PathResolution {
- base_def: def,
- depth: 0
- });
+ dcx.tcx.def_map.borrow_mut().insert(id, def::PathResolution::new(def));
}
c::tag_table_node_type => {
let ty = val_dsr.read_ty(dcx);
use syntax::parse::token;
use syntax::ast;
use syntax::attr;
-use syntax::visit;
-use syntax::visit::Visitor;
use syntax::attr::AttrMetaMethods;
+use syntax::ext;
-struct MacroLoader<'a> {
+pub struct MacroLoader<'a> {
sess: &'a Session,
- span_whitelist: HashSet<Span>,
reader: CrateReader<'a>,
- macros: Vec<ast::MacroDef>,
}
impl<'a> MacroLoader<'a> {
- fn new(sess: &'a Session, cstore: &'a CStore, crate_name: &str) -> MacroLoader<'a> {
+ pub fn new(sess: &'a Session, cstore: &'a CStore, crate_name: &str) -> MacroLoader<'a> {
MacroLoader {
sess: sess,
- span_whitelist: HashSet::new(),
reader: CrateReader::new(sess, cstore, crate_name),
- macros: vec![],
}
}
}
span_err!(a, b, E0467, "bad macro reexport");
}
-/// Read exported macros.
-pub fn read_macro_defs(sess: &Session,
- cstore: &CStore,
- krate: &ast::Crate,
- crate_name: &str)
- -> Vec<ast::MacroDef>
-{
- let mut loader = MacroLoader::new(sess, cstore, crate_name);
-
- // We need to error on `#[macro_use] extern crate` when it isn't at the
- // crate root, because `$crate` won't work properly. Identify these by
- // spans, because the crate map isn't set up yet.
- for item in &krate.module.items {
- if let ast::ItemKind::ExternCrate(_) = item.node {
- loader.span_whitelist.insert(item.span);
- }
- }
-
- visit::walk_crate(&mut loader, krate);
-
- loader.macros
-}
-
pub type MacroSelection = HashMap<token::InternedString, Span>;
-// note that macros aren't expanded yet, and therefore macros can't add macro imports.
-impl<'a, 'v> Visitor<'v> for MacroLoader<'a> {
- fn visit_item(&mut self, item: &ast::Item) {
- // We're only interested in `extern crate`.
- match item.node {
- ast::ItemKind::ExternCrate(_) => {}
- _ => {
- visit::walk_item(self, item);
- return;
- }
- }
-
+impl<'a> ext::base::MacroLoader for MacroLoader<'a> {
+ fn load_crate(&mut self, extern_crate: &ast::Item, allows_macros: bool) -> Vec<ast::MacroDef> {
// Parse the attributes relating to macros.
let mut import = Some(HashMap::new()); // None => load all
let mut reexport = HashMap::new();
- for attr in &item.attrs {
+ for attr in &extern_crate.attrs {
let mut used = true;
match &attr.name()[..] {
"macro_use" => {
}
}
- self.load_macros(item, import, reexport)
- }
-
- fn visit_mac(&mut self, _: &ast::Mac) {
- // bummer... can't see macro imports inside macros.
- // do nothing.
+ self.load_macros(extern_crate, allows_macros, import, reexport)
}
}
impl<'a> MacroLoader<'a> {
fn load_macros<'b>(&mut self,
vi: &ast::Item,
+ allows_macros: bool,
import: Option<MacroSelection>,
- reexport: MacroSelection) {
+ reexport: MacroSelection)
+ -> Vec<ast::MacroDef> {
if let Some(sel) = import.as_ref() {
if sel.is_empty() && reexport.is_empty() {
- return;
+ return Vec::new();
}
}
- if !self.span_whitelist.contains(&vi.span) {
+ if !allows_macros {
span_err!(self.sess, vi.span, E0468,
"an `extern crate` loading macros must be at the crate root");
- return;
+ return Vec::new();
}
- let macros = self.reader.read_exported_macros(vi);
+ let mut macros = Vec::new();
let mut seen = HashSet::new();
- for mut def in macros {
+ for mut def in self.reader.read_exported_macros(vi) {
let name = def.ident.name.as_str();
def.use_locally = match import.as_ref() {
def.allow_internal_unstable = attr::contains_name(&def.attrs,
"allow_internal_unstable");
debug!("load_macros: loaded: {:?}", def);
- self.macros.push(def);
+ macros.push(def);
seen.insert(name);
}
"reexported macro not found");
}
}
+
+ macros
}
}
impl<'tcx> CFG<'tcx> {
pub fn block_data(&self, blk: BasicBlock) -> &BasicBlockData<'tcx> {
- &self.basic_blocks[blk.index()]
+ &self.basic_blocks[blk]
}
pub fn block_data_mut(&mut self, blk: BasicBlock) -> &mut BasicBlockData<'tcx> {
- &mut self.basic_blocks[blk.index()]
+ &mut self.basic_blocks[blk]
}
pub fn start_new_block(&mut self) -> BasicBlock {
- let node_index = self.basic_blocks.len();
- self.basic_blocks.push(BasicBlockData::new(None));
- BasicBlock::new(node_index)
+ self.basic_blocks.push(BasicBlockData::new(None))
}
pub fn start_new_cleanup_block(&mut self) -> BasicBlock {
block: BasicBlock,
source_info: SourceInfo,
kind: TerminatorKind<'tcx>) {
+ debug!("terminating block {:?} <- {:?}", block, kind);
debug_assert!(self.block_data(block).terminator.is_none(),
- "terminate: block {:?} already has a terminator set", block);
+ "terminate: block {:?}={:?} already has a terminator set",
+ block,
+ self.block_data(block));
self.block_data_mut(block).terminator = Some(Terminator {
source_info: source_info,
kind: kind,
use hair::*;
use rustc::mir::repr::*;
+use rustc_data_structures::indexed_vec::Idx;
+
impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
/// Compile `expr`, yielding an lvalue that we can move from etc.
pub fn as_lvalue<M>(&mut self,
success.and(slice.index(idx))
}
ExprKind::SelfRef => {
- block.and(Lvalue::Arg(0))
+ block.and(Lvalue::Arg(Arg::new(0)))
}
ExprKind::VarRef { id } => {
let index = this.var_indices[&id];
use rustc_const_math::{ConstMathErr, Op};
use rustc_data_structures::fnv::FnvHashMap;
+use rustc_data_structures::indexed_vec::Idx;
use build::{BlockAnd, BlockAndExtension, Builder};
use build::expr::category::{Category, RvalueFunc};
// branch to the appropriate arm block
let otherwise = self.match_candidates(span, &mut arm_blocks, candidates, block);
- // because all matches are exhaustive, in principle we expect
- // an empty vector to be returned here, but the algorithm is
- // not entirely precise
if !otherwise.is_empty() {
- let join_block = self.join_otherwise_blocks(span, otherwise);
- self.panic(join_block, "something about matches algorithm not being precise", span);
+ // All matches are exhaustive. However, because some matches
+ // only have exponentially-large exhaustive decision trees, we
+ // sometimes generate an inexhaustive decision tree.
+ //
+ // In that case, the inexhaustive tips of the decision tree
+ // can't be reached - terminate them with an `unreachable`.
+ let source_info = self.source_info(span);
+
+ let mut otherwise = otherwise;
+ otherwise.sort();
+ otherwise.dedup(); // variant switches can introduce duplicate target blocks
+ for block in otherwise {
+ self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);
+ }
}
// all the arm blocks will rejoin here
/// simpler (and, in fact, irrefutable).
///
/// But there may also be candidates that the test just doesn't
- /// apply to. For example, consider the case of #29740:
+ /// apply to. The classical example involves wildcards:
///
/// ```rust,ignore
+ /// match (x, y, z) {
+ /// (true, _, true) => true, // (0)
+ /// (_, true, _) => true, // (1)
+ /// (false, false, _) => false, // (2)
+ /// (true, _, false) => false, // (3)
+ /// }
+ /// ```
+ ///
+ /// In that case, after we test on `x`, there are 2 overlapping candidate
+ /// sets:
+ ///
+ /// - If the outcome is that `x` is true, candidates 0, 1, and 3
+ /// - If the outcome is that `x` is false, candidates 1 and 2
+ ///
+ /// Here, the traditional "decision tree" method would generate 2
+ /// separate code-paths for the 2 separate cases.
+ ///
+ /// In some cases, this duplication can create an exponential amount of
+ /// code. This is most easily seen by noticing that this method
+ /// terminates with precisely the reachable arms being reachable - but
+ /// deciding which arms are reachable is trivially NP-complete:
+ ///
+ /// ```rust,ignore
+ /// match (var0, var1, var2, var3, ..) {
+ /// (true, _, _, false, true, ...) => false,
+ /// (_, true, true, false, _, ...) => false,
+ /// (false, _, false, false, _, ...) => false,
+ /// ...
+ /// _ => true
+ /// }
+ /// ```
+ ///
+ /// Here the last arm is reachable only if there is an assignment to
+ /// the variables that does not match any of the literals. Therefore,
+ /// compilation would take an exponential amount of time in some cases.
+ ///
+ /// That kind of exponential worst-case might not occur in practice, but
+ /// our simplistic treatment of constants and guards would make it occur
+ /// in very common situations - for example #29740:
+ ///
+ /// ```rust,ignore
/// match x {
- /// "foo" => ...,
- /// "bar" => ...,
- /// "baz" => ...,
- /// _ => ...,
+ /// "foo" if foo_guard => ...,
+ /// "bar" if bar_guard => ...,
+ /// "baz" if baz_guard => ...,
+ /// ...
/// }
/// ```
///
- /// Here the match-pair we are testing will be `x @ "foo"`, and we
- /// will generate an `Eq` test. Because `"bar"` and `"baz"` are different
- /// constants, we will decide that these later candidates are just not
- /// informed by the eq test. So we'll wind up with three candidate sets:
+ /// Here we first test the match-pair `x @ "foo"`, which is an `Eq` test.
+ ///
+ /// It might seem that we would end up with 2 disjoint candidate
+ /// sets, consisting of the first candidate or the other 3, but our
+ /// algorithm doesn't reason about "foo" being distinct from the other
+ /// constants; it considers the latter arms to potentially match after
+ /// both outcomes, which obviously leads to an exponential number
+ /// of tests.
///
- /// - If outcome is that `x == "foo"` (one candidate, derived from `x @ "foo"`)
- /// - If outcome is that `x != "foo"` (empty list of candidates)
- /// - Otherwise (three candidates, `x @ "bar"`, `x @ "baz"`, `x @
- /// _`). Here we have the invariant that everything in the
- /// otherwise list is of **lower priority** than the stuff in the
- /// other lists.
+ /// To avoid these kinds of problems, our algorithm tries to ensure
+ /// the number of generated tests is linear. When we do a k-way test,
+ /// we return an additional "unmatched" set alongside the obvious `k`
+ /// sets. When we encounter a candidate that would be present in more
+ /// than one of the sets, we put it and all candidates below it into the
+ /// "unmatched" set. This ensures these `k+1` sets are disjoint.
///
- /// So we'll compile the test. For each outcome of the test, we
- /// recursively call `match_candidates` with the corresponding set
- /// of candidates. But note that this set is now inexhaustive: for
- /// example, in the case where the test returns false, there are
- /// NO candidates, even though there is stll a value to be
- /// matched. So we'll collect the return values from
- /// `match_candidates`, which are the blocks where control-flow
- /// goes if none of the candidates matched. At this point, we can
- /// continue with the "otherwise" list.
+ /// After we perform our test, we branch into the appropriate candidate
+ /// set and recurse with `match_candidates`. These sub-matches are
+ /// obviously inexhaustive - as we discarded our otherwise set - so
+ /// we set their continuation to do `match_candidates` on the
+ /// "unmatched" set (which is again inexhaustive).
///
/// If you apply this to the above test, you basically wind up
/// with an if-else-if chain, testing each candidate in turn,
/// which is precisely what we want.
+ ///
+ /// In addition to avoiding exponential-time blowups, this algorithm
+ /// also has the nice property that each guard and arm is only generated
+ /// once.
fn test_candidates<'pat>(&mut self,
span: Span,
arm_blocks: &mut ArmBlocks,
name: Name,
var_id: NodeId,
var_ty: Ty<'tcx>)
- -> u32
+ -> Var
{
debug!("declare_binding(var_id={:?}, name={:?}, var_ty={:?}, source_info={:?})",
var_id, name, var_ty, source_info);
- let index = self.var_decls.len();
- self.var_decls.push(VarDecl::<'tcx> {
+ let var = self.var_decls.push(VarDecl::<'tcx> {
source_info: source_info,
mutability: mutability,
name: name,
ty: var_ty.clone(),
});
- let index = index as u32;
let extent = self.extent_of_innermost_scope();
- self.schedule_drop(source_info.span, extent, &Lvalue::Var(index), var_ty);
- self.var_indices.insert(var_id, index);
+ self.schedule_drop(source_info.span, extent, &Lvalue::Var(var), var_ty);
+ self.var_indices.insert(var_id, var);
- debug!("declare_binding: index={:?}", index);
+ debug!("declare_binding: var={:?}", var);
- index
+ var
}
}
impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
pub fn simplify_candidate<'pat>(&mut self,
- mut block: BasicBlock,
+ block: BasicBlock,
candidate: &mut Candidate<'pat, 'tcx>)
-> BlockAnd<()> {
// repeatedly simplify match pairs until fixed point is reached
let match_pairs = mem::replace(&mut candidate.match_pairs, vec![]);
let mut progress = match_pairs.len(); // count how many were simplified
for match_pair in match_pairs {
- match self.simplify_match_pair(block, match_pair, candidate) {
- Ok(b) => {
- block = b;
- }
+ match self.simplify_match_pair(match_pair, candidate) {
+ Ok(()) => {}
Err(match_pair) => {
candidate.match_pairs.push(match_pair);
progress -= 1; // this one was not simplified
/// possible, Err is returned and no changes are made to
/// candidate.
fn simplify_match_pair<'pat>(&mut self,
- mut block: BasicBlock,
match_pair: MatchPair<'pat, 'tcx>,
candidate: &mut Candidate<'pat, 'tcx>)
- -> Result<BasicBlock, MatchPair<'pat, 'tcx>> {
+ -> Result<(), MatchPair<'pat, 'tcx>> {
match *match_pair.pattern.kind {
PatternKind::Wild => {
// nothing left to do
- Ok(block)
+ Ok(())
}
PatternKind::Binding { name, mutability, mode, var, ty, ref subpattern } => {
candidate.match_pairs.push(MatchPair::new(match_pair.lvalue, subpattern));
}
- Ok(block)
+ Ok(())
}
PatternKind::Constant { .. } => {
}
PatternKind::Range { .. } |
- PatternKind::Variant { .. } => {
- // cannot simplify, test is required
- Err(match_pair)
- }
-
- PatternKind::Slice { .. } if !match_pair.slice_len_checked => {
+ PatternKind::Variant { .. } |
+ PatternKind::Slice { .. } => {
Err(match_pair)
}
- PatternKind::Array { ref prefix, ref slice, ref suffix } |
- PatternKind::Slice { ref prefix, ref slice, ref suffix } => {
- unpack!(block = self.prefix_suffix_slice(&mut candidate.match_pairs,
- block,
- match_pair.lvalue.clone(),
- prefix,
- slice.as_ref(),
- suffix));
- Ok(block)
+ PatternKind::Array { ref prefix, ref slice, ref suffix } => {
+ self.prefix_slice_suffix(&mut candidate.match_pairs,
+ &match_pair.lvalue,
+ prefix,
+ slice.as_ref(),
+ suffix);
+ Ok(())
}
PatternKind::Leaf { ref subpatterns } => {
// tuple struct, match subpats (if any)
candidate.match_pairs
.extend(self.field_match_pairs(match_pair.lvalue, subpatterns));
- Ok(block)
+ Ok(())
}
PatternKind::Deref { ref subpattern } => {
let lvalue = match_pair.lvalue.deref();
candidate.match_pairs.push(MatchPair::new(lvalue, subpattern));
- Ok(block)
+ Ok(())
}
}
}
use rustc::ty::{self, Ty};
use rustc::mir::repr::*;
use syntax::codemap::Span;
+use std::cmp::Ordering;
impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
/// Identifies what test is needed to decide if `match_pair` is applicable.
}
};
- match test.kind {
+ match (&test.kind, &*match_pair.pattern.kind) {
// If we are performing a variant switch, then this
// informs variant patterns, but nothing else.
- TestKind::Switch { adt_def: tested_adt_def , .. } => {
- match *match_pair.pattern.kind {
- PatternKind::Variant { adt_def, variant_index, ref subpatterns } => {
- assert_eq!(adt_def, tested_adt_def);
- let new_candidate =
- self.candidate_after_variant_switch(match_pair_index,
- adt_def,
- variant_index,
- subpatterns,
- candidate);
- resulting_candidates[variant_index].push(new_candidate);
- true
- }
- _ => {
- false
- }
- }
+ (&TestKind::Switch { adt_def: tested_adt_def, .. },
+ &PatternKind::Variant { adt_def, variant_index, ref subpatterns }) => {
+ assert_eq!(adt_def, tested_adt_def);
+ let new_candidate =
+ self.candidate_after_variant_switch(match_pair_index,
+ adt_def,
+ variant_index,
+ subpatterns,
+ candidate);
+ resulting_candidates[variant_index].push(new_candidate);
+ true
}
+ (&TestKind::Switch { .. }, _) => false,
// If we are performing a switch over integers, then this informs integer
// equality, but nothing else.
//
- // FIXME(#29623) we could use TestKind::Range to rule
+ // FIXME(#29623) we could use PatternKind::Range to rule
// things out here, in some cases.
- TestKind::SwitchInt { switch_ty: _, options: _, ref indices } => {
- match *match_pair.pattern.kind {
- PatternKind::Constant { ref value }
- if is_switch_ty(match_pair.pattern.ty) => {
- let index = indices[value];
- let new_candidate = self.candidate_without_match_pair(match_pair_index,
- candidate);
- resulting_candidates[index].push(new_candidate);
+ (&TestKind::SwitchInt { switch_ty: _, options: _, ref indices },
+ &PatternKind::Constant { ref value })
+ if is_switch_ty(match_pair.pattern.ty) => {
+ let index = indices[value];
+ let new_candidate = self.candidate_without_match_pair(match_pair_index,
+ candidate);
+ resulting_candidates[index].push(new_candidate);
+ true
+ }
+ (&TestKind::SwitchInt { .. }, _) => false,
+
+ (&TestKind::Len { len: test_len, op: BinOp::Eq },
+ &PatternKind::Slice { ref prefix, ref slice, ref suffix }) => {
+ let pat_len = (prefix.len() + suffix.len()) as u64;
+ match (test_len.cmp(&pat_len), slice) {
+ (Ordering::Equal, &None) => {
+ // on true, min_len = len = $actual_length,
+ // on false, len != $actual_length
+ resulting_candidates[0].push(
+ self.candidate_after_slice_test(match_pair_index,
+ candidate,
+ prefix,
+ slice.as_ref(),
+ suffix)
+ );
true
}
- _ => {
+ (Ordering::Less, _) => {
+ // test_len < pat_len. If $actual_len = test_len,
+ // then $actual_len < pat_len and we don't have
+ // enough elements.
+ resulting_candidates[1].push(candidate.clone());
+ true
+ }
+ (Ordering::Equal, &Some(_)) | (Ordering::Greater, &Some(_)) => {
+ // This can match both if $actual_len = test_len >= pat_len,
+ // and if $actual_len > test_len. We can't advance.
false
}
+ (Ordering::Greater, &None) => {
+ // test_len != pat_len, so if $actual_len = test_len, then
+ // $actual_len != pat_len.
+ resulting_candidates[1].push(candidate.clone());
+ true
+ }
}
}
- // If we are performing a length check, then this
- // informs slice patterns, but nothing else.
- TestKind::Len { .. } => {
- let pattern_test = self.test(&match_pair);
- match *match_pair.pattern.kind {
- PatternKind::Slice { .. } if pattern_test.kind == test.kind => {
- let mut new_candidate = candidate.clone();
-
- // Set up the MatchKind to simplify this like an array.
- new_candidate.match_pairs[match_pair_index]
- .slice_len_checked = true;
- resulting_candidates[0].push(new_candidate);
+ (&TestKind::Len { len: test_len, op: BinOp::Ge },
+ &PatternKind::Slice { ref prefix, ref slice, ref suffix }) => {
+ // the test is `$actual_len >= test_len`
+ let pat_len = (prefix.len() + suffix.len()) as u64;
+ match (test_len.cmp(&pat_len), slice) {
+ (Ordering::Equal, &Some(_)) => {
+ // $actual_len >= test_len = pat_len,
+ // so we can match.
+ resulting_candidates[0].push(
+ self.candidate_after_slice_test(match_pair_index,
+ candidate,
+ prefix,
+ slice.as_ref(),
+ suffix)
+ );
true
}
- _ => false
+ (Ordering::Less, _) | (Ordering::Equal, &None) => {
+ // test_len <= pat_len. If $actual_len < test_len,
+ // then it is also < pat_len, so the test passing is
+ // necessary (but insufficient).
+ resulting_candidates[0].push(candidate.clone());
+ true
+ }
+ (Ordering::Greater, &None) => {
+ // test_len > pat_len. If $actual_len >= test_len > pat_len,
+ // then we know we won't have a match.
+ resulting_candidates[1].push(candidate.clone());
+ true
+ }
+ (Ordering::Greater, &Some(_)) => {
+ // test_len > pat_len, so the length test is stricter
+ // than the pattern's minimum. This can still go both ways.
+ false
+ }
}
}
- TestKind::Eq { .. } |
- TestKind::Range { .. } => {
+ (&TestKind::Eq { .. }, _) |
+ (&TestKind::Range { .. }, _) |
+ (&TestKind::Len { .. }, _) => {
// These are all binary tests.
//
// FIXME(#29623) we can be more clever here
}
}
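The length-test sorting above can be modeled as a small standalone decision function. The following is a hedged sketch under simplifying assumptions: `Branch` and `LenOp` are invented types for illustration (the real code mutates `resulting_candidates` and returns a `bool`), but the case analysis over `test_len.cmp(&pat_len)` mirrors the hunk.

```rust
use std::cmp::Ordering;

/// Which candidate set(s) a slice-pattern candidate lands in after a
/// length test: the "test passed" set, the "test failed" set, or both
/// (meaning the test doesn't fully sort the candidate).
#[derive(Debug, PartialEq)]
enum Branch { OnTrue, OnFalse, Both }

/// The kind of length test performed: `$actual_len == test_len` or
/// `$actual_len >= test_len`.
#[derive(Copy, Clone)]
enum LenOp { Eq, Ge }

/// `pat_len` is prefix.len() + suffix.len(); `has_slice` says whether
/// the pattern contains a `..` subslice (and so matches any length
/// `>= pat_len` rather than exactly `pat_len`).
fn sort_slice_candidate(test_len: u64, op: LenOp, pat_len: u64, has_slice: bool) -> Branch {
    match op {
        LenOp::Eq => match (test_len.cmp(&pat_len), has_slice) {
            // exact-length pattern, same length: matches iff the test passes
            (Ordering::Equal, false) => Branch::OnTrue,
            // test_len < pat_len: if the test passes there aren't enough elements
            (Ordering::Less, _) => Branch::OnFalse,
            // a `..` pattern can match whether or not length equals test_len
            (Ordering::Equal, true) | (Ordering::Greater, true) => Branch::Both,
            // test_len != pat_len, so passing the test rules the pattern out
            (Ordering::Greater, false) => Branch::OnFalse,
        },
        LenOp::Ge => match (test_len.cmp(&pat_len), has_slice) {
            // $actual_len >= test_len = pat_len: the pattern's minimum is met
            (Ordering::Equal, true) => Branch::OnTrue,
            // passing the test is necessary (but insufficient); in the real
            // code the candidate keeps its slice match pair in the true set
            (Ordering::Less, _) | (Ordering::Equal, false) => Branch::OnTrue,
            // $actual_len >= test_len > pat_len contradicts an exact length
            (Ordering::Greater, false) => Branch::OnFalse,
            // stricter test than the pattern needs: can go either way
            (Ordering::Greater, true) => Branch::Both,
        },
    }
}

fn main() {
    // `[a, b, c]` (pat_len 3, no `..`) vs `len == 3`: only reachable on true.
    assert_eq!(sort_slice_candidate(3, LenOp::Eq, 3, false), Branch::OnTrue);
    // `[a, b, c]` vs `len == 2`: if the test passes, the pattern can't match.
    assert_eq!(sort_slice_candidate(2, LenOp::Eq, 3, false), Branch::OnFalse);
    // `[a, ..s, z]` (pat_len 2, has `..`) vs `len >= 2`: reachable only on true.
    assert_eq!(sort_slice_candidate(2, LenOp::Ge, 2, true), Branch::OnTrue);
    // `[a, ..s]` (pat_len 1, has `..`) vs `len >= 3`: not sorted by this test.
    assert_eq!(sort_slice_candidate(3, LenOp::Ge, 1, true), Branch::Both);
}
```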
+ fn candidate_after_slice_test<'pat>(&mut self,
+ match_pair_index: usize,
+ candidate: &Candidate<'pat, 'tcx>,
+ prefix: &'pat [Pattern<'tcx>],
+ opt_slice: Option<&'pat Pattern<'tcx>>,
+ suffix: &'pat [Pattern<'tcx>])
+ -> Candidate<'pat, 'tcx> {
+ let mut new_candidate =
+ self.candidate_without_match_pair(match_pair_index, candidate);
+ self.prefix_slice_suffix(
+ &mut new_candidate.match_pairs,
+ &candidate.match_pairs[match_pair_index].lvalue,
+ prefix,
+ opt_slice,
+ suffix);
+
+ new_candidate
+ }
+
fn candidate_after_variant_switch<'pat>(&mut self,
match_pair_index: usize,
adt_def: ty::AdtDef<'tcx>,
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use build::{BlockAnd, BlockAndExtension, Builder};
+use build::Builder;
use build::matches::MatchPair;
use hair::*;
use rustc::mir::repr::*;
.collect()
}
- /// When processing an array/slice pattern like `lv @ [x, y, ..s, z]`,
- /// this function converts the prefix (`x`, `y`) and suffix (`z`) into
- /// distinct match pairs:
- ///
- /// ```rust,ignore
- /// lv[0 of 3] @ x // see ProjectionElem::ConstantIndex (and its Debug impl)
- /// lv[1 of 3] @ y // to explain the `[x of y]` notation
- /// lv[-1 of 3] @ z
- /// ```
- ///
- /// If a slice like `s` is present, then the function also creates
- /// a temporary like:
- ///
- /// ```rust,ignore
- /// tmp0 = lv[2..-1] // using the special Rvalue::Slice
- /// ```
- ///
- /// and creates a match pair `tmp0 @ s`
- pub fn prefix_suffix_slice<'pat>(&mut self,
+ pub fn prefix_slice_suffix<'pat>(&mut self,
match_pairs: &mut Vec<MatchPair<'pat, 'tcx>>,
- block: BasicBlock,
- lvalue: Lvalue<'tcx>,
+ lvalue: &Lvalue<'tcx>,
prefix: &'pat [Pattern<'tcx>],
opt_slice: Option<&'pat Pattern<'tcx>>,
- suffix: &'pat [Pattern<'tcx>])
- -> BlockAnd<()> {
- // If there is a `..P` pattern, create a temporary `t0` for
- // the slice and then a match pair `t0 @ P`:
- if let Some(slice) = opt_slice {
- let prefix_len = prefix.len();
- let suffix_len = suffix.len();
- let rvalue = Rvalue::Slice {
- input: lvalue.clone(),
- from_start: prefix_len,
- from_end: suffix_len,
- };
- let temp = self.temp(slice.ty.clone()); // no need to schedule drop, temp is always copy
- let source_info = self.source_info(slice.span);
- self.cfg.push_assign(block, source_info, &temp, rvalue);
- match_pairs.push(MatchPair::new(temp, slice));
- }
-
- self.prefix_suffix(match_pairs, lvalue, prefix, suffix);
-
- block.unit()
- }
-
- /// Helper for `prefix_suffix_slice` which just processes the prefix and suffix.
- fn prefix_suffix<'pat>(&mut self,
- match_pairs: &mut Vec<MatchPair<'pat, 'tcx>>,
- lvalue: Lvalue<'tcx>,
- prefix: &'pat [Pattern<'tcx>],
- suffix: &'pat [Pattern<'tcx>]) {
+ suffix: &'pat [Pattern<'tcx>]) {
let min_length = prefix.len() + suffix.len();
assert!(min_length < u32::MAX as usize);
let min_length = min_length as u32;
- let prefix_pairs: Vec<_> =
+ match_pairs.extend(
prefix.iter()
.enumerate()
.map(|(idx, subpattern)| {
let lvalue = lvalue.clone().elem(elem);
MatchPair::new(lvalue, subpattern)
})
- .collect();
+ );
+
+ if let Some(subslice_pat) = opt_slice {
+ let subslice = lvalue.clone().elem(ProjectionElem::Subslice {
+ from: prefix.len() as u32,
+ to: suffix.len() as u32
+ });
+ match_pairs.push(MatchPair::new(subslice, subslice_pat));
+ }
- let suffix_pairs: Vec<_> =
+ match_pairs.extend(
suffix.iter()
.rev()
.enumerate()
let lvalue = lvalue.clone().elem(elem);
MatchPair::new(lvalue, subpattern)
})
- .collect();
-
- match_pairs.extend(prefix_pairs.into_iter().chain(suffix_pairs));
+ );
}
}
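The rewritten `prefix_slice_suffix` drops the `Rvalue::Slice` temporary and emits a `Subslice` projection directly. A rough sketch of the projection layout it produces for a pattern like `[x, y, ..s, z]`, using a simplified `Proj` enum in place of MIR's `ProjectionElem` (the enum and function here are illustrative, not the real types):

```rust
/// Simplified stand-in for MIR's ProjectionElem, tracking only the
/// pieces this sketch needs.
#[derive(Debug, PartialEq)]
enum Proj {
    /// `lv[offset of min_length]`, counting from the end if `from_end`.
    ConstantIndex { offset: u32, min_length: u32, from_end: bool },
    /// `lv[from..len-to]` - the projection for a `..s` subslice.
    Subslice { from: u32, to: u32 },
}

/// Build the projections for a slice pattern with `prefix_len` leading
/// patterns, an optional `..` subslice, and `suffix_len` trailing patterns.
fn prefix_slice_suffix(prefix_len: usize, has_slice: bool, suffix_len: usize) -> Vec<Proj> {
    let min_length = (prefix_len + suffix_len) as u32;
    let mut projs = Vec::new();
    // prefix patterns index from the front: lv[0 of n], lv[1 of n], ...
    for idx in 0..prefix_len {
        projs.push(Proj::ConstantIndex { offset: idx as u32, min_length, from_end: false });
    }
    // the `..s` pattern, if any, becomes a single Subslice projection
    if has_slice {
        projs.push(Proj::Subslice { from: prefix_len as u32, to: suffix_len as u32 });
    }
    // suffix patterns index from the back: ..., lv[-2 of n], lv[-1 of n]
    for idx in 0..suffix_len {
        projs.push(Proj::ConstantIndex {
            offset: (suffix_len - idx) as u32,
            min_length,
            from_end: true,
        });
    }
    projs
}

fn main() {
    // `[x, y, ..s, z]`: two indices from the front, the subslice, one from the back.
    assert_eq!(prefix_slice_suffix(2, true, 1), vec![
        Proj::ConstantIndex { offset: 0, min_length: 3, from_end: false },
        Proj::ConstantIndex { offset: 1, min_length: 3, from_end: false },
        Proj::Subslice { from: 2, to: 1 },
        Proj::ConstantIndex { offset: 1, min_length: 3, from_end: true },
    ]);
}
```

Because every pattern position becomes a pure projection, no basic block or temporary is needed, which is why the function no longer takes or returns a `BasicBlock`.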
use rustc::ty::{self, Ty};
use rustc::mir::repr::*;
-use std::u32;
use syntax::ast;
use syntax::codemap::Span;
/// NB: **No cleanup is scheduled for this temporary.** You should
/// call `schedule_drop` once the temporary is initialized.
pub fn temp(&mut self, ty: Ty<'tcx>) -> Lvalue<'tcx> {
- let index = self.temp_decls.len();
- self.temp_decls.push(TempDecl { ty: ty });
- assert!(index < (u32::MAX) as usize);
- let lvalue = Lvalue::Temp(index as u32);
+ let temp = self.temp_decls.push(TempDecl { ty: ty });
+ let lvalue = Lvalue::Temp(temp);
debug!("temp: created temp {:?} with type {:?}",
- lvalue, self.temp_decls.last().unwrap().ty);
+ lvalue, self.temp_decls[temp].ty);
lvalue
}
use rustc::middle::region::{CodeExtent, CodeExtentData, ROOT_CODE_EXTENT};
use rustc::ty::{self, Ty};
use rustc::mir::repr::*;
-use rustc_data_structures::fnv::FnvHashMap;
+use rustc::util::nodemap::NodeMap;
use rustc::hir;
-use std::ops::{Index, IndexMut};
-use std::u32;
use syntax::abi::Abi;
use syntax::ast;
use syntax::codemap::Span;
use syntax::parse::token::keywords;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
+
+use std::u32;
+
pub struct Builder<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
hir: Cx<'a, 'gcx, 'tcx>,
cfg: CFG<'tcx>,
/// but these are liable to get out of date once optimization
/// begins. They are also hopefully temporary, and will be
/// no longer needed when we adopt graph-based regions.
- scope_auxiliary: ScopeAuxiliaryVec,
+ scope_auxiliary: IndexVec<ScopeId, ScopeAuxiliary>,
/// the current set of loops; see the `scope` module for more
/// details
/// the vector of all scopes that we have created thus far;
/// we track this for debuginfo later
- visibility_scopes: Vec<VisibilityScopeData>,
+ visibility_scopes: IndexVec<VisibilityScope, VisibilityScopeData>,
visibility_scope: VisibilityScope,
- var_decls: Vec<VarDecl<'tcx>>,
- var_indices: FnvHashMap<ast::NodeId, u32>,
- temp_decls: Vec<TempDecl<'tcx>>,
+ var_decls: IndexVec<Var, VarDecl<'tcx>>,
+ var_indices: NodeMap<Var>,
+ temp_decls: IndexVec<Temp, TempDecl<'tcx>>,
unit_temp: Option<Lvalue<'tcx>>,
/// cached block with the RESUME terminator; this is created
}
struct CFG<'tcx> {
- basic_blocks: Vec<BasicBlockData<'tcx>>,
+ basic_blocks: IndexVec<BasicBlock, BasicBlockData<'tcx>>,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct ScopeId(u32);
-impl ScopeId {
- pub fn new(index: usize) -> ScopeId {
+impl Idx for ScopeId {
+ fn new(index: usize) -> ScopeId {
assert!(index < (u32::MAX as usize));
ScopeId(index as u32)
}
- pub fn index(self) -> usize {
+ fn index(self) -> usize {
self.0 as usize
}
}
pub statement_index: usize,
}
-pub struct ScopeAuxiliaryVec {
- pub vec: Vec<ScopeAuxiliary>
-}
-
-impl Index<ScopeId> for ScopeAuxiliaryVec {
- type Output = ScopeAuxiliary;
-
- #[inline]
- fn index(&self, index: ScopeId) -> &ScopeAuxiliary {
- &self.vec[index.index()]
- }
-}
-
-impl IndexMut<ScopeId> for ScopeAuxiliaryVec {
- #[inline]
- fn index_mut(&mut self, index: ScopeId) -> &mut ScopeAuxiliary {
- &mut self.vec[index.index()]
- }
-}
+pub type ScopeAuxiliaryVec = IndexVec<ScopeId, ScopeAuxiliary>;
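The hand-rolled `Index`/`IndexMut` impls above collapse into a type alias because `IndexVec` generalizes the newtype-index pattern. A minimal self-contained sketch of that pattern, assuming a simplified `Idx` trait and `IndexVec` (illustrative only, not the actual `rustc_data_structures::indexed_vec` code):

```rust
use std::marker::PhantomData;
use std::ops::{Index, IndexMut};

/// A type usable as a typed index into an IndexVec.
trait Idx: Copy {
    fn new(index: usize) -> Self;
    fn index(self) -> usize;
}

/// A u32 newtype index, as in the `ScopeId` change above.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct ScopeId(u32);

impl Idx for ScopeId {
    fn new(index: usize) -> ScopeId {
        assert!(index < (u32::MAX as usize));
        ScopeId(index as u32)
    }
    fn index(self) -> usize {
        self.0 as usize
    }
}

/// A Vec that can only be indexed by its dedicated index type `I`,
/// preventing e.g. a `Var` from indexing the temp declarations.
struct IndexVec<I: Idx, T> {
    raw: Vec<T>,
    _marker: PhantomData<I>,
}

impl<I: Idx, T> IndexVec<I, T> {
    fn new() -> Self {
        IndexVec { raw: Vec::new(), _marker: PhantomData }
    }
    /// Push returns the new element's typed index, which is what lets
    /// callers write `let var = self.var_decls.push(decl);`.
    fn push(&mut self, value: T) -> I {
        let idx = I::new(self.raw.len());
        self.raw.push(value);
        idx
    }
    fn len(&self) -> usize {
        self.raw.len()
    }
}

impl<I: Idx, T> Index<I> for IndexVec<I, T> {
    type Output = T;
    fn index(&self, i: I) -> &T { &self.raw[i.index()] }
}

impl<I: Idx, T> IndexMut<I> for IndexVec<I, T> {
    fn index_mut(&mut self, i: I) -> &mut T { &mut self.raw[i.index()] }
}

fn main() {
    let mut v: IndexVec<ScopeId, &str> = IndexVec::new();
    let a = v.push("root");
    let b = v.push("child");
    assert_eq!(v[a], "root");
    assert_eq!(b, ScopeId::new(1));
    assert_eq!(v.len(), 2);
}
```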
///////////////////////////////////////////////////////////////////////////
/// The `BlockAnd` "monad" packages up the new basic block along with a
match tcx.node_id_to_type(fn_id).sty {
ty::TyFnDef(_, _, f) if f.abi == Abi::RustCall => {
// RustCall pseudo-ABI untuples the last argument.
- if let Some(arg_decl) = arg_decls.last_mut() {
- arg_decl.spread = true;
+ if let Some(last_arg) = arg_decls.last() {
+ arg_decls[last_arg].spread = true;
}
}
_ => {}
});
let ty = tcx.expr_ty_adjusted(ast_expr);
- builder.finish(vec![], vec![], ty::FnConverging(ty))
+ builder.finish(vec![], IndexVec::new(), ty::FnConverging(ty))
}
impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
fn new(hir: Cx<'a, 'gcx, 'tcx>, span: Span) -> Builder<'a, 'gcx, 'tcx> {
let mut builder = Builder {
hir: hir,
- cfg: CFG { basic_blocks: vec![] },
+ cfg: CFG { basic_blocks: IndexVec::new() },
fn_span: span,
scopes: vec![],
- visibility_scopes: vec![],
+ visibility_scopes: IndexVec::new(),
visibility_scope: ARGUMENT_VISIBILITY_SCOPE,
- scope_auxiliary: ScopeAuxiliaryVec { vec: vec![] },
+ scope_auxiliary: IndexVec::new(),
loop_scopes: vec![],
- temp_decls: vec![],
- var_decls: vec![],
- var_indices: FnvHashMap(),
+ temp_decls: IndexVec::new(),
+ var_decls: IndexVec::new(),
+ var_indices: NodeMap(),
unit_temp: None,
cached_resume_block: None,
cached_return_block: None
fn finish(self,
upvar_decls: Vec<UpvarDecl>,
- arg_decls: Vec<ArgDecl<'tcx>>,
+ arg_decls: IndexVec<Arg, ArgDecl<'tcx>>,
return_ty: ty::FnOutput<'tcx>)
-> (Mir<'tcx>, ScopeAuxiliaryVec) {
for (index, block) in self.cfg.basic_blocks.iter().enumerate() {
}
}
- (Mir {
- basic_blocks: self.cfg.basic_blocks,
- visibility_scopes: self.visibility_scopes,
- promoted: vec![],
- var_decls: self.var_decls,
- arg_decls: arg_decls,
- temp_decls: self.temp_decls,
- upvar_decls: upvar_decls,
- return_ty: return_ty,
- span: self.fn_span
- }, self.scope_auxiliary)
+ (Mir::new(self.cfg.basic_blocks,
+ self.visibility_scopes,
+ IndexVec::new(),
+ return_ty,
+ self.var_decls,
+ arg_decls,
+ self.temp_decls,
+ upvar_decls,
+ self.fn_span
+ ), self.scope_auxiliary)
}
fn args_and_body<A>(&mut self,
arguments: A,
argument_extent: CodeExtent,
ast_block: &'gcx hir::Block)
- -> BlockAnd<Vec<ArgDecl<'tcx>>>
+ -> BlockAnd<IndexVec<Arg, ArgDecl<'tcx>>>
where A: Iterator<Item=(Ty<'gcx>, Option<&'gcx hir::Pat>)>
{
// to start, translate the argument patterns and collect the argument types.
let mut scope = None;
let arg_decls = arguments.enumerate().map(|(index, (ty, pattern))| {
- let lvalue = Lvalue::Arg(index as u32);
+ let lvalue = Lvalue::Arg(Arg::new(index));
if let Some(pattern) = pattern {
let pattern = self.hir.irrefutable_pat(pattern);
scope = self.declare_bindings(scope, ast_block.span, &pattern);
use rustc::middle::region::{CodeExtent, CodeExtentData};
use rustc::middle::lang_items;
use rustc::ty::subst::{Substs, Subst, VecPerParamSpace};
-use rustc::ty::{self, Ty, TyCtxt};
+use rustc::ty::{Ty, TyCtxt};
use rustc::mir::repr::*;
-use syntax::codemap::{Span, DUMMY_SP};
-use syntax::parse::token::intern_and_get_ident;
-use rustc::middle::const_val::ConstVal;
-use rustc_const_math::ConstInt;
+use syntax::codemap::Span;
+use rustc_data_structures::indexed_vec::Idx;
pub struct Scope<'tcx> {
/// the scope-id within the scope_auxiliary
/// wrapper maybe preferable.
pub fn push_scope(&mut self, extent: CodeExtent, entry: BasicBlock) {
debug!("push_scope({:?})", extent);
- let id = ScopeId::new(self.scope_auxiliary.vec.len());
+ let id = ScopeId::new(self.scope_auxiliary.len());
let vis_scope = self.visibility_scope;
self.scopes.push(Scope {
id: id,
free: None,
cached_block: None,
});
- self.scope_auxiliary.vec.push(ScopeAuxiliary {
+ self.scope_auxiliary.push(ScopeAuxiliary {
extent: extent,
dom: self.cfg.current_location(entry),
postdoms: vec![]
next_target.unit()
}
- /// Create diverge cleanup and branch to it from `block`.
- // FIXME: Remove this (used only for unreachable cases in match).
- pub fn panic(&mut self, block: BasicBlock, msg: &'static str, span: Span) {
- // fn(&(msg: &'static str filename: &'static str, line: u32)) -> !
- let region = ty::ReStatic; // FIXME(mir-borrowck): use a better region?
- let func = self.lang_function(lang_items::PanicFnLangItem);
- let args = self.hir.tcx().replace_late_bound_regions(&func.ty.fn_args(), |_| region).0;
-
- let ref_ty = args[0];
- let tup_ty = if let ty::TyRef(_, tyandmut) = ref_ty.sty {
- tyandmut.ty
- } else {
- span_bug!(span, "unexpected panic type: {:?}", func.ty);
- };
-
- let (tuple, tuple_ref) = (self.temp(tup_ty), self.temp(ref_ty));
- let (file, line) = self.span_to_fileline_args(span);
- let message = Constant {
- span: span,
- ty: self.hir.tcx().mk_static_str(),
- literal: self.hir.str_literal(intern_and_get_ident(msg))
- };
- let elems = vec![Operand::Constant(message),
- Operand::Constant(file),
- Operand::Constant(line)];
- let source_info = self.source_info(span);
- // FIXME: We should have this as a constant, rather than a stack variable (to not pollute
- // icache with cold branch code), however to achieve that we either have to rely on rvalue
- // promotion or have some way, in MIR, to create constants.
- self.cfg.push_assign(block, source_info, &tuple, // [1]
- Rvalue::Aggregate(AggregateKind::Tuple, elems));
- // [1] tuple = (message_arg, file_arg, line_arg);
- // FIXME: is this region really correct here?
- self.cfg.push_assign(block, source_info, &tuple_ref, // tuple_ref = &tuple;
- Rvalue::Ref(region, BorrowKind::Shared, tuple));
- let cleanup = self.diverge_cleanup();
- self.cfg.terminate(block, source_info, TerminatorKind::Call {
- func: Operand::Constant(func),
- args: vec![Operand::Consume(tuple_ref)],
- cleanup: cleanup,
- destination: None,
- });
- }
-
/// Create an Assert terminator and return the success block.
/// If the boolean condition operand is not the expected value,
/// a runtime panic will be caused with the given message.
success_block
}
-
- fn lang_function(&mut self, lang_item: lang_items::LangItem) -> Constant<'tcx> {
- let funcdid = match self.hir.tcx().lang_items.require(lang_item) {
- Ok(d) => d,
- Err(m) => {
- self.hir.tcx().sess.fatal(&m)
- }
- };
- Constant {
- span: DUMMY_SP,
- ty: self.hir.tcx().lookup_item_type(funcdid).ty,
- literal: Literal::Item {
- def_id: funcdid,
- substs: self.hir.tcx().mk_substs(Substs::empty())
- }
- }
- }
-
- fn span_to_fileline_args(&mut self, span: Span) -> (Constant<'tcx>, Constant<'tcx>) {
- let span_lines = self.hir.tcx().sess.codemap().lookup_char_pos(span.lo);
- (Constant {
- span: span,
- ty: self.hir.tcx().mk_static_str(),
- literal: self.hir.str_literal(intern_and_get_ident(&span_lines.file.name))
- }, Constant {
- span: span,
- ty: self.hir.tcx().types.u32,
- literal: Literal::Value {
- value: ConstVal::Integral(ConstInt::U32(span_lines.line as u32)),
- },
- })
- }
-
}
/// Builds drops for pop_scope and exit_scope.
use std::io::{self, Write};
use syntax::ast::NodeId;
+use rustc_data_structures::indexed_vec::Idx;
+
/// Write a graphviz DOT graph of a list of MIRs.
pub fn write_mir_graphviz<'a, 'b, 'tcx, W, I>(tcx: TyCtxt<'b, 'tcx, 'tcx>,
iter: I, w: &mut W)
write_graph_label(tcx, nodeid, mir, w)?;
// Nodes
- for block in mir.all_basic_blocks() {
+ for (block, _) in mir.basic_blocks().iter_enumerated() {
write_node(block, mir, w)?;
}
// Edges
- for source in mir.all_basic_blocks() {
+ for (source, _) in mir.basic_blocks().iter_enumerated() {
write_edges(source, mir, w)?;
}
writeln!(w, "}}")?
where INIT: Fn(&mut W) -> io::Result<()>,
FINI: Fn(&mut W) -> io::Result<()>
{
- let data = mir.basic_block_data(block);
+ let data = &mir[block];
write!(w, r#"<table border="0" cellborder="1" cellspacing="0">"#)?;
/// Write graphviz DOT edges with labels between the given basic block and all of its successors.
fn write_edges<W: Write>(source: BasicBlock, mir: &Mir, w: &mut W) -> io::Result<()> {
- let terminator = &mir.basic_block_data(source).terminator();
+ let terminator = mir[source].terminator();
let labels = terminator.kind.fmt_successor_labels();
for (&target, label) in terminator.successors().iter().zip(labels) {
if i > 0 {
write!(w, ", ")?;
}
- write!(w, "{:?}: {}", Lvalue::Arg(i as u32), escape(&arg.ty))?;
+ write!(w, "{:?}: {}", Lvalue::Arg(Arg::new(i)), escape(&arg.ty))?;
}
write!(w, ") -> ")?;
write!(w, "mut ")?;
}
write!(w, r#"{:?}: {}; // {}<br align="left"/>"#,
- Lvalue::Var(i as u32), escape(&var.ty), var.name)?;
+ Lvalue::Var(Var::new(i)), escape(&var.ty), var.name)?;
}
// Compiler-introduced temporary types.
for (i, temp) in mir.temp_decls.iter().enumerate() {
write!(w, r#"let mut {:?}: {};<br align="left"/>"#,
- Lvalue::Temp(i as u32), escape(&temp.ty))?;
+ Lvalue::Temp(Temp::new(i)), escape(&temp.ty))?;
}
writeln!(w, ">;")
// except according to those terms.
use hair::*;
-use rustc_data_structures::fnv::FnvHashMap;
+use rustc_data_structures::indexed_vec::Idx;
use rustc_const_math::ConstInt;
use hair::cx::Cx;
use hair::cx::block;
use rustc::middle::const_val::ConstVal;
use rustc_const_eval as const_eval;
use rustc::middle::region::CodeExtent;
-use rustc::hir::pat_util;
use rustc::ty::{self, VariantDef, Ty};
use rustc::ty::cast::CastKind as TyCastKind;
use rustc::mir::repr::*;
let adt_data = if let hir::ExprPath(..) = fun.node {
// Tuple-like ADTs are represented as ExprCall. We convert them here.
expr_ty.ty_adt_def().and_then(|adt_def|{
- match cx.tcx.def_map.borrow()[&fun.id].full_def() {
+ match cx.tcx.expect_def(fun.id) {
Def::Variant(_, variant_id) => {
Some((adt_def, adt_def.variant_index_with_id(variant_id)))
},
}
}
ty::TyEnum(adt, substs) => {
- match cx.tcx.def_map.borrow()[&expr.id].full_def() {
+ match cx.tcx.expect_def(expr.id) {
Def::Variant(enum_id, variant_id) => {
debug_assert!(adt.did == enum_id);
assert!(base.is_none());
fn convert_arm<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>,
arm: &'tcx hir::Arm) -> Arm<'tcx> {
- let mut map;
- let opt_map = if arm.pats.len() == 1 {
- None
- } else {
- map = FnvHashMap();
- pat_util::pat_bindings(&arm.pats[0], |_, p_id, _, path| {
- map.insert(path.node, p_id);
- });
- Some(&map)
- };
-
Arm {
- patterns: arm.pats.iter().map(|p| cx.refutable_pat(opt_map, p)).collect(),
+ patterns: arm.pats.iter().map(|p| cx.refutable_pat(p)).collect(),
guard: arm.guard.to_ref(),
body: arm.body.to_ref(),
}
-> ExprKind<'tcx> {
let substs = cx.tcx.node_id_item_substs(expr.id).substs;
-    // Otherwise there may be def_map borrow conflicts
- let def = cx.tcx.def_map.borrow()[&expr.id].full_def();
+ let def = cx.tcx.expect_def(expr.id);
let def_id = match def {
// A regular function.
Def::Fn(def_id) | Def::Method(def_id) => def_id,
id: node_id,
},
- def @ Def::Local(..) |
- def @ Def::Upvar(..) => return convert_var(cx, expr, def),
+ Def::Local(..) | Def::Upvar(..) => return convert_var(cx, expr, def),
- def =>
- span_bug!(
- expr.span,
- "def `{:?}` not yet implemented",
- def),
+ _ => span_bug!(expr.span, "def `{:?}` not yet implemented", def),
};
ExprKind::Literal {
literal: Literal::Item { def_id: def_id, substs: substs }
fn loop_label<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>,
expr: &'tcx hir::Expr) -> CodeExtent {
- match cx.tcx.def_map.borrow().get(&expr.id).map(|d| d.full_def()) {
- Some(Def::Label(loop_id)) => cx.tcx.region_maps.node_extent(loop_id),
- d => {
- span_bug!(expr.span, "loop scope resolved to {:?}", d);
- }
+ match cx.tcx.expect_def(expr.id) {
+ Def::Label(loop_id) => cx.tcx.region_maps.node_extent(loop_id),
+ d => span_bug!(expr.span, "loop scope resolved to {:?}", d),
}
}
use rustc::middle::const_val::ConstVal;
use rustc_const_eval as const_eval;
+use rustc_data_structures::indexed_vec::Idx;
use rustc::hir::def_id::DefId;
use rustc::hir::intravisit::FnKind;
use rustc::hir::map::blocks::FnLikeNode;
use hair::*;
use hair::cx::Cx;
-use rustc_data_structures::fnv::FnvHashMap;
+use rustc_data_structures::indexed_vec::Idx;
use rustc_const_eval as const_eval;
use rustc::hir::def::Def;
use rustc::hir::pat_util::{EnumerateAndAdjustIterator, pat_is_resolved_const};
use rustc::ty::{self, Ty};
use rustc::mir::repr::*;
use rustc::hir::{self, PatKind};
-use syntax::ast;
use syntax::codemap::Span;
use syntax::ptr::P;
/// ```
struct PatCx<'patcx, 'cx: 'patcx, 'gcx: 'cx+'tcx, 'tcx: 'cx> {
cx: &'patcx mut Cx<'cx, 'gcx, 'tcx>,
- binding_map: Option<&'patcx FnvHashMap<ast::Name, ast::NodeId>>,
}
impl<'cx, 'gcx, 'tcx> Cx<'cx, 'gcx, 'tcx> {
pub fn irrefutable_pat(&mut self, pat: &hir::Pat) -> Pattern<'tcx> {
- PatCx::new(self, None).to_pattern(pat)
+ PatCx::new(self).to_pattern(pat)
}
pub fn refutable_pat(&mut self,
- binding_map: Option<&FnvHashMap<ast::Name, ast::NodeId>>,
pat: &hir::Pat)
-> Pattern<'tcx> {
- PatCx::new(self, binding_map).to_pattern(pat)
+ PatCx::new(self).to_pattern(pat)
}
}
impl<'patcx, 'cx, 'gcx, 'tcx> PatCx<'patcx, 'cx, 'gcx, 'tcx> {
- fn new(cx: &'patcx mut Cx<'cx, 'gcx, 'tcx>,
- binding_map: Option<&'patcx FnvHashMap<ast::Name, ast::NodeId>>)
+ fn new(cx: &'patcx mut Cx<'cx, 'gcx, 'tcx>)
-> PatCx<'patcx, 'cx, 'gcx, 'tcx> {
PatCx {
cx: cx,
- binding_map: binding_map,
}
}
PatKind::Path(..) | PatKind::QPath(..)
if pat_is_resolved_const(&self.cx.tcx.def_map.borrow(), pat) =>
{
- let def = self.cx.tcx.def_map.borrow().get(&pat.id).unwrap().full_def();
- match def {
+ match self.cx.tcx.expect_def(pat.id) {
Def::Const(def_id) | Def::AssociatedConst(def_id) => {
let tcx = self.cx.tcx.global_tcx();
let substs = Some(self.cx.tcx.node_id_item_substs(pat.id).substs);
}
}
}
- _ =>
+ def =>
span_bug!(
pat.span,
"def not a constant: {:?}",
}
PatKind::Binding(bm, ref ident, ref sub) => {
- let id = match self.binding_map {
- None => pat.id,
- Some(ref map) => map[&ident.node],
- };
+ let id = self.cx.tcx.expect_def(pat.id).var_id();
let var_ty = self.cx.tcx.node_id_to_type(pat.id);
let region = match var_ty.sty {
ty::TyRef(&r, _) => Some(r),
ty::TyStruct(adt_def, _) | ty::TyEnum(adt_def, _) => adt_def,
_ => span_bug!(pat.span, "tuple struct pattern not applied to struct or enum"),
};
- let def = self.cx.tcx.def_map.borrow().get(&pat.id).unwrap().full_def();
- let variant_def = adt_def.variant_of_def(def);
+ let variant_def = adt_def.variant_of_def(self.cx.tcx.expect_def(pat.id));
let subpatterns =
subpatterns.iter()
"struct pattern not applied to struct or enum");
}
};
-
- let def = self.cx.tcx.def_map.borrow().get(&pat.id).unwrap().full_def();
- let variant_def = adt_def.variant_of_def(def);
+ let variant_def = adt_def.variant_of_def(self.cx.tcx.expect_def(pat.id));
let subpatterns =
fields.iter()
pat: &hir::Pat,
subpatterns: Vec<FieldPattern<'tcx>>)
-> PatternKind<'tcx> {
- let def = self.cx.tcx.def_map.borrow().get(&pat.id).unwrap().full_def();
- match def {
+ match self.cx.tcx.expect_def(pat.id) {
Def::Variant(enum_id, variant_id) => {
let adt_def = self.cx.tcx.lookup_adt_def(enum_id);
if adt_def.variants.len() > 1 {
PatternKind::Leaf { subpatterns: subpatterns }
}
- _ => {
+ def => {
span_bug!(pat.span, "inappropriate def for pattern: {:?}", def);
}
}
pub mod mir_map;
pub mod pretty;
pub mod transform;
-pub mod traversal;
use rustc::mir::transform::MirSource;
use rustc::ty::{self, TyCtxt};
use rustc_data_structures::fnv::FnvHashMap;
+use rustc_data_structures::indexed_vec::Idx;
use std::fmt::Display;
use std::fs;
use std::io::{self, Write};
// compute scope/entry exit annotations
let mut annotations = FnvHashMap();
if let Some(auxiliary) = auxiliary {
- for (index, auxiliary) in auxiliary.vec.iter().enumerate() {
- let scope_id = ScopeId::new(index);
-
+ for (scope_id, auxiliary) in auxiliary.iter_enumerated() {
annotations.entry(auxiliary.dom)
.or_insert(vec![])
.push(Annotation::EnterScope(scope_id));
-> io::Result<()> {
let annotations = scope_entry_exit_annotations(auxiliary);
write_mir_intro(tcx, src, mir, w)?;
- for block in mir.all_basic_blocks() {
+ for block in mir.basic_blocks().indices() {
write_basic_block(tcx, block, mir, w, &annotations)?;
- if block.index() + 1 != mir.basic_blocks.len() {
+ if block.index() + 1 != mir.basic_blocks().len() {
writeln!(w, "")?;
}
}
w: &mut Write,
annotations: &FnvHashMap<Location, Vec<Annotation>>)
-> io::Result<()> {
- let data = mir.basic_block_data(block);
+ let data = &mir[block];
// Basic block label at the top.
writeln!(w, "{}{:?}: {{", INDENT, block)?;
writeln!(w, "{0:1$}scope {2} {{", "", indent, child.index())?;
// User variable types (including the user's name in a comment).
- for (i, var) in mir.var_decls.iter().enumerate() {
+ for (id, var) in mir.var_decls.iter_enumerated() {
// Skip if not declared in this scope.
if var.source_info.scope != child {
continue;
INDENT,
indent,
mut_str,
- Lvalue::Var(i as u32),
+ id,
var.ty);
writeln!(w, "{0:1$} // \"{2}\" in {3}",
indented_var,
write!(w, "(")?;
// fn argument types.
- for (i, arg) in mir.arg_decls.iter().enumerate() {
- if i > 0 {
+ for (i, arg) in mir.arg_decls.iter_enumerated() {
+ if i.index() != 0 {
write!(w, ", ")?;
}
- write!(w, "{:?}: {}", Lvalue::Arg(i as u32), arg.ty)?;
+ write!(w, "{:?}: {}", Lvalue::Arg(i), arg.ty)?;
}
write!(w, ") -> ")?;
fn write_mir_decls(mir: &Mir, w: &mut Write) -> io::Result<()> {
// Compiler-introduced temporary types.
- for (i, temp) in mir.temp_decls.iter().enumerate() {
- writeln!(w, "{}let mut {:?}: {};", INDENT, Lvalue::Temp(i as u32), temp.ty)?;
+ for (id, temp) in mir.temp_decls.iter_enumerated() {
+ writeln!(w, "{}let mut {:?}: {};", INDENT, id, temp.ty)?;
}
// Wrote any declaration? Add an empty line before the first block is printed.
use rustc::ty::TyCtxt;
use rustc::mir::repr::*;
use rustc::mir::transform::{MirPass, MirSource, Pass};
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use pretty;
-use traversal;
-
pub struct AddCallGuards;
/**
impl<'tcx> MirPass<'tcx> for AddCallGuards {
fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>, src: MirSource, mir: &mut Mir<'tcx>) {
- let mut pred_count = vec![0u32; mir.basic_blocks.len()];
-
- // Build the precedecessor map for the MIR
- for (_, data) in traversal::preorder(mir) {
- if let Some(ref term) = data.terminator {
- for &tgt in term.successors().iter() {
- pred_count[tgt.index()] += 1;
- }
- }
- }
+ let pred_count: IndexVec<_, _> =
+ mir.predecessors().iter().map(|ps| ps.len()).collect();
// We need a place to store the new blocks generated
let mut new_blocks = Vec::new();
- let bbs = mir.all_basic_blocks();
- let cur_len = mir.basic_blocks.len();
-
- for &bb in &bbs {
- let data = mir.basic_block_data_mut(bb);
+ let cur_len = mir.basic_blocks().len();
- match data.terminator {
+ for block in mir.basic_blocks_mut() {
+ match block.terminator {
Some(Terminator {
kind: TerminatorKind::Call {
destination: Some((_, ref mut destination)),
cleanup: Some(_),
..
}, source_info
- }) if pred_count[destination.index()] > 1 => {
+ }) if pred_count[*destination] > 1 => {
// It's a critical edge, break it
let call_guard = BasicBlockData {
statements: vec![],
- is_cleanup: data.is_cleanup,
+ is_cleanup: block.is_cleanup,
terminator: Some(Terminator {
source_info: source_info,
kind: TerminatorKind::Goto { target: *destination }
pretty::dump_mir(tcx, "break_cleanup_edges", &0, src, mir, None);
debug!("Broke {} N edges", new_blocks.len());
- mir.basic_blocks.extend_from_slice(&new_blocks);
+ mir.basic_blocks_mut().extend(new_blocks);
}
}
//! This pass just dumps MIR at a specified point.
+use std::fmt;
+
use rustc::ty::TyCtxt;
use rustc::mir::repr::*;
-use rustc::mir::transform::{Pass, MirPass, MirSource};
+use rustc::mir::transform::{Pass, MirPass, MirPassHook, MirSource};
use pretty;
-pub struct DumpMir<'a>(pub &'a str);
+pub struct Marker<'a>(pub &'a str);
+
+impl<'b, 'tcx> MirPass<'tcx> for Marker<'b> {
+ fn run_pass<'a>(&mut self, _tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ _src: MirSource, _mir: &mut Mir<'tcx>)
+ {}
+}
+
+impl<'b> Pass for Marker<'b> {
+ fn name(&self) -> &str { self.0 }
+}
+
+pub struct Disambiguator<'a> {
+ pass: &'a Pass,
+ is_after: bool
+}
+
+impl<'a> fmt::Display for Disambiguator<'a> {
+ fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
+ let title = if self.is_after { "after" } else { "before" };
+ if let Some(fmt) = self.pass.disambiguator() {
+ write!(formatter, "{}-{}", fmt, title)
+ } else {
+ write!(formatter, "{}", title)
+ }
+ }
+}
+
+pub struct DumpMir;
-impl<'b, 'tcx> MirPass<'tcx> for DumpMir<'b> {
- fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>,
- src: MirSource, mir: &mut Mir<'tcx>) {
- pretty::dump_mir(tcx, self.0, &0, src, mir, None);
+impl<'tcx> MirPassHook<'tcx> for DumpMir {
+ fn on_mir_pass<'a>(
+ &mut self,
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ src: MirSource,
+ mir: &Mir<'tcx>,
+ pass: &Pass,
+ is_after: bool)
+ {
+ pretty::dump_mir(
+ tcx,
+ pass.name(),
+ &Disambiguator {
+ pass: pass,
+ is_after: is_after
+ },
+ src,
+ mir,
+ None
+ );
}
}
-impl<'b> Pass for DumpMir<'b> {}
+impl Pass for DumpMir {}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-pub mod remove_dead_blocks;
+pub mod simplify_branches;
pub mod simplify_cfg;
pub mod erase_regions;
pub mod no_landing_pads;
TerminatorKind::Goto { .. } |
TerminatorKind::Resume |
TerminatorKind::Return |
+ TerminatorKind::Unreachable |
TerminatorKind::If { .. } |
TerminatorKind::Switch { .. } |
TerminatorKind::SwitchInt { .. } => {
use rustc::mir::repr::*;
use rustc::mir::visit::{LvalueContext, MutVisitor, Visitor};
+use rustc::mir::traversal::ReversePostorder;
use rustc::ty::{self, TyCtxt};
use syntax::codemap::Span;
use build::Location;
-use traversal::ReversePostorder;
+
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
use std::mem;
}
struct TempCollector {
- temps: Vec<TempState>,
+ temps: IndexVec<Temp, TempState>,
location: Location,
span: Span
}
return;
}
- let temp = &mut self.temps[index as usize];
+ let temp = &mut self.temps[index];
if *temp == TempState::Undefined {
match context {
LvalueContext::Store |
}
}
-pub fn collect_temps(mir: &Mir, rpo: &mut ReversePostorder) -> Vec<TempState> {
+pub fn collect_temps(mir: &Mir, rpo: &mut ReversePostorder) -> IndexVec<Temp, TempState> {
let mut collector = TempCollector {
- temps: vec![TempState::Undefined; mir.temp_decls.len()],
+ temps: IndexVec::from_elem(TempState::Undefined, &mir.temp_decls),
location: Location {
block: START_BLOCK,
statement_index: 0
struct Promoter<'a, 'tcx: 'a> {
source: &'a mut Mir<'tcx>,
promoted: Mir<'tcx>,
- temps: &'a mut Vec<TempState>,
+ temps: &'a mut IndexVec<Temp, TempState>,
/// If true, all nested temps are also kept in the
/// source MIR, not moved to the promoted MIR.
impl<'a, 'tcx> Promoter<'a, 'tcx> {
fn new_block(&mut self) -> BasicBlock {
- let index = self.promoted.basic_blocks.len();
- self.promoted.basic_blocks.push(BasicBlockData {
+ let span = self.promoted.span;
+ self.promoted.basic_blocks_mut().push(BasicBlockData {
statements: vec![],
terminator: Some(Terminator {
source_info: SourceInfo {
- span: self.promoted.span,
+ span: span,
scope: ARGUMENT_VISIBILITY_SCOPE
},
kind: TerminatorKind::Return
}),
is_cleanup: false
- });
- BasicBlock::new(index)
+ })
}
fn assign(&mut self, dest: Lvalue<'tcx>, rvalue: Rvalue<'tcx>, span: Span) {
- let data = self.promoted.basic_blocks.last_mut().unwrap();
+ let last = self.promoted.basic_blocks().last().unwrap();
+ let data = &mut self.promoted[last];
data.statements.push(Statement {
source_info: SourceInfo {
span: span,
/// Copy the initialization of this temp to the
/// promoted MIR, recursing through temps.
- fn promote_temp(&mut self, index: u32) -> u32 {
- let index = index as usize;
+ fn promote_temp(&mut self, temp: Temp) -> Temp {
let old_keep_original = self.keep_original;
- let (bb, stmt_idx) = match self.temps[index] {
+ let (bb, stmt_idx) = match self.temps[temp] {
TempState::Defined {
location: Location { block, statement_index },
uses
}
(block, statement_index)
}
- temp => {
- span_bug!(self.promoted.span, "tmp{} not promotable: {:?}",
- index, temp);
+ state => {
+ span_bug!(self.promoted.span, "{:?} not promotable: {:?}",
+ temp, state);
}
};
if !self.keep_original {
- self.temps[index] = TempState::PromotedOut;
+ self.temps[temp] = TempState::PromotedOut;
}
let no_stmts = self.source[bb].statements.len();
self.visit_terminator_kind(bb, call.as_mut().unwrap());
}
- let new_index = self.promoted.temp_decls.len() as u32;
- let new_temp = Lvalue::Temp(new_index);
- self.promoted.temp_decls.push(TempDecl {
- ty: self.source.temp_decls[index].ty
+ let new_temp = self.promoted.temp_decls.push(TempDecl {
+ ty: self.source.temp_decls[temp].ty
});
// Inject the Rvalue or Call into the promoted MIR.
if stmt_idx < no_stmts {
- self.assign(new_temp, rvalue.unwrap(), source_info.span);
+ self.assign(Lvalue::Temp(new_temp), rvalue.unwrap(), source_info.span);
} else {
- let last = self.promoted.basic_blocks.len() - 1;
+ let last = self.promoted.basic_blocks().last().unwrap();
let new_target = self.new_block();
let mut call = call.unwrap();
match call {
TerminatorKind::Call { ref mut destination, ..} => {
- *destination = Some((new_temp, new_target));
+ *destination = Some((Lvalue::Temp(new_temp), new_target));
}
_ => bug!()
}
- let terminator = &mut self.promoted.basic_blocks[last].terminator_mut();
+ let terminator = self.promoted[last].terminator_mut();
terminator.source_info.span = source_info.span;
terminator.kind = call;
}
// Restore the old duplication state.
self.keep_original = old_keep_original;
- new_index
+ new_temp
}
fn promote_candidate(mut self, candidate: Candidate) {
span: span,
ty: self.promoted.return_ty.unwrap(),
literal: Literal::Promoted {
- index: self.source.promoted.len()
+ index: Promoted::new(self.source.promoted.len())
}
});
let mut rvalue = match candidate {
/// Replaces all temporaries with their promoted counterparts.
impl<'a, 'tcx> MutVisitor<'tcx> for Promoter<'a, 'tcx> {
fn visit_lvalue(&mut self, lvalue: &mut Lvalue<'tcx>, context: LvalueContext) {
- if let Lvalue::Temp(ref mut index) = *lvalue {
- *index = self.promote_temp(*index);
+ if let Lvalue::Temp(ref mut temp) = *lvalue {
+ *temp = self.promote_temp(*temp);
}
self.super_lvalue(lvalue, context);
}
pub fn promote_candidates<'a, 'tcx>(mir: &mut Mir<'tcx>,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
- mut temps: Vec<TempState>,
+ mut temps: IndexVec<Temp, TempState>,
candidates: Vec<Candidate>) {
// Visit candidates in reverse, in case they're nested.
for candidate in candidates.into_iter().rev() {
let statement = &mir[bb].statements[stmt_idx];
let StatementKind::Assign(ref dest, _) = statement.kind;
if let Lvalue::Temp(index) = *dest {
- if temps[index as usize] == TempState::PromotedOut {
+ if temps[index] == TempState::PromotedOut {
// Already promoted.
continue;
}
let mut promoter = Promoter {
source: mir,
- promoted: Mir {
- basic_blocks: vec![],
- visibility_scopes: vec![VisibilityScopeData {
+ promoted: Mir::new(
+ IndexVec::new(),
+ Some(VisibilityScopeData {
span: span,
parent_scope: None
- }],
- promoted: vec![],
- return_ty: ty::FnConverging(ty),
- var_decls: vec![],
- arg_decls: vec![],
- temp_decls: vec![],
- upvar_decls: vec![],
- span: span
- },
+ }).into_iter().collect(),
+ IndexVec::new(),
+ ty::FnConverging(ty),
+ IndexVec::new(),
+ IndexVec::new(),
+ IndexVec::new(),
+ vec![],
+ span
+ ),
temps: &mut temps,
keep_original: false
};
}
// Eliminate assignments to, and drops of promoted temps.
- let promoted = |index: u32| temps[index as usize] == TempState::PromotedOut;
- for block in &mut mir.basic_blocks {
+ let promoted = |index: Temp| temps[index] == TempState::PromotedOut;
+ for block in mir.basic_blocks_mut() {
block.statements.retain(|statement| {
match statement.kind {
StatementKind::Assign(Lvalue::Temp(index), _) => {
//! diagnostics as to why a constant rvalue wasn't promoted.
use rustc_data_structures::bitvec::BitVector;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
use rustc::hir;
use rustc::hir::def_id::DefId;
use rustc::hir::intravisit::FnKind;
use rustc::ty::cast::CastTy;
use rustc::mir::repr::*;
use rustc::mir::mir_map::MirMap;
-use rustc::mir::transform::{Pass, MirMapPass, MirSource};
+use rustc::mir::traversal::{self, ReversePostorder};
+use rustc::mir::transform::{Pass, MirMapPass, MirPassHook, MirSource};
use rustc::mir::visit::{LvalueContext, Visitor};
use rustc::util::nodemap::DefIdMap;
use syntax::abi::Abi;
use std::fmt;
use build::Location;
-use traversal::{self, ReversePostorder};
use super::promote_consts::{self, Candidate, TempState};
param_env: ty::ParameterEnvironment<'tcx>,
qualif_map: &'a mut DefIdMap<Qualif>,
mir_map: Option<&'a MirMap<'tcx>>,
- temp_qualif: Vec<Option<Qualif>>,
+ temp_qualif: IndexVec<Temp, Option<Qualif>>,
return_qualif: Option<Qualif>,
qualif: Qualif,
const_fn_arg_vars: BitVector,
location: Location,
- temp_promotion_state: Vec<TempState>,
+ temp_promotion_state: IndexVec<Temp, TempState>,
promotion_candidates: Vec<Candidate>
}
param_env: param_env,
qualif_map: qualif_map,
mir_map: mir_map,
- temp_qualif: vec![None; mir.temp_decls.len()],
+ temp_qualif: IndexVec::from_elem(None, &mir.temp_decls),
return_qualif: None,
qualif: Qualif::empty(),
const_fn_arg_vars: BitVector::new(mir.var_decls.len()),
// Only handle promotable temps in non-const functions.
if self.mode == Mode::Fn {
if let Lvalue::Temp(index) = *dest {
- if self.temp_promotion_state[index as usize].is_promotable() {
- store(&mut self.temp_qualif[index as usize]);
+ if self.temp_promotion_state[index].is_promotable() {
+ store(&mut self.temp_qualif[index]);
}
}
return;
}
match *dest {
- Lvalue::Temp(index) => store(&mut self.temp_qualif[index as usize]),
+ Lvalue::Temp(index) => store(&mut self.temp_qualif[index]),
Lvalue::ReturnPointer => store(&mut self.return_qualif),
Lvalue::Projection(box Projection {
base: Lvalue::Temp(index),
elem: ProjectionElem::Deref
- }) if self.mir.temp_decls[index as usize].ty.is_unique()
- && self.temp_qualif[index as usize].map_or(false, |qualif| {
+ }) if self.mir.temp_decls[index].ty.is_unique()
+ && self.temp_qualif[index].map_or(false, |qualif| {
qualif.intersects(Qualif::NOT_CONST)
}) => {
// Part of `box expr`, we should've errored
fn qualify_const(&mut self) -> Qualif {
let mir = self.mir;
- let mut seen_blocks = BitVector::new(mir.basic_blocks.len());
+ let mut seen_blocks = BitVector::new(mir.basic_blocks().len());
let mut bb = START_BLOCK;
loop {
seen_blocks.insert(bb.index());
TerminatorKind::Switch {..} |
TerminatorKind::SwitchInt {..} |
TerminatorKind::DropAndReplace { .. } |
- TerminatorKind::Resume => None,
+ TerminatorKind::Resume |
+ TerminatorKind::Unreachable => None,
TerminatorKind::Return => {
// Check for unused values. This usually means
// there are extra statements in the AST.
- for i in 0..mir.temp_decls.len() {
- if self.temp_qualif[i].is_none() {
+ for temp in mir.temp_decls.indices() {
+ if self.temp_qualif[temp].is_none() {
continue;
}
- let state = self.temp_promotion_state[i];
+ let state = self.temp_promotion_state[temp];
if let TempState::Defined { location, uses: 0 } = state {
let data = &mir[location.block];
let stmt_idx = location.statement_index;
self.qualif = Qualif::NOT_CONST;
for index in 0..mir.var_decls.len() {
if !self.const_fn_arg_vars.contains(index) {
- self.assign(&Lvalue::Var(index as u32));
+ self.assign(&Lvalue::Var(Var::new(index)));
}
}
self.add(Qualif::NOT_CONST);
}
Lvalue::Temp(index) => {
- if !self.temp_promotion_state[index as usize].is_promotable() {
+ if !self.temp_promotion_state[index].is_promotable() {
self.add(Qualif::NOT_PROMOTABLE);
}
- if let Some(qualif) = self.temp_qualif[index as usize] {
+ if let Some(qualif) = self.temp_qualif[index] {
self.add(qualif);
} else {
self.not_const();
}
ProjectionElem::ConstantIndex {..} |
+ ProjectionElem::Subslice {..} |
ProjectionElem::Downcast(..) => {
this.not_const()
}
}
}
- Rvalue::Slice {..} |
Rvalue::InlineAsm {..} => {
self.not_const();
}
// Check the allowed const fn argument forms.
if let (Mode::ConstFn, &Lvalue::Var(index)) = (self.mode, dest) {
- if self.const_fn_arg_vars.insert(index as usize) {
+ if self.const_fn_arg_vars.insert(index.index()) {
// Direct use of an argument is permitted.
if let Rvalue::Use(Operand::Consume(Lvalue::Arg(_))) = *rvalue {
return;
// Avoid a generic error for other uses of arguments.
if self.qualif.intersects(Qualif::FN_ARGUMENT) {
- let decl = &self.mir.var_decls[index as usize];
+ let decl = &self.mir.var_decls[index];
span_err!(self.tcx.sess, decl.source_info.span, E0022,
"arguments of constant functions can only \
be immutable by-value bindings");
impl Pass for QualifyAndPromoteConstants {}
impl<'tcx> MirMapPass<'tcx> for QualifyAndPromoteConstants {
- fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>, map: &mut MirMap<'tcx>) {
+ fn run_pass<'a>(&mut self,
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ map: &mut MirMap<'tcx>,
+ hooks: &mut [Box<for<'s> MirPassHook<'s>>]) {
let mut qualif_map = DefIdMap();
// First, visit `const` items, potentially recursing, to get
};
let param_env = ty::ParameterEnvironment::for_item(tcx, id);
+ for hook in &mut *hooks {
+ hook.on_mir_pass(tcx, src, mir, self, false);
+ }
+
if mode == Mode::Fn || mode == Mode::ConstFn {
// This is ugly because Qualifier holds onto mir,
// which can't be mutated until its scope ends.
qualifier.qualify_const();
}
+ for hook in &mut *hooks {
+ hook.on_mir_pass(tcx, src, mir, self, true);
+ }
+
// Statics must be Sync.
if mode == Mode::Static {
let ty = mir.return_ty.unwrap();
+++ /dev/null
-// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! A pass that erases the contents of dead blocks. This pass must
-//! run before any analysis passes because some of the dead blocks
-//! can be ill-typed.
-//!
-//! The main problem is that typeck lets most blocks whose end is not
-//! reachable have an arbitrary return type, rather than having the
-//! usual () return type (as a note, typeck's notion of reachability
-//! is in fact slightly weaker than MIR CFG reachability - see #31617).
-//!
-//! A standard example of the situation is:
-//! ```rust
-//! fn example() {
-//! let _a: char = { return; };
-//! }
-//! ```
-//!
-//! Here the block (`{ return; }`) has the return type `char`,
-//! rather than `()`, but the MIR we naively generate still contains
-//! the `_a = ()` write in the unreachable block "after" the return.
-//!
-//! As we have to run this pass even when we want to debug the MIR,
-//! this pass just replaces the blocks with empty "return" blocks
-//! and does not renumber anything.
-
-use rustc_data_structures::bitvec::BitVector;
-use rustc::ty::TyCtxt;
-use rustc::mir::repr::*;
-use rustc::mir::transform::{Pass, MirPass, MirSource};
-
-pub struct RemoveDeadBlocks;
-
-impl<'tcx> MirPass<'tcx> for RemoveDeadBlocks {
- fn run_pass<'a>(&mut self, _: TyCtxt<'a, 'tcx, 'tcx>,
- _: MirSource, mir: &mut Mir<'tcx>) {
- let mut seen = BitVector::new(mir.basic_blocks.len());
- // This block is always required.
- seen.insert(START_BLOCK.index());
-
- let mut worklist = Vec::with_capacity(4);
- worklist.push(START_BLOCK);
- while let Some(bb) = worklist.pop() {
- for succ in mir.basic_block_data(bb).terminator().successors().iter() {
- if seen.insert(succ.index()) {
- worklist.push(*succ);
- }
- }
- }
- retain_basic_blocks(mir, &seen);
- }
-}
-
-impl Pass for RemoveDeadBlocks {}
-
-/// Mass removal of basic blocks to keep the ID-remapping cheap.
-fn retain_basic_blocks(mir: &mut Mir, keep: &BitVector) {
- let num_blocks = mir.basic_blocks.len();
-
- let mut replacements: Vec<_> = (0..num_blocks).map(BasicBlock::new).collect();
- let mut used_blocks = 0;
- for alive_index in keep.iter() {
- replacements[alive_index] = BasicBlock::new(used_blocks);
- if alive_index != used_blocks {
- // Swap the next alive block data with the current available slot. Since alive_index is
- // non-decreasing this is a valid operation.
- mir.basic_blocks.swap(alive_index, used_blocks);
- }
- used_blocks += 1;
- }
- mir.basic_blocks.truncate(used_blocks);
-
- for bb in mir.all_basic_blocks() {
- for target in mir.basic_block_data_mut(bb).terminator_mut().successors_mut() {
- *target = replacements[target.index()];
- }
- }
-}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! A pass that simplifies branches when their condition is known.
+
+use rustc::ty::TyCtxt;
+use rustc::middle::const_val::ConstVal;
+use rustc::mir::transform::{MirPass, MirSource, Pass};
+use rustc::mir::repr::*;
+
+use std::fmt;
+
+pub struct SimplifyBranches<'a> { label: &'a str }
+
+impl<'a> SimplifyBranches<'a> {
+ pub fn new(label: &'a str) -> Self {
+ SimplifyBranches { label: label }
+ }
+}
+
+impl<'l, 'tcx> MirPass<'tcx> for SimplifyBranches<'l> {
+ fn run_pass<'a>(&mut self, _tcx: TyCtxt<'a, 'tcx, 'tcx>, _src: MirSource, mir: &mut Mir<'tcx>) {
+ for block in mir.basic_blocks_mut() {
+ let terminator = block.terminator_mut();
+ terminator.kind = match terminator.kind {
+ TerminatorKind::If { ref targets, cond: Operand::Constant(Constant {
+ literal: Literal::Value {
+ value: ConstVal::Bool(cond)
+ }, ..
+ }) } => {
+ if cond {
+ TerminatorKind::Goto { target: targets.0 }
+ } else {
+ TerminatorKind::Goto { target: targets.1 }
+ }
+ }
+
+ TerminatorKind::Assert { target, cond: Operand::Constant(Constant {
+ literal: Literal::Value {
+ value: ConstVal::Bool(cond)
+ }, ..
+ }), expected, .. } if cond == expected => {
+ TerminatorKind::Goto { target: target }
+ }
+
+ _ => continue
+ };
+ }
+ }
+}
+
+impl<'l> Pass for SimplifyBranches<'l> {
+ fn disambiguator<'a>(&'a self) -> Option<Box<fmt::Display+'a>> {
+ Some(Box::new(self.label))
+ }
+}
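The constant-condition folding in `SimplifyBranches` can be modeled without rustc's MIR types: an `If` whose condition is a known boolean becomes a `Goto` to the matching target. A minimal sketch (the enum and names are illustrative, not rustc's API):

```rust
// A toy terminator: either a conditional branch (with an optionally-known
// condition) or an unconditional goto to a block index.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Terminator {
    If { cond: Option<bool>, then_bb: usize, else_bb: usize },
    Goto { target: usize },
}

// Mirrors the `TerminatorKind::If { cond: Operand::Constant(..) }` arm above:
// fold a branch on a known constant into a plain goto; leave others alone.
fn simplify_branch(term: Terminator) -> Terminator {
    match term {
        Terminator::If { cond: Some(c), then_bb, else_bb } => {
            Terminator::Goto { target: if c { then_bb } else { else_bb } }
        }
        other => other,
    }
}

fn main() {
    let t = simplify_branch(Terminator::If { cond: Some(true), then_bb: 1, else_bb: 2 });
    assert_eq!(t, Terminator::Goto { target: 1 });
    let f = simplify_branch(Terminator::If { cond: Some(false), then_bb: 1, else_bb: 2 });
    assert_eq!(f, Terminator::Goto { target: 2 });
    // An `If` with an unknown condition is left untouched.
    let unknown = Terminator::If { cond: None, then_bb: 1, else_bb: 2 };
    assert_eq!(simplify_branch(unknown), unknown);
}
```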
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+//! A pass that removes various redundancies in the CFG. It should be
+//! called after every significant CFG modification to tidy things
+//! up.
+//!
+//! This pass must also be run before any analysis passes because it removes
+//! dead blocks, and some of these can be ill-typed.
+//!
+//! The reason is that typeck lets most blocks whose end is not
+//! reachable have an arbitrary return type, rather than having the
+//! usual () return type (as a note, typeck's notion of reachability
+//! is in fact slightly weaker than MIR CFG reachability - see #31617).
+//!
+//! A standard example of the situation is:
+//! ```rust
+//! fn example() {
+//! let _a: char = { return; };
+//! }
+//! ```
+//!
+//! Here the block (`{ return; }`) has the return type `char`,
+//! rather than `()`, but the MIR we naively generate still contains
+//! the `_a = ()` write in the unreachable block "after" the return.
+
+
use rustc_data_structures::bitvec::BitVector;
-use rustc::middle::const_val::ConstVal;
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use rustc::ty::TyCtxt;
use rustc::mir::repr::*;
use rustc::mir::transform::{MirPass, MirSource, Pass};
-use pretty;
-use std::mem;
-
-use super::remove_dead_blocks::RemoveDeadBlocks;
+use rustc::mir::traversal;
+use std::fmt;
-use traversal;
+pub struct SimplifyCfg<'a> { label: &'a str }
-pub struct SimplifyCfg;
-
-impl SimplifyCfg {
- pub fn new() -> SimplifyCfg {
- SimplifyCfg
+impl<'a> SimplifyCfg<'a> {
+ pub fn new(label: &'a str) -> Self {
+ SimplifyCfg { label: label }
}
}
-impl<'tcx> MirPass<'tcx> for SimplifyCfg {
- fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>, src: MirSource, mir: &mut Mir<'tcx>) {
- simplify_branches(mir);
- RemoveDeadBlocks.run_pass(tcx, src, mir);
- merge_consecutive_blocks(mir);
- RemoveDeadBlocks.run_pass(tcx, src, mir);
- pretty::dump_mir(tcx, "simplify_cfg", &0, src, mir, None);
+impl<'l, 'tcx> MirPass<'tcx> for SimplifyCfg<'l> {
+ fn run_pass<'a>(&mut self, _tcx: TyCtxt<'a, 'tcx, 'tcx>, _src: MirSource, mir: &mut Mir<'tcx>) {
+ CfgSimplifier::new(mir).simplify();
+ remove_dead_blocks(mir);
// FIXME: Should probably be moved into some kind of pass manager
- mir.basic_blocks.shrink_to_fit();
+ mir.basic_blocks_mut().raw.shrink_to_fit();
+ }
+}
+
+impl<'l> Pass for SimplifyCfg<'l> {
+ fn disambiguator<'a>(&'a self) -> Option<Box<fmt::Display+'a>> {
+ Some(Box::new(self.label))
}
}
-impl Pass for SimplifyCfg {}
+pub struct CfgSimplifier<'a, 'tcx: 'a> {
+ basic_blocks: &'a mut IndexVec<BasicBlock, BasicBlockData<'tcx>>,
+ pred_count: IndexVec<BasicBlock, u32>
+}
-fn merge_consecutive_blocks(mir: &mut Mir) {
- // Build the precedecessor map for the MIR
- let mut pred_count = vec![0u32; mir.basic_blocks.len()];
- for (_, data) in traversal::preorder(mir) {
- if let Some(ref term) = data.terminator {
- for &tgt in term.successors().iter() {
- pred_count[tgt.index()] += 1;
+impl<'a, 'tcx: 'a> CfgSimplifier<'a, 'tcx> {
+ fn new(mir: &'a mut Mir<'tcx>) -> Self {
+ let mut pred_count = IndexVec::from_elem(0u32, mir.basic_blocks());
+
+ // we can't use mir.predecessors() here because that counts
+ // dead blocks, which we don't want.
+ for (_, data) in traversal::preorder(mir) {
+ if let Some(ref term) = data.terminator {
+ for &tgt in term.successors().iter() {
+ pred_count[tgt] += 1;
+ }
}
}
+
+ let basic_blocks = mir.basic_blocks_mut();
+
+ CfgSimplifier {
+ basic_blocks: basic_blocks,
+ pred_count: pred_count
+ }
}
- loop {
- let mut changed = false;
- let mut seen = BitVector::new(mir.basic_blocks.len());
- let mut worklist = vec![START_BLOCK];
- while let Some(bb) = worklist.pop() {
- // Temporarily take ownership of the terminator we're modifying to keep borrowck happy
- let mut terminator = mir.basic_block_data_mut(bb).terminator.take()
- .expect("invalid terminator state");
-
- // See if we can merge the target block into this one
- loop {
- let mut inner_change = false;
-
- if let TerminatorKind::Goto { target } = terminator.kind {
- // Don't bother trying to merge a block into itself
- if target == bb {
- break;
- }
-
- let num_insts = mir.basic_block_data(target).statements.len();
- match mir.basic_block_data(target).terminator().kind {
- TerminatorKind::Goto { target: new_target } if num_insts == 0 => {
- inner_change = true;
- terminator.kind = TerminatorKind::Goto { target: new_target };
- pred_count[target.index()] -= 1;
- pred_count[new_target.index()] += 1;
- }
- _ if pred_count[target.index()] == 1 => {
- inner_change = true;
- let mut stmts = Vec::new();
- {
- let target_data = mir.basic_block_data_mut(target);
- mem::swap(&mut stmts, &mut target_data.statements);
- mem::swap(&mut terminator, target_data.terminator_mut());
- }
-
- mir.basic_block_data_mut(bb).statements.append(&mut stmts);
- }
- _ => {}
- };
+ fn simplify(mut self) {
+ loop {
+ let mut changed = false;
+
+ for bb in (0..self.basic_blocks.len()).map(BasicBlock::new) {
+ if self.pred_count[bb] == 0 {
+ continue
}
- for target in terminator.successors_mut() {
- let new_target = match final_target(mir, *target) {
- Some(new_target) => new_target,
- None if mir.basic_block_data(bb).statements.is_empty() => bb,
- None => continue
- };
- if *target != new_target {
- inner_change = true;
- pred_count[target.index()] -= 1;
- pred_count[new_target.index()] += 1;
- *target = new_target;
- }
+ debug!("simplifying {:?}", bb);
+
+ let mut terminator = self.basic_blocks[bb].terminator.take()
+ .expect("invalid terminator state");
+
+ for successor in terminator.successors_mut() {
+ self.collapse_goto_chain(successor, &mut changed);
}
- changed |= inner_change;
- if !inner_change {
- break;
+ let mut new_stmts = vec![];
+ let mut inner_changed = true;
+ while inner_changed {
+ inner_changed = false;
+ inner_changed |= self.simplify_branch(&mut terminator);
+ inner_changed |= self.merge_successor(&mut new_stmts, &mut terminator);
+ changed |= inner_changed;
}
- }
- mir.basic_block_data_mut(bb).terminator = Some(terminator);
+ self.basic_blocks[bb].statements.extend(new_stmts);
+ self.basic_blocks[bb].terminator = Some(terminator);
- for succ in mir.basic_block_data(bb).terminator().successors().iter() {
- if seen.insert(succ.index()) {
- worklist.push(*succ);
- }
+ changed |= inner_changed;
}
- }
- if !changed {
- break;
+ if !changed { break }
}
}
-}
-// Find the target at the end of the jump chain, return None if there is a loop
-fn final_target(mir: &Mir, mut target: BasicBlock) -> Option<BasicBlock> {
- // Keep track of already seen blocks to detect loops
- let mut seen: Vec<BasicBlock> = Vec::with_capacity(8);
-
- while mir.basic_block_data(target).statements.is_empty() {
- // NB -- terminator may have been swapped with `None` in
- // merge_consecutive_blocks, in which case we have a cycle and just want
- // to stop
- match mir.basic_block_data(target).terminator {
- Some(Terminator { kind: TerminatorKind::Goto { target: next }, .. }) => {
- if seen.contains(&next) {
- return None;
- }
- seen.push(next);
- target = next;
+ // Collapse a goto chain starting from `start`
+ fn collapse_goto_chain(&mut self, start: &mut BasicBlock, changed: &mut bool) {
+ let mut terminator = match self.basic_blocks[*start] {
+ BasicBlockData {
+ ref statements,
+ terminator: ref mut terminator @ Some(Terminator {
+ kind: TerminatorKind::Goto { .. }, ..
+ }), ..
+ } if statements.is_empty() => terminator.take(),
+ // if `terminator` is None, this means we are in a loop. In that
+ // case, let the whole loop collapse to its entry.
+ _ => return
+ };
+
+ let target = match terminator {
+ Some(Terminator { kind: TerminatorKind::Goto { ref mut target }, .. }) => {
+ self.collapse_goto_chain(target, changed);
+ *target
}
- _ => break
- }
- }
+ _ => unreachable!()
+ };
+ self.basic_blocks[*start].terminator = terminator;
- Some(target)
-}
+ debug!("collapsing goto chain from {:?} to {:?}", *start, target);
-fn simplify_branches(mir: &mut Mir) {
- loop {
- let mut changed = false;
-
- for bb in mir.all_basic_blocks() {
- let basic_block = mir.basic_block_data_mut(bb);
- let mut terminator = basic_block.terminator_mut();
- terminator.kind = match terminator.kind {
- TerminatorKind::If { ref targets, .. } if targets.0 == targets.1 => {
- changed = true;
- TerminatorKind::Goto { target: targets.0 }
- }
+ *changed |= *start != target;
+ self.pred_count[target] += 1;
+ self.pred_count[*start] -= 1;
+ *start = target;
+ }
- TerminatorKind::If { ref targets, cond: Operand::Constant(Constant {
- literal: Literal::Value {
- value: ConstVal::Bool(cond)
- }, ..
- }) } => {
- changed = true;
- if cond {
- TerminatorKind::Goto { target: targets.0 }
- } else {
- TerminatorKind::Goto { target: targets.1 }
- }
- }
+ // merge a block with a single `goto` predecessor into its parent
+ fn merge_successor(&mut self,
+ new_stmts: &mut Vec<Statement<'tcx>>,
+ terminator: &mut Terminator<'tcx>)
+ -> bool
+ {
+ let target = match terminator.kind {
+ TerminatorKind::Goto { target }
+ if self.pred_count[target] == 1
+ => target,
+ _ => return false
+ };
+
+ debug!("merging block {:?} into {:?}", target, terminator);
+ *terminator = match self.basic_blocks[target].terminator.take() {
+ Some(terminator) => terminator,
+ None => {
+ // unreachable loop - this should not be possible, as we
+ // don't strand blocks, but handle it correctly.
+ return false
+ }
+ };
+ new_stmts.extend(self.basic_blocks[target].statements.drain(..));
+ self.pred_count[target] = 0;
- TerminatorKind::Assert { target, cond: Operand::Constant(Constant {
- literal: Literal::Value {
- value: ConstVal::Bool(cond)
- }, ..
- }), expected, .. } if cond == expected => {
- changed = true;
- TerminatorKind::Goto { target: target }
- }
+ true
+ }
- TerminatorKind::SwitchInt { ref targets, .. } if targets.len() == 1 => {
- changed = true;
- TerminatorKind::Goto { target: targets[0] }
+ // turn a branch whose successors are all identical into a goto
+ fn simplify_branch(&mut self, terminator: &mut Terminator<'tcx>) -> bool {
+ match terminator.kind {
+ TerminatorKind::If { .. } |
+ TerminatorKind::Switch { .. } |
+ TerminatorKind::SwitchInt { .. } => {},
+ _ => return false
+ };
+
+ let first_succ = {
+ let successors = terminator.successors();
+ if let Some(&first_succ) = terminator.successors().get(0) {
+ if successors.iter().all(|s| *s == first_succ) {
+ self.pred_count[first_succ] -= (successors.len()-1) as u32;
+ first_succ
+ } else {
+ return false
}
- _ => continue
+ } else {
+ return false
}
+ };
+
+ debug!("simplifying branch {:?}", terminator);
+ terminator.kind = TerminatorKind::Goto { target: first_succ };
+ true
+ }
+}
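The goto-chain collapse in `collapse_goto_chain` can be illustrated on a plain adjacency table, independent of MIR: follow empty blocks that end in `goto` until hitting a block that does real work, stopping if the chain loops. A hedged sketch (rustc detects the cycle via the `terminator.take()` trick instead of an explicit `seen` set; names here are illustrative):

```rust
// `goto[b] = Some(t)` means block `b` is an empty block ending in `goto t`;
// `None` means the block does real work. Returns the final target of the chain.
fn collapse_goto_chain(goto: &[Option<usize>], start: usize) -> usize {
    let mut seen = vec![false; goto.len()];
    let mut cur = start;
    // Follow the chain of empty `goto` blocks; stop on a real block or a cycle.
    while let Some(next) = goto[cur] {
        if seen[cur] {
            break;
        }
        seen[cur] = true;
        cur = next;
    }
    cur
}

fn main() {
    // 0 -> 1 -> 2 -> 3, where only block 3 does real work.
    let cfg = [Some(1), Some(2), Some(3), None];
    assert_eq!(collapse_goto_chain(&cfg, 0), 3);
    // A self-loop collapses to its entry rather than diverging.
    let looped = [Some(0)];
    assert_eq!(collapse_goto_chain(&looped, 0), 0);
}
```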
+
+fn remove_dead_blocks(mir: &mut Mir) {
+ let mut seen = BitVector::new(mir.basic_blocks().len());
+ for (bb, _) in traversal::preorder(mir) {
+ seen.insert(bb.index());
+ }
+
+ let basic_blocks = mir.basic_blocks_mut();
+
+ let num_blocks = basic_blocks.len();
+ let mut replacements : Vec<_> = (0..num_blocks).map(BasicBlock::new).collect();
+ let mut used_blocks = 0;
+ for alive_index in seen.iter() {
+ replacements[alive_index] = BasicBlock::new(used_blocks);
+ if alive_index != used_blocks {
+ // Swap the next alive block data with the current available slot. Since alive_index is
+ // non-decreasing, this is a valid operation.
+ basic_blocks.raw.swap(alive_index, used_blocks);
}
+ used_blocks += 1;
+ }
+ basic_blocks.raw.truncate(used_blocks);
- if !changed {
- break;
+ for block in basic_blocks {
+ for target in block.terminator_mut().successors_mut() {
+ *target = replacements[target.index()];
}
}
}
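The `remove_dead_blocks` scheme above (mark reachable blocks, compact live blocks toward the front with `swap`, then rewrite successor edges through a replacement map) can be sketched self-containedly on a plain adjacency list. Assumptions: block 0 is the start block, and `succs[b]` lists the successors of block `b` (illustrative names, not rustc's API):

```rust
// Mark blocks reachable from block 0, compact the block vector in place,
// and remap every successor index, mirroring the `replacements` vector above.
fn remove_dead_blocks(succs: &mut Vec<Vec<usize>>) {
    let n = succs.len();
    // Worklist traversal from the start block to find the live set.
    let mut seen = vec![false; n];
    let mut worklist = vec![0usize];
    while let Some(bb) = worklist.pop() {
        if seen[bb] {
            continue;
        }
        seen[bb] = true;
        worklist.extend(succs[bb].iter().copied());
    }
    // Compact: move each live block into the next free slot, recording the
    // old-index -> new-index remapping. Since live indices are visited in
    // increasing order, swapping is safe (dead data ends up past `used`).
    let mut replacements: Vec<usize> = (0..n).collect();
    let mut used = 0;
    for alive in 0..n {
        if !seen[alive] {
            continue;
        }
        replacements[alive] = used;
        succs.swap(alive, used);
        used += 1;
    }
    succs.truncate(used);
    // Rewrite every successor edge through the remapping.
    for block in succs.iter_mut() {
        for target in block.iter_mut() {
            *target = replacements[*target];
        }
    }
}

fn main() {
    // 0 -> 2; block 1 is dead; 2 -> 2 (self loop).
    let mut cfg = vec![vec![2], vec![0], vec![2]];
    remove_dead_blocks(&mut cfg);
    // Block 2 is renumbered to 1; dead block 1 is dropped.
    assert_eq!(cfg, vec![vec![1], vec![1]]);
}
```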
use std::fmt;
use syntax::codemap::{Span, DUMMY_SP};
+use rustc_data_structures::indexed_vec::Idx;
+
macro_rules! span_mirbug {
($context:expr, $elem:expr, $($message:tt)*) => ({
$context.tcx().sess.span_warn(
fn sanitize_lvalue(&mut self, lvalue: &Lvalue<'tcx>) -> LvalueTy<'tcx> {
debug!("sanitize_lvalue: {:?}", lvalue);
match *lvalue {
- Lvalue::Var(index) => LvalueTy::Ty { ty: self.mir.var_decls[index as usize].ty },
- Lvalue::Temp(index) =>
- LvalueTy::Ty { ty: self.mir.temp_decls[index as usize].ty },
- Lvalue::Arg(index) =>
- LvalueTy::Ty { ty: self.mir.arg_decls[index as usize].ty },
+ Lvalue::Var(index) => LvalueTy::Ty { ty: self.mir.var_decls[index].ty },
+ Lvalue::Temp(index) => LvalueTy::Ty { ty: self.mir.temp_decls[index].ty },
+ Lvalue::Arg(index) => LvalueTy::Ty { ty: self.mir.arg_decls[index].ty },
Lvalue::Static(def_id) =>
LvalueTy::Ty { ty: self.tcx().lookup_item_type(def_id).ty },
Lvalue::ReturnPointer => {
})
}
}
+ ProjectionElem::Subslice { from, to } => {
+ LvalueTy::Ty {
+ ty: match base_ty.sty {
+ ty::TyArray(inner, size) => {
+ let min_size = (from as usize) + (to as usize);
+ if let Some(rest_size) = size.checked_sub(min_size) {
+ tcx.mk_array(inner, rest_size)
+ } else {
+ span_mirbug_and_err!(
+ self, lvalue, "taking too-small slice of {:?}", base_ty)
+ }
+ }
+ ty::TySlice(..) => base_ty,
+ _ => {
+ span_mirbug_and_err!(
+ self, lvalue, "slice of non-array {:?}", base_ty)
+ }
+ }
+ }
+ }
ProjectionElem::Downcast(adt_def1, index) =>
match base_ty.sty {
ty::TyEnum(adt_def, substs) if adt_def == adt_def1 => {
TerminatorKind::Goto { .. } |
TerminatorKind::Resume |
TerminatorKind::Return |
+ TerminatorKind::Unreachable |
TerminatorKind::Drop { .. } => {
// no checks needed for these
}
span_mirbug!(self, block, "return on cleanup block")
}
}
+ TerminatorKind::Unreachable => {}
TerminatorKind::Drop { target, unwind, .. } |
TerminatorKind::DropAndReplace { target, unwind, .. } |
TerminatorKind::Assert { target, cleanup: unwind, .. } => {
bb: BasicBlock,
iscleanuppad: bool)
{
- if mir.basic_block_data(bb).is_cleanup != iscleanuppad {
+ if mir[bb].is_cleanup != iscleanuppad {
span_mirbug!(self, ctxt, "cleanuppad mismatch: {:?} should be {:?}",
bb, iscleanuppad);
}
fn typeck_mir(&mut self, mir: &Mir<'tcx>) {
self.last_span = mir.span;
debug!("run_on_mir: {:?}", mir.span);
- for block in &mir.basic_blocks {
+ for block in mir.basic_blocks() {
for stmt in &block.statements {
if stmt.source_info.span != DUMMY_SP {
self.last_span = stmt.source_info.span;
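The `Subslice` arithmetic added in the type checker above computes the length of the remaining array: a projection keeping elements `from..size-to` of `[T; size]` has type `[T; size - from - to]`, and is an error when `from + to > size`. A sketch with plain integers (`subslice_len` is a hypothetical helper, not rustc's API):

```rust
// Mirrors `size.checked_sub(min_size)` with `min_size = from + to`:
// `None` corresponds to the "taking too-small slice" error above.
fn subslice_len(size: usize, from: usize, to: usize) -> Option<usize> {
    size.checked_sub(from + to)
}

fn main() {
    // Keeping elements 2..(10-3) of a [T; 10] leaves a [T; 5].
    assert_eq!(subslice_len(10, 2, 3), Some(5));
    // from + to exceeds the array length: ill-typed projection.
    assert_eq!(subslice_len(4, 3, 2), None);
}
```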
+++ /dev/null
-// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-use std::vec;
-
-use rustc_data_structures::bitvec::BitVector;
-
-use rustc::mir::repr::*;
-
-/// Preorder traversal of a graph.
-///
-/// Preorder traversal is when each node is visited before an of it's
-/// successors
-///
-/// ```text
-///
-/// A
-/// / \
-/// / \
-/// B C
-/// \ /
-/// \ /
-/// D
-/// ```
-///
-/// A preorder traversal of this graph is either `A B D C` or `A C D B`
-#[derive(Clone)]
-pub struct Preorder<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
- visited: BitVector,
- worklist: Vec<BasicBlock>,
-}
-
-impl<'a, 'tcx> Preorder<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> Preorder<'a, 'tcx> {
- let worklist = vec![root];
-
- Preorder {
- mir: mir,
- visited: BitVector::new(mir.basic_blocks.len()),
- worklist: worklist
- }
- }
-}
-
-pub fn preorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> Preorder<'a, 'tcx> {
- Preorder::new(mir, START_BLOCK)
-}
-
-impl<'a, 'tcx> Iterator for Preorder<'a, 'tcx> {
- type Item = (BasicBlock, &'a BasicBlockData<'tcx>);
-
- fn next(&mut self) -> Option<(BasicBlock, &'a BasicBlockData<'tcx>)> {
- while let Some(idx) = self.worklist.pop() {
- if !self.visited.insert(idx.index()) {
- continue;
- }
-
- let data = self.mir.basic_block_data(idx);
-
- if let Some(ref term) = data.terminator {
- for &succ in term.successors().iter() {
- self.worklist.push(succ);
- }
- }
-
- return Some((idx, data));
- }
-
- None
- }
-}
-
-/// Postorder traversal of a graph.
-///
-/// Postorder traversal is when each node is visited after all of it's
-/// successors, except when the successor is only reachable by a back-edge
-///
-///
-/// ```text
-///
-/// A
-/// / \
-/// / \
-/// B C
-/// \ /
-/// \ /
-/// D
-/// ```
-///
-/// A Postorder traversal of this graph is `D B C A` or `D C B A`
-pub struct Postorder<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
- visited: BitVector,
- visit_stack: Vec<(BasicBlock, vec::IntoIter<BasicBlock>)>
-}
-
-impl<'a, 'tcx> Postorder<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> Postorder<'a, 'tcx> {
- let mut po = Postorder {
- mir: mir,
- visited: BitVector::new(mir.basic_blocks.len()),
- visit_stack: Vec::new()
- };
-
-
- let data = po.mir.basic_block_data(root);
-
- if let Some(ref term) = data.terminator {
- po.visited.insert(root.index());
-
- let succs = term.successors().into_owned().into_iter();
-
- po.visit_stack.push((root, succs));
- po.traverse_successor();
- }
-
- po
- }
-
- fn traverse_successor(&mut self) {
- // This is quite a complex loop due to 1. the borrow checker not liking it much
- // and 2. what exactly is going on is not clear
- //
- // It does the actual traversal of the graph, while the `next` method on the iterator
- // just pops off of the stack. `visit_stack` is a stack containing pairs of nodes and
- // iterators over the sucessors of those nodes. Each iteration attempts to get the next
- // node from the top of the stack, then pushes that node and an iterator over the
- // successors to the top of the stack. This loop only grows `visit_stack`, stopping when
- // we reach a child that has no children that we haven't already visited.
- //
- // For a graph that looks like this:
- //
- // A
- // / \
- // / \
- // B C
- // | |
- // | |
- // D |
- // \ /
- // \ /
- // E
- //
- // The state of the stack starts out with just the root node (`A` in this case);
- // [(A, [B, C])]
- //
- // When the first call to `traverse_sucessor` happens, the following happens:
- //
- // [(B, [D]), // `B` taken from the successors of `A`, pushed to the
- // // top of the stack along with the successors of `B`
- // (A, [C])]
- //
- // [(D, [E]), // `D` taken from successors of `B`, pushed to stack
- // (B, []),
- // (A, [C])]
- //
- // [(E, []), // `E` taken from successors of `D`, pushed to stack
- // (D, []),
- // (B, []),
- // (A, [C])]
- //
- // Now that the top of the stack has no successors we can traverse, each item will
- // be popped off during iteration until we get back to `A`. This yeilds [E, D, B].
- //
- // When we yield `B` and call `traverse_successor`, we push `C` to the stack, but
- // since we've already visited `E`, that child isn't added to the stack. The last
- // two iterations yield `C` and finally `A` for a final traversal of [E, D, B, C, A]
- loop {
- let bb = if let Some(&mut (_, ref mut iter)) = self.visit_stack.last_mut() {
- if let Some(bb) = iter.next() {
- bb
- } else {
- break;
- }
- } else {
- break;
- };
-
- if self.visited.insert(bb.index()) {
- let data = self.mir.basic_block_data(bb);
-
- if let Some(ref term) = data.terminator {
- let succs = term.successors().into_owned().into_iter();
- self.visit_stack.push((bb, succs));
- }
- }
- }
- }
-}
-
-pub fn postorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> Postorder<'a, 'tcx> {
- Postorder::new(mir, START_BLOCK)
-}
-
-impl<'a, 'tcx> Iterator for Postorder<'a, 'tcx> {
- type Item = (BasicBlock, &'a BasicBlockData<'tcx>);
-
- fn next(&mut self) -> Option<(BasicBlock, &'a BasicBlockData<'tcx>)> {
- let next = self.visit_stack.pop();
- if next.is_some() {
- self.traverse_successor();
- }
-
- next.map(|(bb, _)| {
- let data = self.mir.basic_block_data(bb);
- (bb, data)
- })
- }
-}
-
-/// Reverse postorder traversal of a graph
-///
-/// Reverse postorder is the reverse order of a postorder traversal.
-/// This is different to a preorder traversal and represents a natural
-/// linearisation of control-flow.
-///
-/// ```text
-///
-/// A
-/// / \
-/// / \
-/// B C
-/// \ /
-/// \ /
-/// D
-/// ```
-///
-/// A reverse postorder traversal of this graph is either `A B C D` or `A C B D`
-/// Note that for a graph containing no loops (i.e. A DAG), this is equivalent to
-/// a topological sort.
-///
-/// Construction of a `ReversePostorder` traversal requires doing a full
-/// postorder traversal of the graph, therefore this traversal should be
-/// constructed as few times as possible. Use the `reset` method to be able
-/// to re-use the traversal
-#[derive(Clone)]
-pub struct ReversePostorder<'a, 'tcx: 'a> {
- mir: &'a Mir<'tcx>,
- blocks: Vec<BasicBlock>,
- idx: usize
-}
-
-impl<'a, 'tcx> ReversePostorder<'a, 'tcx> {
- pub fn new(mir: &'a Mir<'tcx>, root: BasicBlock) -> ReversePostorder<'a, 'tcx> {
- let blocks : Vec<_> = Postorder::new(mir, root).map(|(bb, _)| bb).collect();
-
- let len = blocks.len();
-
- ReversePostorder {
- mir: mir,
- blocks: blocks,
- idx: len
- }
- }
-
- pub fn reset(&mut self) {
- self.idx = self.blocks.len();
- }
-}
-
-
-pub fn reverse_postorder<'a, 'tcx>(mir: &'a Mir<'tcx>) -> ReversePostorder<'a, 'tcx> {
- ReversePostorder::new(mir, START_BLOCK)
-}
-
-impl<'a, 'tcx> Iterator for ReversePostorder<'a, 'tcx> {
- type Item = (BasicBlock, &'a BasicBlockData<'tcx>);
-
- fn next(&mut self) -> Option<(BasicBlock, &'a BasicBlockData<'tcx>)> {
- if self.idx == 0 { return None; }
- self.idx -= 1;
-
- self.blocks.get(self.idx).map(|&bb| {
- let data = self.mir.basic_block_data(bb);
- (bb, data)
- })
- }
-}
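The traversal orders documented in the deleted file (which moved to `rustc::mir::traversal`) can be demonstrated on the diamond graph `A -> {B, C} -> D` with plain indices instead of `BasicBlock`. Reverse postorder is simply postorder reversed, as in `ReversePostorder::new`; this recursive sketch is illustrative, not the iterative stack-based implementation above:

```rust
// Postorder: emit each node after all of its (not-yet-visited) successors.
fn postorder(succs: &[Vec<usize>], root: usize) -> Vec<usize> {
    fn visit(succs: &[Vec<usize>], bb: usize, seen: &mut Vec<bool>, out: &mut Vec<usize>) {
        if seen[bb] {
            return;
        }
        seen[bb] = true;
        for &s in &succs[bb] {
            visit(succs, s, seen, out);
        }
        out.push(bb); // emitted only after all successors: postorder
    }
    let mut seen = vec![false; succs.len()];
    let mut out = vec![];
    visit(succs, root, &mut seen, &mut out);
    out
}

fn main() {
    // A=0, B=1, C=2, D=3: the diamond from the doc-comments.
    let g = vec![vec![1, 2], vec![3], vec![3], vec![]];
    let po = postorder(&g, 0);
    assert_eq!(po, vec![3, 1, 2, 0]); // "D B C A", one of the two valid postorders
    let rpo: Vec<_> = po.into_iter().rev().collect();
    assert_eq!(rpo, vec![0, 2, 1, 3]); // a topological order for this DAG
}
```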
}
}
hir::ExprPath(..) => {
- let def = v.tcx.def_map.borrow().get(&e.id).map(|d| d.full_def());
- match def {
- Some(Def::Variant(..)) => {
+ match v.tcx.expect_def(e.id) {
+ Def::Variant(..) => {
// Count the discriminator or function pointer.
v.add_qualif(ConstQualif::NON_ZERO_SIZED);
}
- Some(Def::Struct(..)) => {
+ Def::Struct(..) => {
if let ty::TyFnDef(..) = node_ty.sty {
// Count the function pointer.
v.add_qualif(ConstQualif::NON_ZERO_SIZED);
}
}
- Some(Def::Fn(..)) | Some(Def::Method(..)) => {
+ Def::Fn(..) | Def::Method(..) => {
// Count the function pointer.
v.add_qualif(ConstQualif::NON_ZERO_SIZED);
}
- Some(Def::Static(..)) => {
+ Def::Static(..) => {
match v.mode {
Mode::Static | Mode::StaticMut => {}
Mode::Const | Mode::ConstFn => {}
Mode::Var => v.add_qualif(ConstQualif::NOT_CONST)
}
}
- Some(Def::Const(did)) |
- Some(Def::AssociatedConst(did)) => {
+ Def::Const(did) | Def::AssociatedConst(did) => {
let substs = Some(v.tcx.node_id_item_substs(e.id).substs);
if let Some((expr, _)) = lookup_const_by_id(v.tcx, did, substs) {
let inner = v.global_expr(Mode::Const, expr);
v.add_qualif(inner);
}
}
- Some(Def::Local(..)) if v.mode == Mode::ConstFn => {
+ Def::Local(..) if v.mode == Mode::ConstFn => {
// Sadly, we can't determine whether the types are zero-sized.
v.add_qualif(ConstQualif::NOT_CONST | ConstQualif::NON_ZERO_SIZED);
}
_ => break
};
}
- let def = v.tcx.def_map.borrow().get(&callee.id).map(|d| d.full_def());
- let is_const = match def {
+ // The callee is an arbitrary expression; it doesn't necessarily have a definition.
+ let is_const = match v.tcx.expect_def_or_none(callee.id) {
Some(Def::Struct(..)) => true,
Some(Def::Variant(..)) => {
// Count the discriminator.
}
}
hir::ExprStruct(..) => {
- let did = v.tcx.def_map.borrow().get(&e.id).map(|def| def.def_id());
- if did == v.tcx.lang_items.unsafe_cell_type() {
+ // unsafe_cell_type doesn't necessarily exist with no_core
+ if Some(v.tcx.expect_def(e.id).def_id()) == v.tcx.lang_items.unsafe_cell_type() {
v.add_qualif(ConstQualif::MUTABLE_MEM);
}
}
rustc_back = { path = "../librustc_back" }
rustc_bitflags = { path = "../librustc_bitflags" }
rustc_metadata = { path = "../librustc_metadata" }
-rustc_mir = { path = "../librustc_mir" }
syntax = { path = "../libsyntax" }
extern crate rustc;
extern crate rustc_back;
extern crate rustc_metadata;
-extern crate rustc_mir;
pub use self::registry::Registry;
impl<'a, 'tcx> EmbargoVisitor<'a, 'tcx> {
fn ty_level(&self, ty: &hir::Ty) -> Option<AccessLevel> {
if let hir::TyPath(..) = ty.node {
- match self.tcx.def_map.borrow().get(&ty.id).unwrap().full_def() {
+ match self.tcx.expect_def(ty.id) {
Def::PrimTy(..) | Def::SelfTy(..) | Def::TyParam(..) => {
Some(AccessLevel::Public)
}
}
fn trait_level(&self, trait_ref: &hir::TraitRef) -> Option<AccessLevel> {
- let did = self.tcx.trait_ref_to_def_id(trait_ref);
+ let did = self.tcx.expect_def(trait_ref.ref_id).def_id();
if let Some(node_id) = self.tcx.map.as_local_node_id(did) {
self.get(node_id)
} else {
impl<'b, 'a, 'tcx: 'a, 'v> Visitor<'v> for ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> {
fn visit_ty(&mut self, ty: &hir::Ty) {
if let hir::TyPath(_, ref path) = ty.node {
- let def = self.ev.tcx.def_map.borrow().get(&ty.id).unwrap().full_def();
+ let def = self.ev.tcx.expect_def(ty.id);
match def {
Def::Struct(def_id) | Def::Enum(def_id) | Def::TyAlias(def_id) |
Def::Trait(def_id) | Def::AssociatedTy(def_id, _) => {
}
fn visit_trait_ref(&mut self, trait_ref: &hir::TraitRef) {
- let def_id = self.ev.tcx.trait_ref_to_def_id(trait_ref);
+ let def_id = self.ev.tcx.expect_def(trait_ref.ref_id).def_id();
if let Some(node_id) = self.ev.tcx.map.as_local_node_id(def_id) {
let item = self.ev.tcx.map.expect_item(node_id);
self.ev.update(item.id, Some(AccessLevel::Reachable));
}
hir::ExprStruct(..) => {
let adt = self.tcx.expr_ty(expr).ty_adt_def().unwrap();
- let variant = adt.variant_of_def(self.tcx.resolve_expr(expr));
+ let variant = adt.variant_of_def(self.tcx.expect_def(expr.id));
// RFC 736: ensure all unmentioned fields are visible.
// Rather than computing the set of unmentioned fields
// (i.e. `all_fields - fields`), just check them all.
}
hir::ExprPath(..) => {
- if let Def::Struct(..) = self.tcx.resolve_expr(expr) {
+ if let Def::Struct(..) = self.tcx.expect_def(expr.id) {
let expr_ty = self.tcx.expr_ty(expr);
let def = match expr_ty.sty {
ty::TyFnDef(_, _, &ty::BareFnTy { sig: ty::Binder(ty::FnSig {
match pattern.node {
PatKind::Struct(_, ref fields, _) => {
let adt = self.tcx.pat_ty(pattern).ty_adt_def().unwrap();
- let def = self.tcx.def_map.borrow().get(&pattern.id).unwrap().full_def();
- let variant = adt.variant_of_def(def);
+ let variant = adt.variant_of_def(self.tcx.expect_def(pattern.id));
for field in fields {
self.check_field(pattern.span, adt, variant.field_named(field.node.name));
}
impl<'a, 'tcx> ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> {
fn path_is_private_type(&self, path_id: ast::NodeId) -> bool {
- let did = match self.tcx.def_map.borrow().get(&path_id).map(|d| d.full_def()) {
- // `int` etc. (None doesn't seem to occur.)
- None | Some(Def::PrimTy(..)) | Some(Def::SelfTy(..)) => return false,
- Some(def) => def.def_id(),
+ let did = match self.tcx.expect_def(path_id) {
+ Def::PrimTy(..) | Def::SelfTy(..) => return false,
+ def => def.def_id(),
};
// A path can only be private if:
let not_private_trait =
trait_ref.as_ref().map_or(true, // no trait counts as public trait
|tr| {
- let did = self.tcx.trait_ref_to_def_id(tr);
+ let did = self.tcx.expect_def(tr.ref_id).def_id();
if let Some(node_id) = self.tcx.map.as_local_node_id(did) {
self.trait_is_public(node_id)
impl<'a, 'tcx: 'a, 'v> Visitor<'v> for SearchInterfaceForPrivateItemsVisitor<'a, 'tcx> {
fn visit_ty(&mut self, ty: &hir::Ty) {
if let hir::TyPath(_, ref path) = ty.node {
- let def = self.tcx.def_map.borrow().get(&ty.id).unwrap().full_def();
- match def {
+ match self.tcx.expect_def(ty.id) {
Def::PrimTy(..) | Def::SelfTy(..) | Def::TyParam(..) => {
// Public
}
fn visit_trait_ref(&mut self, trait_ref: &hir::TraitRef) {
// Non-local means public (private items can't leave their crate, modulo bugs)
- let def_id = self.tcx.trait_ref_to_def_id(trait_ref);
+ let def_id = self.tcx.expect_def(trait_ref.ref_id).def_id();
if let Some(node_id) = self.tcx.map.as_local_node_id(def_id) {
let item = self.tcx.map.expect_item(node_id);
let vis = ty::Visibility::from_hir(&item.vis, node_id, self.tcx);
use rustc::ty::{self, VariantKind};
use syntax::ast::Name;
-use syntax::attr::AttrMetaMethods;
+use syntax::attr;
use syntax::parse::token;
use syntax::codemap::{Span, DUMMY_SP};
impl<'b> Resolver<'b> {
/// Constructs the reduced graph for the entire crate.
pub fn build_reduced_graph(&mut self, krate: &Crate) {
+ let no_implicit_prelude = attr::contains_name(&krate.attrs, "no_implicit_prelude");
+ self.graph_root.no_implicit_prelude.set(no_implicit_prelude);
+
let mut visitor = BuildReducedGraphVisitor {
parent: self.graph_root,
resolver: self,
};
// Build up the import directives.
- let is_prelude = item.attrs.iter().any(|attr| attr.name() == "prelude_import");
+ let is_prelude = attr::contains_name(&item.attrs, "prelude_import");
match view_path.node {
ViewPathSimple(binding, ref full_path) => {
let parent_link = ModuleParentLink(parent, name);
let def = Def::Mod(self.definitions.local_def_id(item.id));
let module = self.new_module(parent_link, Some(def), false);
+ module.no_implicit_prelude.set({
+ parent.no_implicit_prelude.get() ||
+ attr::contains_name(&item.attrs, "no_implicit_prelude")
+ });
self.define(parent, name, TypeNS, (module, sp, vis));
self.module_map.insert(item.id, module);
*parent_ref = module;
```
"##,
-E0413: r##"
-A declaration shadows an enum variant or unit-like struct in scope. Example of
-erroneous code:
-
-```compile_fail
-struct Foo;
-
-let Foo = 12i32; // error: declaration of `Foo` shadows an enum variant or
- // unit-like struct in scope
-```
-
-To fix this error, rename the variable such that it doesn't shadow any enum
-variant or structure in scope. Example:
-
-```
-struct Foo;
-
-let foo = 12i32; // ok!
-```
-
-Or:
-
-```
-struct FooStruct;
-
-let Foo = 12i32; // ok!
-```
-
-The goal here is to avoid a conflict of names.
-"##,
-
-E0414: r##"
-A variable binding in an irrefutable pattern is shadowing the name of a
-constant. Example of erroneous code:
-
-```compile_fail
-const FOO: u8 = 7;
-
-let FOO = 5; // error: variable bindings cannot shadow constants
-
-// or
-
-fn bar(FOO: u8) { // error: variable bindings cannot shadow constants
-
-}
-
-// or
-
-for FOO in bar {
-
-}
-```
-
-Introducing a new variable in Rust is done through a pattern. Thus you can have
-`let` bindings like `let (a, b) = ...`. However, patterns also allow constants
-in them, e.g. if you want to match over a constant:
-
-```ignore
-const FOO: u8 = 1;
-
-match (x,y) {
- (3, 4) => { .. }, // it is (3,4)
- (FOO, 1) => { .. }, // it is (1,1)
- (foo, 1) => { .. }, // it is (anything, 1)
- // call the value in the first slot "foo"
- _ => { .. } // it is anything
-}
-```
-
-Here, the second arm matches the value of `x` against the constant `FOO`,
-whereas the third arm will accept any value of `x` and call it `foo`.
-
-This works for `match`, however in cases where an irrefutable pattern is
-required, constants can't be used. An irrefutable pattern is one which always
-matches, whose purpose is only to bind variable names to values. These are
-required by let, for, and function argument patterns.
-
-Refutable patterns in such a situation do not make sense, for example:
-
-```ignore
-let Some(x) = foo; // what if foo is None, instead?
-
-let (1, x) = foo; // what if foo.0 is not 1?
-
-let (SOME_CONST, x) = foo; // what if foo.0 is not SOME_CONST?
-
-let SOME_CONST = foo; // what if foo is not SOME_CONST?
-```
-
-Thus, an irrefutable variable binding can't contain a constant.
-
-To fix this error, just give the marked variable a different name.
-"##,
-
E0415: r##"
More than one function parameter has the same name. Example of erroneous code:
```
"##,
-E0417: r##"
-A static variable was referenced in a pattern. Example of erroneous code:
-
-```compile_fail
-static FOO : i32 = 0;
-
-match 0 {
- FOO => {} // error: static variables cannot be referenced in a
- // pattern, use a `const` instead
- _ => {}
-}
-```
-
-The compiler needs to know the value of the pattern at compile time;
-compile-time patterns can be defined via `const` or `enum` items. Please verify
-that the identifier is spelled correctly, and if so, use a `const` instead
-of a `static` to define it. Example:
-
-```
-const FOO : i32 = 0;
-
-match 0 {
- FOO => {} // ok!
- _ => {}
-}
-```
-"##,
-
-E0419: r##"
-An unknown enum variant, struct or const was used. Example of erroneous code:
-
-```compile_fail
-match 0 {
- Something::Foo => {} // error: unresolved enum variant, struct
- // or const `Foo`
-}
-```
-
-Please verify you didn't misspell it and that the enum variant, struct or const
-has been declared and imported into scope. Example:
-
-```
-enum Something {
- Foo,
- NotFoo,
-}
-
-match Something::NotFoo {
- Something::Foo => {} // ok!
- _ => {}
-}
-```
-"##,
-
E0422: r##"
You are trying to use an identifier that is either undefined or not a struct.
For instance:
E0402, // cannot use an outer type parameter in this context
E0406, // undeclared associated type
// E0410, merged into 408
- E0418, // is not an enum variant, struct or const
- E0420, // is not an associated const
- E0421, // unresolved associated const
- E0427, // cannot use `ref` binding mode with ...
+// E0413, merged into 530
+// E0414, merged into 530
+// E0417, merged into 532
+// E0418, merged into 532
+// E0419, merged into 531
+// E0420, merged into 532
+// E0421, merged into 531
+ E0530, // X bindings cannot shadow Ys
+ E0531, // unresolved pattern path kind `name`
+ E0532, // expected pattern path kind, found another pattern path kind
+// E0427, merged into 530
}
#[macro_use]
extern crate rustc;
-use self::PatternBindingMode::*;
use self::Namespace::*;
use self::ResolveResult::*;
use self::FallbackSuggestion::*;
use self::RibKind::*;
use self::UseLexicalScopeFlag::*;
use self::ModulePrefixResult::*;
-use self::AssocItemResolveResult::*;
-use self::BareIdentifierPatternResolution::*;
use self::ParentLink::*;
use rustc::hir::map::Definitions;
use syntax::ast::{Arm, BindingMode, Block, Crate, Expr, ExprKind};
use syntax::ast::{FnDecl, ForeignItem, ForeignItemKind, Generics};
use syntax::ast::{Item, ItemKind, ImplItem, ImplItemKind};
-use syntax::ast::{Local, Pat, PatKind, Path};
-use syntax::ast::{PathSegment, PathParameters, TraitItemKind, TraitRef, Ty, TyKind};
+use syntax::ast::{Local, Mutability, Pat, PatKind, Path};
+use syntax::ast::{PathSegment, PathParameters, QSelf, TraitItemKind, TraitRef, Ty, TyKind};
use std::collections::{HashMap, HashSet};
use std::cell::{Cell, RefCell};
SelfUsedOutsideImplOrTrait,
/// error E0412: use of undeclared
UseOfUndeclared(&'a str, &'a str, SuggestedCandidates),
- /// error E0413: cannot be named the same as an enum variant or unit-like struct in scope
- DeclarationShadowsEnumVariantOrUnitLikeStruct(Name),
- /// error E0414: only irrefutable patterns allowed here
- ConstantForIrrefutableBinding(Name),
/// error E0415: identifier is bound more than once in this parameter list
IdentifierBoundMoreThanOnceInParameterList(&'a str),
/// error E0416: identifier is bound more than once in the same pattern
IdentifierBoundMoreThanOnceInSamePattern(&'a str),
- /// error E0417: static variables cannot be referenced in a pattern
- StaticVariableReference(&'a NameBinding<'a>),
- /// error E0418: is not an enum variant, struct or const
- NotAnEnumVariantStructOrConst(&'a str),
- /// error E0419: unresolved enum variant, struct or const
- UnresolvedEnumVariantStructOrConst(&'a str),
- /// error E0420: is not an associated const
- NotAnAssociatedConst(&'a str),
- /// error E0421: unresolved associated const
- UnresolvedAssociatedConst(&'a str),
/// error E0422: does not name a struct
DoesNotNameAStruct(&'a str),
/// error E0423: is a struct variant name, but this expression uses it like a function name
},
/// error E0426: use of undeclared label
UndeclaredLabel(&'a str),
- /// error E0427: cannot use `ref` binding mode with ...
- CannotUseRefBindingModeWith(&'a str),
/// error E0429: `self` imports are only allowed within a { } list
SelfImportsOnlyAllowedWithin,
/// error E0430: `self` import can only appear once in the list
CannotCaptureDynamicEnvironmentInFnItem,
/// error E0435: attempt to use a non-constant value in a constant
AttemptToUseNonConstantValueInConstant,
+ /// error E0530: X bindings cannot shadow Ys
+ BindingShadowsSomethingUnacceptable(&'a str, &'a str, Name),
+ /// error E0531: unresolved pattern path kind `name`
+ PatPathUnresolved(&'a str, &'a Path),
+ /// error E0532: expected pattern path kind, found another pattern path kind
+ PatPathUnexpected(&'a str, &'a str, &'a Path),
}
/// Context of where `ResolutionError::UnresolvedName` arose.
err.span_label(span, &format!("undefined or not in scope"));
err
}
- ResolutionError::DeclarationShadowsEnumVariantOrUnitLikeStruct(name) => {
- let mut err = struct_span_err!(resolver.session,
- span,
- E0413,
- "`{}` cannot be named the same as an enum variant \
- or unit-like struct in scope",
- name);
- err.span_label(span,
- &format!("has same name as enum variant or unit-like struct"));
- err
- }
- ResolutionError::ConstantForIrrefutableBinding(name) => {
- let mut err = struct_span_err!(resolver.session,
- span,
- E0414,
- "let variables cannot be named the same as const variables");
- err.span_label(span,
- &format!("cannot be named the same as a const variable"));
- if let Some(binding) = resolver.current_module
- .resolve_name_in_lexical_scope(name, ValueNS) {
- let participle = if binding.is_import() { "imported" } else { "defined" };
- err.span_label(binding.span, &format!("a constant `{}` is {} here",
- name, participle));
- }
- err
- }
ResolutionError::IdentifierBoundMoreThanOnceInParameterList(identifier) => {
let mut err = struct_span_err!(resolver.session,
span,
err.span_label(span, &format!("used in a pattern more than once"));
err
}
- ResolutionError::StaticVariableReference(binding) => {
- let mut err = struct_span_err!(resolver.session,
- span,
- E0417,
- "static variables cannot be referenced in a \
- pattern, use a `const` instead");
- err.span_label(span, &format!("static variable used in pattern"));
- if binding.span != codemap::DUMMY_SP {
- let participle = if binding.is_import() { "imported" } else { "defined" };
- err.span_label(binding.span, &format!("static variable {} here", participle));
- }
- err
- }
- ResolutionError::NotAnEnumVariantStructOrConst(name) => {
- struct_span_err!(resolver.session,
- span,
- E0418,
- "`{}` is not an enum variant, struct or const",
- name)
- }
- ResolutionError::UnresolvedEnumVariantStructOrConst(name) => {
- struct_span_err!(resolver.session,
- span,
- E0419,
- "unresolved enum variant, struct or const `{}`",
- name)
- }
- ResolutionError::NotAnAssociatedConst(name) => {
- struct_span_err!(resolver.session,
- span,
- E0420,
- "`{}` is not an associated const",
- name)
- }
- ResolutionError::UnresolvedAssociatedConst(name) => {
- struct_span_err!(resolver.session,
- span,
- E0421,
- "unresolved associated const `{}`",
- name)
- }
ResolutionError::DoesNotNameAStruct(name) => {
struct_span_err!(resolver.session,
span,
"use of undeclared label `{}`",
name)
}
- ResolutionError::CannotUseRefBindingModeWith(descr) => {
- struct_span_err!(resolver.session,
- span,
- E0427,
- "cannot use `ref` binding mode with {}",
- descr)
- }
ResolutionError::SelfImportsOnlyAllowedWithin => {
struct_span_err!(resolver.session,
span,
E0435,
"attempt to use a non-constant value in a constant")
}
+ ResolutionError::BindingShadowsSomethingUnacceptable(what_binding, shadows_what, name) => {
+ let mut err = struct_span_err!(resolver.session,
+ span,
+ E0530,
+ "{}s cannot shadow {}s", what_binding, shadows_what);
+ err.span_label(span, &format!("cannot be named the same as a {}", shadows_what));
+ if let Success(binding) = resolver.current_module.resolve_name(name, ValueNS, true) {
+ let participle = if binding.is_import() { "imported" } else { "defined" };
+ err.span_label(binding.span, &format!("a {} `{}` is {} here",
+ shadows_what, name, participle));
+ }
+ err
+ }
+ ResolutionError::PatPathUnresolved(expected_what, path) => {
+ struct_span_err!(resolver.session,
+ span,
+ E0531,
+ "unresolved {} `{}`",
+ expected_what,
+ path.segments.last().unwrap().identifier)
+ }
+ ResolutionError::PatPathUnexpected(expected_what, found_what, path) => {
+ struct_span_err!(resolver.session,
+ span,
+ E0532,
+ "expected {}, found {} `{}`",
+ expected_what,
+ found_what,
+ path.segments.last().unwrap().identifier)
+ }
}
}
// Map from the name in a pattern to its binding mode.
type BindingMap = HashMap<Name, BindingInfo>;
-#[derive(Copy, Clone, PartialEq)]
-enum PatternBindingMode {
- RefutableMode,
- LocalIrrefutableMode,
- ArgumentIrrefutableMode,
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+enum PatternSource {
+ Match,
+ IfLet,
+ WhileLet,
+ Let,
+ For,
+ FnParam,
+}
+
+impl PatternSource {
+ fn is_refutable(self) -> bool {
+ match self {
+ PatternSource::Match | PatternSource::IfLet | PatternSource::WhileLet => true,
+ PatternSource::Let | PatternSource::For | PatternSource::FnParam => false,
+ }
+ }
+ fn descr(self) -> &'static str {
+ match self {
+ PatternSource::Match => "match binding",
+ PatternSource::IfLet => "if let binding",
+ PatternSource::WhileLet => "while let binding",
+ PatternSource::Let => "let binding",
+ PatternSource::For => "for binding",
+ PatternSource::FnParam => "function parameter",
+ }
+ }
}
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
PrefixFound(Module<'a>, usize),
}
-#[derive(Copy, Clone)]
-enum AssocItemResolveResult {
- /// Syntax such as `<T>::item`, which can't be resolved until type
- /// checking.
- TypecheckRequired,
- /// We should have been able to resolve the associated item.
- ResolveAttempt(Option<PathResolution>),
-}
-
-#[derive(Copy, Clone)]
-enum BareIdentifierPatternResolution {
- FoundStructOrEnumVariant(Def),
- FoundConst(Def, Name),
- BareIdentifierPatternUnresolved,
-}
-
/// One local scope.
#[derive(Debug)]
struct Rib<'a> {
resolutions: RefCell<HashMap<(Name, Namespace), &'a RefCell<NameResolution<'a>>>>,
unresolved_imports: RefCell<Vec<&'a ImportDirective<'a>>>,
- prelude: RefCell<Option<Module<'a>>>,
+ no_implicit_prelude: Cell<bool>,
glob_importers: RefCell<Vec<(Module<'a>, &'a ImportDirective<'a>)>>,
globs: RefCell<Vec<&'a ImportDirective<'a>>>,
extern_crate_id: None,
resolutions: RefCell::new(HashMap::new()),
unresolved_imports: RefCell::new(Vec::new()),
- prelude: RefCell::new(None),
+ no_implicit_prelude: Cell::new(false),
glob_importers: RefCell::new(Vec::new()),
globs: RefCell::new((Vec::new())),
traits: RefCell::new(None),
graph_root: Module<'a>,
+ prelude: Option<Module<'a>>,
+
trait_item_map: FnvHashMap<(Name, DefId), bool /* is static method? */>,
structs: FnvHashMap<DefId, Vec<Name>>,
}
fn record_resolution(&mut self, id: NodeId, def: Def) {
- self.def_map.insert(id, PathResolution { base_def: def, depth: 0 });
+ self.def_map.insert(id, PathResolution::new(def));
}
fn definitions(&mut self) -> Option<&mut Definitions> {
// The outermost module has def ID 0; this is not reflected in the
// AST.
graph_root: graph_root,
+ prelude: None,
trait_item_map: FnvHashMap(),
structs: FnvHashMap(),
}
// We can only see through anonymous modules
- if module.def.is_some() { return None; }
+ if module.def.is_some() {
+ return match self.prelude {
+ Some(prelude) if !module.no_implicit_prelude.get() => {
+ prelude.resolve_name(name, ns, false).success()
+ .map(LexicalScopeBinding::Item)
+ }
+ _ => None,
+ };
+ }
}
}
debug!("(resolving name in module) resolving `{}` in `{}`", name, module_to_string(module));
self.populate_module_if_necessary(module);
- match use_lexical_scope {
- true => module.resolve_name_in_lexical_scope(name, namespace)
- .map(Success).unwrap_or(Failed(None)),
- false => module.resolve_name(name, namespace, false),
- }.and_then(|binding| {
+ module.resolve_name(name, namespace, use_lexical_scope).and_then(|binding| {
if record_used {
if let NameBindingKind::Import { directive, .. } = binding.kind {
self.used_imports.insert((directive.id, namespace));
TypeNS) {
Ok(binding) => {
let def = binding.def().unwrap();
- self.record_def(item.id, PathResolution::new(def, 0));
+ self.record_def(item.id, PathResolution::new(def));
}
Err(true) => self.record_def(item.id, err_path_resolution()),
Err(false) => {
// Add each argument to the rib.
let mut bindings_list = HashMap::new();
for argument in &declaration.inputs {
- self.resolve_pattern(&argument.pat, ArgumentIrrefutableMode, &mut bindings_list);
+ self.resolve_pattern(&argument.pat, PatternSource::FnParam, &mut bindings_list);
self.visit_ty(&argument.ty);
walk_list!(self, visit_expr, &local.init);
// Resolve the pattern.
- self.resolve_pattern(&local.pat, LocalIrrefutableMode, &mut HashMap::new());
+ self.resolve_pattern(&local.pat, PatternSource::Let, &mut HashMap::new());
}
// build a map from pattern identifiers to binding-info's.
let mut bindings_list = HashMap::new();
for pattern in &arm.pats {
- self.resolve_pattern(&pattern, RefutableMode, &mut bindings_list);
+ self.resolve_pattern(&pattern, PatternSource::Match, &mut bindings_list);
}
// This has to happen *after* we determine which
fn resolve_type(&mut self, ty: &Ty) {
match ty.node {
TyKind::Path(ref maybe_qself, ref path) => {
- let resolution = match self.resolve_possibly_assoc_item(ty.id,
- maybe_qself.as_ref(),
- path,
- TypeNS) {
- // `<T>::a::b::c` is resolved by typeck alone.
- TypecheckRequired => {
- // Resolve embedded types.
- visit::walk_ty(self, ty);
- return;
- }
- ResolveAttempt(resolution) => resolution,
- };
-
// This is a path in the type namespace. Walk through scopes
// looking for it.
- if let Some(def) = resolution {
- // Write the result into the def map.
- debug!("(resolving type) writing resolution for `{}` (id {}) = {:?}",
- path_names_to_string(path, 0), ty.id, def);
- self.record_def(ty.id, def);
+ if let Some(def) = self.resolve_possibly_assoc_item(ty.id, maybe_qself.as_ref(),
+ path, TypeNS) {
+ match def.base_def {
+ Def::Mod(..) if def.depth == 0 => {
+ self.session.span_err(path.span, "expected type, found module");
+ self.record_def(ty.id, err_path_resolution());
+ }
+ _ => {
+ // Write the result into the def map.
+ debug!("(resolving type) writing resolution for `{}` (id {}) = {:?}",
+ path_names_to_string(path, 0), ty.id, def);
+ self.record_def(ty.id, def);
+ }
+ }
} else {
self.record_def(ty.id, err_path_resolution());
visit::walk_ty(self, ty);
}
- fn resolve_pattern(&mut self,
- pattern: &Pat,
- mode: PatternBindingMode,
- // Maps idents to the node ID for the (outermost)
- // pattern that binds them
- bindings_list: &mut HashMap<Name, NodeId>) {
- let pat_id = pattern.id;
- pattern.walk(&mut |pattern| {
- match pattern.node {
- PatKind::Ident(binding_mode, ref path1, ref at_rhs) => {
- // The meaning of PatKind::Ident with no type parameters
- // depends on whether an enum variant or unit-like struct
- // with that name is in scope. The probing lookup has to
- // be careful not to emit spurious errors. Only matching
- // patterns (match) can match nullary variants or
- // unit-like structs. For binding patterns (let
- // and the LHS of @-patterns), matching such a value is
- // simply disallowed (since it's rarely what you want).
- let const_ok = mode == RefutableMode && at_rhs.is_none();
-
- let ident = path1.node;
- let renamed = mtwt::resolve(ident);
-
- match self.resolve_bare_identifier_pattern(ident, pattern.span) {
- FoundStructOrEnumVariant(def) if const_ok => {
- debug!("(resolving pattern) resolving `{}` to struct or enum variant",
- renamed);
-
- self.enforce_default_binding_mode(pattern,
- binding_mode,
- "an enum variant");
- self.record_def(pattern.id,
- PathResolution {
- base_def: def,
- depth: 0,
- });
- }
- FoundStructOrEnumVariant(..) => {
- resolve_error(
- self,
- pattern.span,
- ResolutionError::DeclarationShadowsEnumVariantOrUnitLikeStruct(
- renamed)
- );
- self.record_def(pattern.id, err_path_resolution());
- }
- FoundConst(def, _) if const_ok => {
- debug!("(resolving pattern) resolving `{}` to constant", renamed);
-
- self.enforce_default_binding_mode(pattern, binding_mode, "a constant");
- self.record_def(pattern.id,
- PathResolution {
- base_def: def,
- depth: 0,
- });
- }
- FoundConst(_, name) => {
- resolve_error(
- self,
- pattern.span,
- ResolutionError::ConstantForIrrefutableBinding(name)
- );
- self.record_def(pattern.id, err_path_resolution());
- }
- BareIdentifierPatternUnresolved => {
- debug!("(resolving pattern) binding `{}`", renamed);
-
- let def_id = self.definitions.local_def_id(pattern.id);
- let def = Def::Local(def_id, pattern.id);
-
- // Record the definition so that later passes
- // will be able to distinguish variants from
- // locals in patterns.
-
- self.record_def(pattern.id,
- PathResolution {
- base_def: def,
- depth: 0,
- });
-
- // Add the binding to the local ribs, if it
- // doesn't already exist in the bindings list. (We
- // must not add it if it's in the bindings list
- // because that breaks the assumptions later
- // passes make about or-patterns.)
- if !bindings_list.contains_key(&renamed) {
- let this = &mut *self;
- let last_rib = this.value_ribs.last_mut().unwrap();
- last_rib.bindings.insert(renamed, def);
- bindings_list.insert(renamed, pat_id);
- } else if mode == ArgumentIrrefutableMode &&
- bindings_list.contains_key(&renamed) {
- // Forbid duplicate bindings in the same
- // parameter list.
- resolve_error(
- self,
- pattern.span,
- ResolutionError::IdentifierBoundMoreThanOnceInParameterList(
- &ident.name.as_str())
- );
- } else if bindings_list.get(&renamed) == Some(&pat_id) {
- // Then this is a duplicate variable in the
- // same disjunction, which is an error.
- resolve_error(
- self,
- pattern.span,
- ResolutionError::IdentifierBoundMoreThanOnceInSamePattern(
- &ident.name.as_str())
- );
- }
- // Else, not bound in the same pattern: do
- // nothing.
- }
- }
+ fn fresh_binding(&mut self,
+ ident: &ast::SpannedIdent,
+ pat_id: NodeId,
+ outer_pat_id: NodeId,
+ pat_src: PatternSource,
+ bindings: &mut HashMap<Name, NodeId>)
+ -> PathResolution {
+ // Add the binding to the local ribs, if it
+ // doesn't already exist in the bindings map. (We
+ // must not add it if it's in the bindings map
+ // because that breaks the assumptions later
+ // passes make about or-patterns.)
+ let renamed = mtwt::resolve(ident.node);
+ let def = match bindings.get(&renamed).cloned() {
+ Some(id) if id == outer_pat_id => {
+ // `Variant(a, a)`, error
+ resolve_error(
+ self,
+ ident.span,
+ ResolutionError::IdentifierBoundMoreThanOnceInSamePattern(
+ &ident.node.name.as_str())
+ );
+ Def::Err
+ }
+ Some(..) if pat_src == PatternSource::FnParam => {
+ // `fn f(a: u8, a: u8)`, error
+ resolve_error(
+ self,
+ ident.span,
+ ResolutionError::IdentifierBoundMoreThanOnceInParameterList(
+ &ident.node.name.as_str())
+ );
+ Def::Err
+ }
+ Some(..) if pat_src == PatternSource::Match => {
+ // `Variant1(a) | Variant2(a)`, ok
+ // Reuse definition from the first `a`.
+ self.value_ribs.last_mut().unwrap().bindings[&renamed]
+ }
+ Some(..) => {
+ span_bug!(ident.span, "two bindings with the same name from \
+ unexpected pattern source {:?}", pat_src);
+ }
+ None => {
+ // A completely fresh binding, add to the lists.
+ // FIXME: Later stages are not ready to deal with `Def::Err` here yet, so
+ // define `Invalid` bindings as `Def::Local`, just don't add them to the lists.
+ let def = Def::Local(self.definitions.local_def_id(pat_id), pat_id);
+ if ident.node.name != keywords::Invalid.name() {
+ bindings.insert(renamed, outer_pat_id);
+ self.value_ribs.last_mut().unwrap().bindings.insert(renamed, def);
}
+ def
+ }
+ };
- PatKind::TupleStruct(ref path, _, _) | PatKind::Path(ref path) => {
- // This must be an enum variant, struct or const.
- let resolution = match self.resolve_possibly_assoc_item(pat_id,
- None,
- path,
- ValueNS) {
- // The below shouldn't happen because all
- // qualified paths should be in PatKind::QPath.
- TypecheckRequired =>
- span_bug!(path.span,
- "resolve_possibly_assoc_item claimed that a path \
- in PatKind::Path or PatKind::TupleStruct \
- requires typecheck to resolve, but qualified \
- paths should be PatKind::QPath"),
- ResolveAttempt(resolution) => resolution,
- };
- if let Some(path_res) = resolution {
- match path_res.base_def {
- Def::Struct(..) if path_res.depth == 0 => {
- self.record_def(pattern.id, path_res);
- }
- Def::Variant(..) | Def::Const(..) => {
- self.record_def(pattern.id, path_res);
- }
- Def::Static(..) => {
- let segments = &path.segments;
- let binding = if path.global {
- self.resolve_crate_relative_path(path.span, segments, ValueNS)
- } else {
- self.resolve_module_relative_path(path.span, segments, ValueNS)
- }.unwrap();
+ PathResolution::new(def)
+ }
- let error = ResolutionError::StaticVariableReference(binding);
- resolve_error(self, path.span, error);
- self.record_def(pattern.id, err_path_resolution());
- }
- _ => {
- // If anything ends up here entirely resolved,
- // it's an error. If anything ends up here
- // partially resolved, that's OK, because it may
- // be a `T::CONST` that typeck will resolve.
- if path_res.depth == 0 {
- resolve_error(
- self,
- path.span,
- ResolutionError::NotAnEnumVariantStructOrConst(
- &path.segments
- .last()
- .unwrap()
- .identifier
- .name
- .as_str())
- );
- self.record_def(pattern.id, err_path_resolution());
- } else {
- let const_name = path.segments
- .last()
- .unwrap()
- .identifier
- .name;
- let traits = self.get_traits_containing_item(const_name);
- self.trait_map.insert(pattern.id, traits);
- self.record_def(pattern.id, path_res);
- }
- }
- }
- } else {
- if let Err(false) = self.resolve_path(pat_id, &path, 0, ValueNS) {
- // No error has been reported, so we need to do this ourselves.
- resolve_error(
- self,
- path.span,
- ResolutionError::UnresolvedEnumVariantStructOrConst(
- &path.segments.last().unwrap().identifier.name.as_str())
- );
- }
- self.record_def(pattern.id, err_path_resolution());
- }
- visit::walk_path(self, path);
+ fn resolve_pattern_path<ExpectedFn>(&mut self,
+ pat_id: NodeId,
+ qself: Option<&QSelf>,
+ path: &Path,
+ namespace: Namespace,
+ expected_fn: ExpectedFn,
+ expected_what: &str)
+ where ExpectedFn: FnOnce(Def) -> bool
+ {
+ let resolution = if let Some(resolution) = self.resolve_possibly_assoc_item(pat_id,
+ qself, path, namespace) {
+ if resolution.depth == 0 {
+ if expected_fn(resolution.base_def) {
+ resolution
+ } else {
+ resolve_error(
+ self,
+ path.span,
+ ResolutionError::PatPathUnexpected(expected_what,
+ resolution.kind_name(), path)
+ );
+ err_path_resolution()
+ }
+ } else {
+ // Not fully resolved associated item `T::A::B` or `<T as Tr>::A::B`
+ // or `<T>::A::B`. If `B` should be resolved in value namespace then
+ // it needs to be added to the trait map.
+ if namespace == ValueNS {
+ let item_name = path.segments.last().unwrap().identifier.name;
+ let traits = self.get_traits_containing_item(item_name);
+ self.trait_map.insert(pat_id, traits);
}
+ resolution
+ }
+ } else {
+ if let Err(false) = self.resolve_path(pat_id, path, 0, namespace) {
+ resolve_error(
+ self,
+ path.span,
+ ResolutionError::PatPathUnresolved(expected_what, path)
+ );
+ }
+ err_path_resolution()
+ };
- PatKind::QPath(ref qself, ref path) => {
- // Associated constants only.
- let resolution = match self.resolve_possibly_assoc_item(pat_id,
- Some(qself),
- path,
- ValueNS) {
- TypecheckRequired => {
- // All `<T>::CONST` should end up here, and will
- // require use of the trait map to resolve
- // during typechecking.
- let const_name = path.segments
- .last()
- .unwrap()
- .identifier
- .name;
- let traits = self.get_traits_containing_item(const_name);
- self.trait_map.insert(pattern.id, traits);
- visit::walk_pat(self, pattern);
- return true;
- }
- ResolveAttempt(resolution) => resolution,
- };
- if let Some(path_res) = resolution {
- match path_res.base_def {
- // All `<T as Trait>::CONST` should end up here, and
- // have the trait already selected.
- Def::AssociatedConst(..) => {
- self.record_def(pattern.id, path_res);
+ self.record_def(pat_id, resolution);
+ }
+
+ fn resolve_pattern(&mut self,
+ pat: &Pat,
+ pat_src: PatternSource,
+ // Maps idents to the node ID for the
+ // outermost pattern that binds them.
+ bindings: &mut HashMap<Name, NodeId>) {
+ // Visit all direct subpatterns of this pattern.
+ let outer_pat_id = pat.id;
+ pat.walk(&mut |pat| {
+ match pat.node {
+ PatKind::Ident(bmode, ref ident, ref opt_pat) => {
+ // First try to resolve the identifier as some existing
+ // entity, then fall back to a fresh binding.
+ let resolution = if let Ok(resolution) = self.resolve_path(pat.id,
+ &Path::from_ident(ident.span, ident.node), 0, ValueNS) {
+ let always_binding = !pat_src.is_refutable() || opt_pat.is_some() ||
+ bmode != BindingMode::ByValue(Mutability::Immutable);
+ match resolution.base_def {
+ Def::Struct(..) | Def::Variant(..) |
+ Def::Const(..) | Def::AssociatedConst(..) if !always_binding => {
+ // A constant, unit variant, etc pattern.
+ resolution
}
- _ => {
+ Def::Struct(..) | Def::Variant(..) |
+ Def::Const(..) | Def::AssociatedConst(..) | Def::Static(..) => {
+ // A fresh binding that shadows something unacceptable.
resolve_error(
self,
- path.span,
- ResolutionError::NotAnAssociatedConst(
- &path.segments.last().unwrap().identifier.name.as_str()
- )
+ ident.span,
+ ResolutionError::BindingShadowsSomethingUnacceptable(
+ pat_src.descr(), resolution.kind_name(), ident.node.name)
);
- self.record_def(pattern.id, err_path_resolution());
+ err_path_resolution()
+ }
+ Def::Local(..) | Def::Upvar(..) | Def::Fn(..) | Def::Err => {
+ // These entities are explicitly allowed
+ // to be shadowed by fresh bindings.
+ self.fresh_binding(ident, pat.id, outer_pat_id,
+ pat_src, bindings)
+ }
+ def => {
+ span_bug!(ident.span, "unexpected definition for an \
+ identifier in pattern {:?}", def);
}
}
} else {
- resolve_error(self,
- path.span,
- ResolutionError::UnresolvedAssociatedConst(&path.segments
- .last()
- .unwrap()
- .identifier
- .name
- .as_str()));
- self.record_def(pattern.id, err_path_resolution());
- }
- visit::walk_pat(self, pattern);
+ // Fall back to a fresh binding.
+ self.fresh_binding(ident, pat.id, outer_pat_id, pat_src, bindings)
+ };
+
+ self.record_def(pat.id, resolution);
}
- PatKind::Struct(ref path, _, _) => {
- match self.resolve_path(pat_id, path, 0, TypeNS) {
- Ok(definition) => {
- self.record_def(pattern.id, definition);
+ PatKind::TupleStruct(ref path, _, _) => {
+ self.resolve_pattern_path(pat.id, None, path, ValueNS, |def| {
+ match def {
+ Def::Struct(..) | Def::Variant(..) | Def::Err => true,
+ _ => false,
}
- Err(true) => self.record_def(pattern.id, err_path_resolution()),
- Err(false) => {
- resolve_error(
- self,
- path.span,
- ResolutionError::DoesNotNameAStruct(
- &path_names_to_string(path, 0))
- );
- self.record_def(pattern.id, err_path_resolution());
+ }, "variant or struct");
+ }
+
+ PatKind::Path(ref path) => {
+ self.resolve_pattern_path(pat.id, None, path, ValueNS, |def| {
+ match def {
+ Def::Struct(..) | Def::Variant(..) |
+ Def::Const(..) | Def::AssociatedConst(..) | Def::Err => true,
+ _ => false,
}
- }
- visit::walk_path(self, path);
+ }, "variant, struct or constant");
}
- PatKind::Lit(_) | PatKind::Range(..) => {
- visit::walk_pat(self, pattern);
+ PatKind::QPath(ref qself, ref path) => {
+ self.resolve_pattern_path(pat.id, Some(qself), path, ValueNS, |def| {
+ match def {
+ Def::AssociatedConst(..) | Def::Err => true,
+ _ => false,
+ }
+ }, "associated constant");
}
- _ => {
- // Nothing to do.
+ PatKind::Struct(ref path, _, _) => {
+ self.resolve_pattern_path(pat.id, None, path, TypeNS, |def| {
+ match def {
+ Def::Struct(..) | Def::Variant(..) |
+ Def::TyAlias(..) | Def::AssociatedTy(..) | Def::Err => true,
+ _ => false,
+ }
+ }, "variant, struct or type alias");
}
+
+ _ => {}
}
true
});
- }
- fn resolve_bare_identifier_pattern(&mut self, ident: ast::Ident, span: Span)
- -> BareIdentifierPatternResolution {
- let binding = match self.resolve_ident_in_lexical_scope(ident, ValueNS, true) {
- Some(LexicalScopeBinding::Item(binding)) => binding,
- _ => return BareIdentifierPatternUnresolved,
- };
- let def = binding.def().unwrap();
-
- match def {
- Def::Variant(..) | Def::Struct(..) => FoundStructOrEnumVariant(def),
- Def::Const(..) | Def::AssociatedConst(..) => FoundConst(def, ident.name),
- Def::Static(..) => {
- let error = ResolutionError::StaticVariableReference(binding);
- resolve_error(self, span, error);
- BareIdentifierPatternUnresolved
- }
- _ => BareIdentifierPatternUnresolved,
- }
+ visit::walk_pat(self, pat);
}
/// Handles paths that may refer to associated items
fn resolve_possibly_assoc_item(&mut self,
id: NodeId,
- maybe_qself: Option<&ast::QSelf>,
+ maybe_qself: Option<&QSelf>,
path: &Path,
namespace: Namespace)
- -> AssocItemResolveResult {
+ -> Option<PathResolution> {
let max_assoc_types;
match maybe_qself {
Some(qself) => {
if qself.position == 0 {
- return TypecheckRequired;
+ // FIXME: Create some fake resolution that can't possibly be a type.
+ return Some(PathResolution {
+ base_def: Def::Mod(self.definitions.local_def_id(ast::CRATE_NODE_ID)),
+ depth: path.segments.len(),
+ });
}
max_assoc_types = path.segments.len() - qself.position;
// Make sure the trait is valid.
break;
}
self.with_no_errors(|this| {
- resolution = this.resolve_path(id, path, depth, TypeNS).ok();
+ let partial_resolution = this.resolve_path(id, path, depth, TypeNS).ok();
+ if let Some(Def::Mod(..)) = partial_resolution.map(|r| r.base_def) {
+ // Modules cannot have associated items
+ } else {
+ resolution = partial_resolution;
+ }
});
}
- if let Some(Def::Mod(_)) = resolution.map(|r| r.base_def) {
- // A module is not a valid type or value.
- resolution = None;
- }
- ResolveAttempt(resolution)
+ resolution
}
/// Skips `path_depth` trailing segments, which is also reflected in the
let span = path.span;
let segments = &path.segments[..path.segments.len() - path_depth];
- let mk_res = |def| PathResolution::new(def, path_depth);
+ let mk_res = |def| PathResolution { base_def: def, depth: path_depth };
if path.global {
let binding = self.resolve_crate_relative_path(span, segments, namespace);
if let Some(node_id) = self.current_self_type.as_ref().and_then(extract_node_id) {
// Look for a field with the same name in the current self_type.
- match self.def_map.get(&node_id).map(|d| d.full_def()) {
- Some(Def::Enum(did)) |
- Some(Def::TyAlias(did)) |
- Some(Def::Struct(did)) |
- Some(Def::Variant(_, did)) => match self.structs.get(&did) {
- None => {}
- Some(fields) => {
- if fields.iter().any(|&field_name| name == field_name) {
- return Field;
+ if let Some(resolution) = self.def_map.get(&node_id) {
+ match resolution.base_def {
+ Def::Enum(did) | Def::TyAlias(did) |
+ Def::Struct(did) | Def::Variant(_, did) if resolution.depth == 0 => {
+ if let Some(fields) = self.structs.get(&did) {
+ if fields.iter().any(|&field_name| name == field_name) {
+ return Field;
+ }
}
}
- },
- _ => {} // Self type didn't resolve properly
+ _ => {}
+ }
}
}
// Next, resolve the node.
match expr.node {
ExprKind::Path(ref maybe_qself, ref path) => {
- let resolution = match self.resolve_possibly_assoc_item(expr.id,
- maybe_qself.as_ref(),
- path,
- ValueNS) {
- // `<T>::a::b::c` is resolved by typeck alone.
- TypecheckRequired => {
- let method_name = path.segments.last().unwrap().identifier.name;
- let traits = self.get_traits_containing_item(method_name);
- self.trait_map.insert(expr.id, traits);
- visit::walk_expr(self, expr);
- return;
- }
- ResolveAttempt(resolution) => resolution,
- };
-
// This is a local path in the value namespace. Walk through
// scopes looking for it.
- if let Some(path_res) = resolution {
+ if let Some(path_res) = self.resolve_possibly_assoc_item(expr.id,
+ maybe_qself.as_ref(), path, ValueNS) {
// Check if struct variant
let is_struct_variant = if let Def::Variant(_, variant_id) = path_res.base_def {
self.structs.contains_key(&variant_id)
}
Some(def @ Def::Label(_)) => {
// Since this def is a label, it is never read.
- self.record_def(expr.id,
- PathResolution {
- base_def: def,
- depth: 0,
- })
+ self.record_def(expr.id, PathResolution::new(def))
}
Some(_) => {
span_bug!(expr.span, "label wasn't mapped to a label def!")
self.visit_expr(subexpression);
self.value_ribs.push(Rib::new(NormalRibKind));
- self.resolve_pattern(pattern, RefutableMode, &mut HashMap::new());
+ self.resolve_pattern(pattern, PatternSource::IfLet, &mut HashMap::new());
self.visit_block(if_block);
self.value_ribs.pop();
ExprKind::WhileLet(ref pattern, ref subexpression, ref block, label) => {
self.visit_expr(subexpression);
self.value_ribs.push(Rib::new(NormalRibKind));
- self.resolve_pattern(pattern, RefutableMode, &mut HashMap::new());
+ self.resolve_pattern(pattern, PatternSource::WhileLet, &mut HashMap::new());
self.resolve_labeled_block(label.map(|l| l.node), expr.id, block);
ExprKind::ForLoop(ref pattern, ref subexpression, ref block, label) => {
self.visit_expr(subexpression);
self.value_ribs.push(Rib::new(NormalRibKind));
- self.resolve_pattern(pattern, LocalIrrefutableMode, &mut HashMap::new());
+ self.resolve_pattern(pattern, PatternSource::For, &mut HashMap::new());
self.resolve_labeled_block(label.map(|l| l.node), expr.id, block);
let mut search_module = self.current_module;
loop {
// Look for trait children.
- let mut search_in_module = |module: Module<'a>| {
+ let mut search_in_module = |this: &mut Self, module: Module<'a>| {
let mut traits = module.traits.borrow_mut();
if traits.is_none() {
let mut collected_traits = Vec::new();
for &(trait_name, binding) in traits.as_ref().unwrap().iter() {
let trait_def_id = binding.def().unwrap().def_id();
- if self.trait_item_map.contains_key(&(name, trait_def_id)) {
+ if this.trait_item_map.contains_key(&(name, trait_def_id)) {
let mut import_id = None;
if let NameBindingKind::Import { directive, .. } = binding.kind {
let id = directive.id;
- self.maybe_unused_trait_imports.insert(id);
+ this.maybe_unused_trait_imports.insert(id);
import_id = Some(id);
}
add_trait_info(&mut found_traits, trait_def_id, import_id, name);
- self.record_use(trait_name, binding);
+ this.record_use(trait_name, binding);
}
}
};
- search_in_module(search_module);
+ search_in_module(self, search_module);
match search_module.parent_link {
NoParentLink | ModuleParentLink(..) => {
- search_module.prelude.borrow().map(search_in_module);
+ if !search_module.no_implicit_prelude.get() {
+ self.prelude.map(|prelude| search_in_module(self, prelude));
+ }
break;
}
BlockParentLink(parent_module, _) => {
}
}
- fn enforce_default_binding_mode(&mut self,
- pat: &Pat,
- pat_binding_mode: BindingMode,
- descr: &str) {
- match pat_binding_mode {
- BindingMode::ByValue(_) => {}
- BindingMode::ByRef(..) => {
- resolve_error(self,
- pat.span,
- ResolutionError::CannotUseRefBindingModeWith(descr));
- }
- }
- }
-
fn resolve_visibility(&mut self, vis: &ast::Visibility) -> ty::Visibility {
let (path, id) = match *vis {
ast::Visibility::Public => return ty::Visibility::Public,
};
let segments: Vec<_> = path.segments.iter().map(|seg| seg.identifier.name).collect();
+ let mut path_resolution = err_path_resolution();
let vis = match self.resolve_module_path(&segments, DontUseLexicalScope, path.span) {
Success(module) => {
let def = module.def.unwrap();
- let path_resolution = PathResolution { base_def: def, depth: 0 };
- self.def_map.insert(id, path_resolution);
+ path_resolution = PathResolution::new(def);
ty::Visibility::Restricted(self.definitions.as_local_node_id(def.def_id()).unwrap())
}
Failed(Some((span, msg))) => {
ty::Visibility::Public
}
};
+ self.def_map.insert(id, path_resolution);
if !self.is_accessible(vis) {
let msg = format!("visibilities can only be restricted to ancestor modules");
self.session.span_err(path.span, &msg);
}
fn err_path_resolution() -> PathResolution {
- PathResolution {
- base_def: Def::Err,
- depth: 0,
- }
+ PathResolution::new(Def::Err)
}
-
#[derive(PartialEq,Copy, Clone)]
pub enum MakeGlobMap {
Yes,
Failed(None)
}
- // Invariant: this may not be called until import resolution is complete.
- pub fn resolve_name_in_lexical_scope(&self, name: Name, ns: Namespace)
- -> Option<&'a NameBinding<'a>> {
- self.resolution(name, ns).borrow().binding
- .or_else(|| self.prelude.borrow().and_then(|prelude| {
- prelude.resolve_name(name, ns, false).success()
- }))
- }
-
// Define the name or return the existing binding if there is a collision.
pub fn try_define_child(&self, name: Name, ns: Namespace, binding: NameBinding<'a>)
-> Result<(), &'a NameBinding<'a>> {
Some(def) => def,
None => value_result.success().and_then(NameBinding::def).unwrap(),
};
- let path_resolution = PathResolution { base_def: def, depth: 0 };
+ let path_resolution = PathResolution::new(def);
self.resolver.def_map.insert(directive.id, path_resolution);
debug!("(resolving single import) successfully resolved import");
self.resolver.populate_module_if_necessary(target_module);
if let GlobImport { is_prelude: true } = directive.subclass {
- *module_.prelude.borrow_mut() = Some(target_module);
+ self.resolver.prelude = Some(target_module);
return Success(());
}
// Record the destination of this import
if let Some(did) = target_module.def_id() {
- let resolution = PathResolution { base_def: Def::Mod(did), depth: 0 };
+ let resolution = PathResolution::new(Def::Mod(did));
self.resolver.def_map.insert(directive.id, resolution);
}
// looks up anything, not just a type
fn lookup_type_ref(&self, ref_id: NodeId) -> Option<DefId> {
- if !self.tcx.def_map.borrow().contains_key(&ref_id) {
- bug!("def_map has no key for {} in lookup_type_ref", ref_id);
- }
- let def = self.tcx.def_map.borrow().get(&ref_id).unwrap().full_def();
- match def {
+ match self.tcx.expect_def(ref_id) {
Def::PrimTy(..) => None,
Def::SelfTy(..) => None,
- _ => Some(def.def_id()),
+ def => Some(def.def_id()),
}
}
return;
}
- let def_map = self.tcx.def_map.borrow();
- if !def_map.contains_key(&ref_id) {
- span_bug!(span,
- "def_map has no key for {} in lookup_def_kind",
- ref_id);
- }
- let def = def_map.get(&ref_id).unwrap().full_def();
+ let def = self.tcx.expect_def(ref_id);
match def {
Def::Mod(_) |
Def::ForeignMod(_) => {
}
// Modules or types in the path prefix.
- let def_map = self.tcx.def_map.borrow();
- let def = def_map.get(&id).unwrap().full_def();
- match def {
+ match self.tcx.expect_def(id) {
Def::Method(did) => {
let ti = self.tcx.impl_or_trait_item(did);
if let ty::MethodTraitItem(m) = ti {
PatKind::Struct(ref path, ref fields, _) => {
visit::walk_path(self, path);
let adt = self.tcx.node_id_to_type(p.id).ty_adt_def().unwrap();
- let def = self.tcx.def_map.borrow()[&p.id].full_def();
- let variant = adt.variant_of_def(def);
+ let variant = adt.variant_of_def(self.tcx.expect_def(p.id));
for &Spanned { node: ref field, span } in fields {
let sub_span = self.span.span_for_first_ident(span);
ast::ExprKind::Struct(ref path, ref fields, ref base) => {
let hir_expr = self.save_ctxt.tcx.map.expect_expr(ex.id);
let adt = self.tcx.expr_ty(&hir_expr).ty_adt_def().unwrap();
- let def = self.tcx.resolve_expr(&hir_expr);
+ let def = self.tcx.expect_def(hir_expr.id);
self.process_struct_lit(ex, path, fields, adt.variant_of_def(def), base)
}
ast::ExprKind::MethodCall(_, _, ref args) => self.process_method_call(ex, args),
// process collected paths
for &(id, ref p, immut, ref_kind) in &collector.collected_paths {
- let def_map = self.tcx.def_map.borrow();
- if !def_map.contains_key(&id) {
- span_bug!(p.span, "def_map has no key for {} in visit_arm", id);
- }
- let def = def_map.get(&id).unwrap().full_def();
- match def {
+ match self.tcx.expect_def(id) {
Def::Local(_, id) => {
let value = if immut == ast::Mutability::Immutable {
self.span.snippet(p.span).to_string()
Def::Static(_, _) |
Def::Const(..) |
Def::AssociatedConst(..) => {}
- _ => error!("unexpected definition kind when processing collected paths: {:?}",
- def),
+ def => error!("unexpected definition kind when processing collected paths: {:?}",
+ def),
}
}
}
pub fn get_path_data(&self, id: NodeId, path: &ast::Path) -> Option<Data> {
- let def_map = self.tcx.def_map.borrow();
- if !def_map.contains_key(&id) {
- span_bug!(path.span, "def_map has no key for {} in visit_expr", id);
- }
- let def = def_map.get(&id).unwrap().full_def();
+ let def = self.tcx.expect_def(id);
let sub_span = self.span_utils.span_for_last_ident(path.span);
filter!(self.span_utils, sub_span, path.span, None);
match def {
}
fn lookup_ref_id(&self, ref_id: NodeId) -> Option<DefId> {
- if !self.tcx.def_map.borrow().contains_key(&ref_id) {
- bug!("def_map has no key for {} in lookup_type_ref", ref_id);
- }
- let def = self.tcx.def_map.borrow().get(&ref_id).unwrap().full_def();
- match def {
+ match self.tcx.expect_def(ref_id) {
Def::PrimTy(_) | Def::SelfTy(..) => None,
- _ => Some(def.def_id()),
+ def => Some(def.def_id()),
}
}
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_incremental = { path = "../librustc_incremental" }
rustc_llvm = { path = "../librustc_llvm" }
-rustc_mir = { path = "../librustc_mir" }
rustc_platform_intrinsics = { path = "../librustc_platform_intrinsics" }
serialize = { path = "../libserialize" }
syntax = { path = "../libsyntax" }
val: MatchInput,
mut e: F)
-> Vec<Match<'a, 'p, 'blk, 'tcx>> where
- F: FnMut(&[&'p hir::Pat]) -> Option<Vec<&'p hir::Pat>>,
+ F: FnMut(&[(&'p hir::Pat, Option<Ty<'tcx>>)])
+ -> Option<Vec<(&'p hir::Pat, Option<Ty<'tcx>>)>>,
{
debug!("enter_match(bcx={}, m={:?}, col={}, val={:?})",
bcx.to_str(), m, col, val);
let _indenter = indenter();
m.iter().filter_map(|br| {
- e(&br.pats).map(|pats| {
+ let pats : Vec<_> = br.pats.iter().map(|p| (*p, None)).collect();
+ e(&pats).map(|pats| {
let this = br.pats[col];
let mut bound_ptrs = br.bound_ptrs.clone();
match this.node {
_ => {}
}
Match {
- pats: pats,
+ pats: pats.into_iter().map(|p| p.0).collect(),
data: br.data,
bound_ptrs: bound_ptrs,
pat_renaming_map: br.pat_renaming_map,
// Collect all of the matches that can match against anything.
enter_match(bcx, m, col, val, |pats| {
- match pats[col].node {
+ match pats[col].0.node {
PatKind::Binding(..) | PatKind::Wild => {
let mut r = pats[..col].to_vec();
r.extend_from_slice(&pats[col + 1..]);
ConstantValue(ConstantExpr(&l), debug_loc)
}
PatKind::Path(..) | PatKind::TupleStruct(..) | PatKind::Struct(..) => {
- let opt_def = tcx.def_map.borrow().get(&cur.id).map(|d| d.full_def());
- match opt_def {
- Some(Def::Variant(enum_id, var_id)) => {
+ match tcx.expect_def(cur.id) {
+ Def::Variant(enum_id, var_id) => {
let variant = tcx.lookup_adt_def(enum_id).variant_with_id(var_id);
Variant(Disr::from(variant.disr_val),
adt::represent_node(bcx, cur.id),
let (base, len) = vec_datum.get_vec_base_and_len(bcx);
let slice_begin = InBoundsGEP(bcx, base, &[C_uint(bcx.ccx(), offset_left)]);
- let slice_len_offset = C_uint(bcx.ccx(), offset_left + offset_right);
+ let diff = offset_left + offset_right;
+ if let ty::TyArray(ty, n) = vec_ty_contents.sty {
+ let array_ty = bcx.tcx().mk_array(ty, n-diff);
+ let llty_array = type_of::type_of(bcx.ccx(), array_ty);
+ return PointerCast(bcx, slice_begin, llty_array.ptr_to());
+ }
+
+ let slice_len_offset = C_uint(bcx.ccx(), diff);
let slice_len = Sub(bcx, len, slice_len_offset, DebugLoc::None);
let slice_ty = bcx.tcx().mk_imm_ref(bcx.tcx().mk_region(ty::ReErased),
bcx.tcx().mk_slice(unit_ty));
match pat.node {
PatKind::Tuple(..) => true,
PatKind::Struct(..) | PatKind::TupleStruct(..) | PatKind::Path(..) => {
- match tcx.def_map.borrow().get(&pat.id).unwrap().full_def() {
+ match tcx.expect_def(pat.id) {
Def::Struct(..) | Def::TyAlias(..) => true,
_ => false,
}
}
Some(field_vals)
} else if any_uniq_pat(m, col) || any_region_pat(m, col) {
- Some(vec!(Load(bcx, val.val)))
+ let ptr = if type_is_fat_ptr(bcx.tcx(), left_ty) {
+ val.val
+ } else {
+ Load(bcx, val.val)
+ };
+ Some(vec!(ptr))
} else {
match left_ty.sty {
ty::TyArray(_, n) => {
/// Checks whether the binding in `discr` is assigned to anywhere in the expression `body`
fn is_discr_reassigned(bcx: Block, discr: &hir::Expr, body: &hir::Expr) -> bool {
let (vid, field) = match discr.node {
- hir::ExprPath(..) => match bcx.def(discr.id) {
+ hir::ExprPath(..) => match bcx.tcx().expect_def(discr.id) {
Def::Local(_, vid) | Def::Upvar(_, vid, _, _) => (vid, None),
_ => return false
},
hir::ExprField(ref base, field) => {
- let vid = match bcx.tcx().def_map.borrow().get(&base.id).map(|d| d.full_def()) {
+ let vid = match bcx.tcx().expect_def_or_none(base.id) {
Some(Def::Local(_, vid)) | Some(Def::Upvar(_, vid, _, _)) => vid,
_ => return false
};
(vid, Some(mc::NamedField(field.node)))
},
hir::ExprTupField(ref base, field) => {
- let vid = match bcx.tcx().def_map.borrow().get(&base.id).map(|d| d.full_def()) {
+ let vid = match bcx.tcx().expect_def_or_none(base.id) {
Some(Def::Local(_, vid)) | Some(Def::Upvar(_, vid, _, _)) => vid,
_ => return false
};
}
}
PatKind::TupleStruct(_, ref sub_pats, ddpos) => {
- let opt_def = bcx.tcx().def_map.borrow().get(&pat.id).map(|d| d.full_def());
- match opt_def {
- Some(Def::Variant(enum_id, var_id)) => {
+ match bcx.tcx().expect_def(pat.id) {
+ Def::Variant(enum_id, var_id) => {
let repr = adt::represent_node(bcx, pat.id);
let vinfo = ccx.tcx().lookup_adt_def(enum_id).variant_with_id(var_id);
let args = extract_variant_args(bcx,
cleanup_scope);
}
}
- Some(Def::Struct(..)) => {
+ Def::Struct(..) => {
let expected_len = match *ccx.tcx().pat_ty(&pat) {
ty::TyS{sty: ty::TyStruct(adt_def, _), ..} => {
adt_def.struct_variant().fields.len()
attributes::emit_uwtable(llfndecl, true);
}
- debug!("trans_closure(..., {})", instance);
+ // this is an info! to allow collecting monomorphization statistics
+ // and to allow finding the last function before LLVM aborts from
+ // release builds.
+ info!("trans_closure(..., {})", instance);
let fn_ty = FnType::new(ccx, abi, sig, &[]);
/// Return the variant corresponding to a given node (e.g. expr)
pub fn of_node(tcx: TyCtxt<'a, 'tcx, 'tcx>, ty: Ty<'tcx>, id: ast::NodeId) -> Self {
- let node_def = tcx.def_map.borrow().get(&id).map(|v| v.full_def());
- Self::from_ty(tcx, ty, node_def)
+ Self::from_ty(tcx, ty, Some(tcx.expect_def(id)))
}
pub fn field_index(&self, name: ast::Name) -> usize {
self.tcx().map.node_to_string(id).to_string()
}
- pub fn def(&self, nid: ast::NodeId) -> Def {
- match self.tcx().def_map.borrow().get(&nid) {
- Some(v) => v.full_def(),
- None => {
- bug!("no def associated with node id {}", nid);
- }
- }
- }
-
pub fn to_str(&self) -> String {
format!("[block {:p}]", self)
}
// `def` must be its own statement and cannot be in the `match`
// otherwise the `def_map` will be borrowed for the entire match instead
// of just to get the `def` value
- let def = ccx.tcx().def_map.borrow().get(&expr.id).unwrap().full_def();
- match def {
+ match ccx.tcx().expect_def(expr.id) {
Def::Const(def_id) | Def::AssociatedConst(def_id) => {
if !ccx.tcx().tables.borrow().adjustments.contains_key(&expr.id) {
debug!("get_const_expr_as_global ({:?}): found const {:?}",
_ => break,
}
}
- let opt_def = cx.tcx().def_map.borrow().get(&cur.id).map(|d| d.full_def());
- if let Some(Def::Static(def_id, _)) = opt_def {
+ if let Some(Def::Static(def_id, _)) = cx.tcx().expect_def_or_none(cur.id) {
get_static(cx, def_id).val
} else {
// If this isn't the address of a static, then keep going through
}
},
hir::ExprPath(..) => {
- let def = cx.tcx().def_map.borrow().get(&e.id).unwrap().full_def();
- match def {
+ match cx.tcx().expect_def(e.id) {
Def::Local(_, id) => {
if let Some(val) = fn_args.and_then(|args| args.get(&id).cloned()) {
val
_ => break,
};
}
- let def = cx.tcx().def_map.borrow()[&callee.id].full_def();
let arg_vals = map_list(args)?;
- match def {
+ match cx.tcx().expect_def(callee.id) {
Def::Fn(did) | Def::Method(did) => {
const_fn_call(
cx,
let loop_id = match opt_label {
None => fcx.top_loop_scope(),
Some(_) => {
- match bcx.tcx().def_map.borrow().get(&expr.id).map(|d| d.full_def()) {
- Some(Def::Label(loop_id)) => loop_id,
+ match bcx.tcx().expect_def(expr.id) {
+ Def::Label(loop_id) => loop_id,
r => {
bug!("{:?} in def-map for label", r)
}
use syntax::{ast, codemap};
use rustc_data_structures::bitvec::BitVector;
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use rustc::hir::{self, PatKind};
// This procedure builds the *scope map* for a given function, which maps any
/// Produce DIScope DIEs for each MIR Scope which has variables defined in it.
/// If debuginfo is disabled, the returned vector is empty.
-pub fn create_mir_scopes(fcx: &FunctionContext) -> Vec<DIScope> {
+pub fn create_mir_scopes(fcx: &FunctionContext) -> IndexVec<VisibilityScope, DIScope> {
let mir = fcx.mir.clone().expect("create_mir_scopes: missing MIR for fn");
- let mut scopes = vec![ptr::null_mut(); mir.visibility_scopes.len()];
+ let mut scopes = IndexVec::from_elem(ptr::null_mut(), &mir.visibility_scopes);
let fn_metadata = match fcx.debug_context {
FunctionDebugContext::RegularContext(box ref data) => data.fn_metadata,
has_variables: &BitVector,
fn_metadata: DISubprogram,
scope: VisibilityScope,
- scopes: &mut [DIScope]) {
- let idx = scope.index();
- if !scopes[idx].is_null() {
+ scopes: &mut IndexVec<VisibilityScope, DIScope>) {
+ if !scopes[scope].is_null() {
return;
}
let scope_data = &mir.visibility_scopes[scope];
let parent_scope = if let Some(parent) = scope_data.parent_scope {
make_mir_scope(ccx, mir, has_variables, fn_metadata, parent, scopes);
- scopes[parent.index()]
+ scopes[parent]
} else {
// The root is the function itself.
- scopes[idx] = fn_metadata;
+ scopes[scope] = fn_metadata;
return;
};
- if !has_variables.contains(idx) {
+ if !has_variables.contains(scope.index()) {
// Do not create a DIScope if there are no variables
// defined in this MIR Scope, to avoid debuginfo bloat.
// our parent is the root, because we might want to
// put arguments in the root and not have shadowing.
if parent_scope != fn_metadata {
- scopes[idx] = parent_scope;
+ scopes[scope] = parent_scope;
return;
}
}
let loc = span_start(ccx, scope_data.span);
let file_metadata = file_metadata(ccx, &loc.file.name);
- scopes[idx] = unsafe {
+ scopes[scope] = unsafe {
llvm::LLVMDIBuilderCreateLexicalBlock(
DIB(ccx),
parent_scope,
// have side effects. This seems to be reached through tuple struct constructors being
// passed zero-size constants.
if let hir::ExprPath(..) = expr.node {
- match bcx.def(expr.id) {
+ match bcx.tcx().expect_def(expr.id) {
Def::Const(_) | Def::AssociatedConst(_) => {
assert!(type_is_zero_size(bcx.ccx(), bcx.tcx().node_id_to_type(expr.id)));
return bcx;
// `[x; N]` somewhere within.
match expr.node {
hir::ExprPath(..) => {
- match bcx.def(expr.id) {
+ match bcx.tcx().expect_def(expr.id) {
Def::Const(did) | Def::AssociatedConst(did) => {
let empty_substs = bcx.tcx().mk_substs(Substs::empty());
let const_expr = consts::get_const_expr(bcx.ccx(), did, expr,
trans(bcx, &e)
}
hir::ExprPath(..) => {
- let var = trans_var(bcx, bcx.def(expr.id));
+ let var = trans_var(bcx, bcx.tcx().expect_def(expr.id));
DatumBlock::new(bcx, var.to_expr_datum())
}
hir::ExprField(ref base, name) => {
trans_into(bcx, &e, dest)
}
hir::ExprPath(..) => {
- trans_def_dps_unadjusted(bcx, expr, bcx.def(expr.id), dest)
+ trans_def_dps_unadjusted(bcx, expr, bcx.tcx().expect_def(expr.id), dest)
}
hir::ExprIf(ref cond, ref thn, ref els) => {
controlflow::trans_if(bcx, expr.id, &cond, &thn, els.as_ref().map(|e| &**e), dest)
match expr.node {
hir::ExprPath(..) => {
- match tcx.resolve_expr(expr) {
+ match tcx.expect_def(expr.id) {
// Put functions and ctors with the ADTs, as they
// are zero-sized, so DPS is the cheapest option.
Def::Struct(..) | Def::Variant(..) |
extern crate rustc_data_structures;
extern crate rustc_incremental;
pub extern crate rustc_llvm as llvm;
-extern crate rustc_mir;
extern crate rustc_platform_intrinsics as intrinsics;
extern crate serialize;
extern crate rustc_const_math;
//! which do not.
use rustc_data_structures::bitvec::BitVector;
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use rustc::mir::repr as mir;
use rustc::mir::repr::TerminatorKind;
use rustc::mir::visit::{Visitor, LvalueContext};
-use rustc_mir::traversal;
+use rustc::mir::traversal;
use common::{self, Block, BlockAndBuilder};
use super::rvalue;
debug!("visit_assign(block={:?}, lvalue={:?}, rvalue={:?})", block, lvalue, rvalue);
match *lvalue {
- mir::Lvalue::Temp(index) => {
- self.mark_assigned(index as usize);
+ mir::Lvalue::Temp(temp) => {
+ self.mark_assigned(temp.index());
if !rvalue::rvalue_creates_operand(self.mir, self.bcx, rvalue) {
- self.mark_as_lvalue(index as usize);
+ self.mark_as_lvalue(temp.index());
}
}
_ => {
// Allow uses of projections of immediate pair fields.
if let mir::Lvalue::Projection(ref proj) = *lvalue {
- if let mir::Lvalue::Temp(index) = proj.base {
- let ty = self.mir.temp_decls[index as usize].ty;
+ if let mir::Lvalue::Temp(temp) = proj.base {
+ let ty = self.mir.temp_decls[temp].ty;
let ty = self.bcx.monomorphize(&ty);
if common::type_is_imm_pair(self.bcx.ccx(), ty) {
if let mir::ProjectionElem::Field(..) = proj.elem {
}
match *lvalue {
- mir::Lvalue::Temp(index) => {
+ mir::Lvalue::Temp(temp) => {
match context {
LvalueContext::Call => {
- self.mark_assigned(index as usize);
+ self.mark_assigned(temp.index());
}
LvalueContext::Consume => {
}
LvalueContext::Borrow { .. } |
LvalueContext::Slice { .. } |
LvalueContext::Projection => {
- self.mark_as_lvalue(index as usize);
+ self.mark_as_lvalue(temp.index());
}
}
}
pub fn cleanup_kinds<'bcx,'tcx>(_bcx: Block<'bcx,'tcx>,
mir: &mir::Mir<'tcx>)
- -> Vec<CleanupKind>
+ -> IndexVec<mir::BasicBlock, CleanupKind>
{
- fn discover_masters<'tcx>(result: &mut [CleanupKind], mir: &mir::Mir<'tcx>) {
- for bb in mir.all_basic_blocks() {
- let data = mir.basic_block_data(bb);
+ fn discover_masters<'tcx>(result: &mut IndexVec<mir::BasicBlock, CleanupKind>,
+ mir: &mir::Mir<'tcx>) {
+ for (bb, data) in mir.basic_blocks().iter_enumerated() {
match data.terminator().kind {
TerminatorKind::Goto { .. } |
TerminatorKind::Resume |
TerminatorKind::Return |
+ TerminatorKind::Unreachable |
TerminatorKind::If { .. } |
TerminatorKind::Switch { .. } |
TerminatorKind::SwitchInt { .. } => {
if let Some(unwind) = unwind {
debug!("cleanup_kinds: {:?}/{:?} registering {:?} as funclet",
bb, data, unwind);
- result[unwind.index()] = CleanupKind::Funclet;
+ result[unwind] = CleanupKind::Funclet;
}
}
}
}
}
- fn propagate<'tcx>(result: &mut [CleanupKind], mir: &mir::Mir<'tcx>) {
- let mut funclet_succs : Vec<_> =
- mir.all_basic_blocks().iter().map(|_| None).collect();
+ fn propagate<'tcx>(result: &mut IndexVec<mir::BasicBlock, CleanupKind>,
+ mir: &mir::Mir<'tcx>) {
+ let mut funclet_succs = IndexVec::from_elem(None, mir.basic_blocks());
let mut set_successor = |funclet: mir::BasicBlock, succ| {
- match funclet_succs[funclet.index()] {
+ match funclet_succs[funclet] {
ref mut s @ None => {
debug!("set_successor: updating successor of {:?} to {:?}",
funclet, succ);
};
for (bb, data) in traversal::reverse_postorder(mir) {
- let funclet = match result[bb.index()] {
+ let funclet = match result[bb] {
CleanupKind::NotCleanup => continue,
CleanupKind::Funclet => bb,
CleanupKind::Internal { funclet } => funclet,
};
debug!("cleanup_kinds: {:?}/{:?}/{:?} propagating funclet {:?}",
- bb, data, result[bb.index()], funclet);
+ bb, data, result[bb], funclet);
for &succ in data.terminator().successors().iter() {
- let kind = result[succ.index()];
+ let kind = result[succ];
debug!("cleanup_kinds: propagating {:?} to {:?}/{:?}",
funclet, succ, kind);
match kind {
CleanupKind::NotCleanup => {
- result[succ.index()] = CleanupKind::Internal { funclet: funclet };
+ result[succ] = CleanupKind::Internal { funclet: funclet };
}
CleanupKind::Funclet => {
set_successor(funclet, succ);
debug!("promoting {:?} to a funclet and updating {:?}", succ,
succ_funclet);
- result[succ.index()] = CleanupKind::Funclet;
+ result[succ] = CleanupKind::Funclet;
set_successor(succ_funclet, succ);
set_successor(funclet, succ);
}
}
}
- let mut result : Vec<_> =
- mir.all_basic_blocks().iter().map(|_| CleanupKind::NotCleanup).collect();
+ let mut result = IndexVec::from_elem(CleanupKind::NotCleanup, mir.basic_blocks());
discover_masters(&mut result, mir);
propagate(&mut result, mir);
pub fn trans_block(&mut self, bb: mir::BasicBlock) {
let mut bcx = self.bcx(bb);
let mir = self.mir.clone();
- let data = mir.basic_block_data(bb);
+ let data = &mir[bb];
debug!("trans_block({:?}={:?})", bb, data);
let cleanup_bundle = bcx.lpad().and_then(|l| l.bundle());
let funclet_br = |this: &Self, bcx: BlockAndBuilder, bb: mir::BasicBlock| {
- let lltarget = this.blocks[bb.index()].llbb;
+ let lltarget = this.blocks[bb].llbb;
if let Some(cp) = cleanup_pad {
- match this.cleanup_kind(bb) {
+ match this.cleanup_kinds[bb] {
CleanupKind::Funclet => {
// micro-optimization: generate a `ret` rather than a jump
// to a return block
};
let llblock = |this: &mut Self, target: mir::BasicBlock| {
- let lltarget = this.blocks[target.index()].llbb;
+ let lltarget = this.blocks[target].llbb;
if let Some(cp) = cleanup_pad {
- match this.cleanup_kind(target) {
+ match this.cleanup_kinds[target] {
CleanupKind::Funclet => {
// MSVC cross-funclet jump - need a trampoline
}
} else {
if let (CleanupKind::NotCleanup, CleanupKind::Funclet) =
- (this.cleanup_kind(bb), this.cleanup_kind(target))
+ (this.cleanup_kinds[bb], this.cleanup_kinds[target])
{
// jump *into* cleanup - need a landing pad if GNU
this.landing_pad_to(target).llbb
})
}
+ mir::TerminatorKind::Unreachable => {
+ bcx.unreachable();
+ }
+
mir::TerminatorKind::Drop { ref location, target, unwind } => {
let lvalue = self.trans_lvalue(&bcx, location);
let ty = lvalue.ty.to_ty(bcx.tcx());
if let Some(unwind) = unwind {
bcx.invoke(drop_fn,
&[llvalue],
- self.blocks[target.index()].llbb,
+ self.blocks[target].llbb,
llblock(self, unwind),
cleanup_bundle);
} else {
// Many different ways to call a function handled here
if let &Some(cleanup) = cleanup {
let ret_bcx = if let Some((_, target)) = *destination {
- self.blocks[target.index()]
+ self.blocks[target]
} else {
self.unreachable_block()
};
}
}
- fn cleanup_kind(&self, bb: mir::BasicBlock) -> CleanupKind {
- self.cleanup_kinds[bb.index()]
- }
-
/// Return the landingpad wrapper around the given basic block
///
/// No-op in MSVC SEH scheme.
fn landing_pad_to(&mut self, target_bb: mir::BasicBlock) -> Block<'bcx, 'tcx>
{
- if let Some(block) = self.landing_pads[target_bb.index()] {
+ if let Some(block) = self.landing_pads[target_bb] {
return block;
}
if base::wants_msvc_seh(self.fcx.ccx.sess()) {
- return self.blocks[target_bb.index()];
+ return self.blocks[target_bb];
}
let target = self.bcx(target_bb);
let block = self.fcx.new_block("cleanup", None);
- self.landing_pads[target_bb.index()] = Some(block);
+ self.landing_pads[target_bb] = Some(block);
let bcx = block.build();
let ccx = bcx.ccx();
pub fn init_cpad(&mut self, bb: mir::BasicBlock) {
let bcx = self.bcx(bb);
- let data = self.mir.basic_block_data(bb);
+ let data = &self.mir[bb];
debug!("init_cpad({:?})", data);
- match self.cleanup_kinds[bb.index()] {
+ match self.cleanup_kinds[bb] {
CleanupKind::NotCleanup => {
bcx.set_lpad(None)
}
}
fn bcx(&self, bb: mir::BasicBlock) -> BlockAndBuilder<'bcx, 'tcx> {
- self.blocks[bb.index()].build()
+ self.blocks[bb].build()
}
fn make_return_dest(&mut self, bcx: &BlockAndBuilder<'bcx, 'tcx>,
}
let dest = match *dest {
mir::Lvalue::Temp(idx) => {
- let lvalue_ty = self.mir.lvalue_ty(bcx.tcx(), dest);
- let lvalue_ty = bcx.monomorphize(&lvalue_ty);
- let ret_ty = lvalue_ty.to_ty(bcx.tcx());
- match self.temps[idx as usize] {
+ let ret_ty = self.lvalue_ty(dest);
+ match self.temps[idx] {
TempRef::Lvalue(dest) => dest,
TempRef::Operand(None) => {
// Handle temporary lvalues, specifically Operand ones, as
self.store_operand(bcx, cast_ptr, val);
}
+
// Stores the return value of a function call into it's final location.
fn store_return(&mut self,
bcx: &BlockAndBuilder<'bcx, 'tcx>,
Store(dst) => ret_ty.store(bcx, op.immediate(), dst),
IndirectOperand(tmp, idx) => {
let op = self.trans_load(bcx, tmp, op.ty);
- self.temps[idx as usize] = TempRef::Operand(Some(op));
+ self.temps[idx] = TempRef::Operand(Some(op));
}
DirectOperand(idx) => {
// If there is a cast, we have to store and reload.
} else {
op.unpack_if_pair(bcx)
};
- self.temps[idx as usize] = TempRef::Operand(Some(op));
+ self.temps[idx] = TempRef::Operand(Some(op));
}
}
}
// Store the return value to the pointer
Store(ValueRef),
// Stores an indirect return value to an operand temporary lvalue
- IndirectOperand(ValueRef, u32),
+ IndirectOperand(ValueRef, mir::Temp),
// Stores a direct return value to an operand temporary lvalue
- DirectOperand(u32)
+ DirectOperand(mir::Temp)
}
use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
use rustc::ty::cast::{CastTy, IntTy};
use rustc::ty::subst::Substs;
+use rustc_data_structures::indexed_vec::{Idx, IndexVec};
use {abi, adt, base, Disr};
use callee::Callee;
use common::{self, BlockAndBuilder, CrateContext, const_get_elt, val_ty};
substs: &'tcx Substs<'tcx>,
/// Arguments passed to a const fn.
- args: Vec<Const<'tcx>>,
+ args: IndexVec<mir::Arg, Const<'tcx>>,
/// Variable values - specifically, argument bindings of a const fn.
- vars: Vec<Option<Const<'tcx>>>,
+ vars: IndexVec<mir::Var, Option<Const<'tcx>>>,
/// Temp values.
- temps: Vec<Option<Const<'tcx>>>,
+ temps: IndexVec<mir::Temp, Option<Const<'tcx>>>,
/// Value assigned to Return, which is the resulting constant.
return_value: Option<Const<'tcx>>
fn new(ccx: &'a CrateContext<'a, 'tcx>,
mir: &'a mir::Mir<'tcx>,
substs: &'tcx Substs<'tcx>,
- args: Vec<Const<'tcx>>)
+ args: IndexVec<mir::Arg, Const<'tcx>>)
-> MirConstContext<'a, 'tcx> {
MirConstContext {
ccx: ccx,
mir: mir,
substs: substs,
args: args,
- vars: vec![None; mir.var_decls.len()],
- temps: vec![None; mir.temp_decls.len()],
+ vars: IndexVec::from_elem(None, &mir.var_decls),
+ temps: IndexVec::from_elem(None, &mir.temp_decls),
return_value: None
}
}
fn trans_def(ccx: &'a CrateContext<'a, 'tcx>,
mut instance: Instance<'tcx>,
- args: Vec<Const<'tcx>>)
+ args: IndexVec<mir::Arg, Const<'tcx>>)
-> Result<Const<'tcx>, ConstEvalFailure> {
// Try to resolve associated constants.
if instance.substs.self_ty().is_some() {
let mut failure = Ok(());
loop {
- let data = self.mir.basic_block_data(bb);
+ let data = &self.mir[bb];
for statement in &data.statements {
let span = statement.source_info.span;
match statement.kind {
func, fn_ty)
};
- let mut const_args = Vec::with_capacity(args.len());
+ let mut const_args = IndexVec::with_capacity(args.len());
for arg in args {
match self.const_operand(arg, span) {
- Ok(arg) => const_args.push(arg),
+ Ok(arg) => { const_args.push(arg); },
Err(err) => if failure.is_ok() { failure = Err(err); }
}
}
fn store(&mut self, dest: &mir::Lvalue<'tcx>, value: Const<'tcx>, span: Span) {
let dest = match *dest {
- mir::Lvalue::Var(index) => &mut self.vars[index as usize],
- mir::Lvalue::Temp(index) => &mut self.temps[index as usize],
+ mir::Lvalue::Var(var) => &mut self.vars[var],
+ mir::Lvalue::Temp(temp) => &mut self.temps[temp],
mir::Lvalue::ReturnPointer => &mut self.return_value,
_ => span_bug!(span, "assignment to {:?} in constant", dest)
};
-> Result<ConstLvalue<'tcx>, ConstEvalFailure> {
let tcx = self.ccx.tcx();
let lvalue = match *lvalue {
- mir::Lvalue::Var(index) => {
- self.vars[index as usize].unwrap_or_else(|| {
- span_bug!(span, "var{} not initialized", index)
+ mir::Lvalue::Var(var) => {
+ self.vars[var].unwrap_or_else(|| {
+ span_bug!(span, "{:?} not initialized", var)
}).as_lvalue()
}
- mir::Lvalue::Temp(index) => {
- self.temps[index as usize].unwrap_or_else(|| {
- span_bug!(span, "tmp{} not initialized", index)
+ mir::Lvalue::Temp(temp) => {
+ self.temps[temp].unwrap_or_else(|| {
+ span_bug!(span, "{:?} not initialized", temp)
}).as_lvalue()
}
- mir::Lvalue::Arg(index) => self.args[index as usize].as_lvalue(),
+ mir::Lvalue::Arg(arg) => self.args[arg].as_lvalue(),
mir::Lvalue::Static(def_id) => {
ConstLvalue {
base: Base::Static(consts::get_static(self.ccx, def_id).val),
let substs = self.monomorphize(&substs);
let instance = Instance::new(def_id, substs);
- MirConstContext::trans_def(self.ccx, instance, vec![])
+ MirConstContext::trans_def(self.ccx, instance, IndexVec::new())
}
mir::Literal::Promoted { index } => {
let mir = &self.mir.promoted[index];
- MirConstContext::new(self.ccx, mir, self.substs, vec![]).trans()
+ MirConstContext::new(self.ccx, mir, self.substs, IndexVec::new()).trans()
}
mir::Literal::Value { value } => {
Ok(Const::from_constval(self.ccx, value, ty))
let substs = bcx.monomorphize(&substs);
let instance = Instance::new(def_id, substs);
- MirConstContext::trans_def(bcx.ccx(), instance, vec![])
+ MirConstContext::trans_def(bcx.ccx(), instance, IndexVec::new())
}
mir::Literal::Promoted { index } => {
let mir = &self.mir.promoted[index];
- MirConstContext::new(bcx.ccx(), mir, bcx.fcx().param_substs, vec![]).trans()
+ MirConstContext::new(bcx.ccx(), mir, bcx.fcx().param_substs,
+ IndexVec::new()).trans()
}
mir::Literal::Value { value } => {
Ok(Const::from_constval(bcx.ccx(), value, ty))
pub fn trans_static_initializer(ccx: &CrateContext, def_id: DefId)
-> Result<ValueRef, ConstEvalFailure> {
let instance = Instance::mono(ccx.shared(), def_id);
- MirConstContext::trans_def(ccx, instance, vec![]).map(|c| c.llval)
+ MirConstContext::trans_def(ccx, instance, IndexVec::new()).map(|c| c.llval)
}
use rustc::ty::{self, Ty, TypeFoldable};
use rustc::mir::repr as mir;
use rustc::mir::tcx::LvalueTy;
+use rustc_data_structures::indexed_vec::Idx;
use abi;
use adt;
use base;
use consts;
use machine;
use type_of::type_of;
+use type_of;
use Disr;
use std::ptr;
use super::{MirContext, TempRef};
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
pub struct LvalueRef<'tcx> {
/// Pointer to the contents of the lvalue
pub llval: ValueRef,
let fcx = bcx.fcx();
let ccx = bcx.ccx();
let tcx = bcx.tcx();
- match *lvalue {
- mir::Lvalue::Var(index) => self.vars[index as usize],
- mir::Lvalue::Temp(index) => match self.temps[index as usize] {
+ let result = match *lvalue {
+ mir::Lvalue::Var(var) => self.vars[var],
+ mir::Lvalue::Temp(temp) => match self.temps[temp] {
TempRef::Lvalue(lvalue) =>
lvalue,
TempRef::Operand(..) =>
bug!("using operand temp {:?} as lvalue", lvalue),
},
- mir::Lvalue::Arg(index) => self.args[index as usize],
+ mir::Lvalue::Arg(arg) => self.args[arg],
mir::Lvalue::Static(def_id) => {
- let const_ty = self.mir.lvalue_ty(tcx, lvalue);
- LvalueRef::new_sized(consts::get_static(ccx, def_id).val, const_ty)
+ let const_ty = self.lvalue_ty(lvalue);
+ LvalueRef::new_sized(consts::get_static(ccx, def_id).val,
+ LvalueTy::from_ty(const_ty))
},
mir::Lvalue::ReturnPointer => {
let llval = if !fcx.fn_ty.ret.is_ignore() {
let zero = common::C_uint(bcx.ccx(), 0u64);
bcx.inbounds_gep(tr_base.llval, &[zero, llindex])
};
- (element, ptr::null_mut())
+ element
};
let (llprojected, llextra) = match projection.elem {
}
mir::ProjectionElem::Index(ref index) => {
let index = self.trans_operand(bcx, index);
- project_index(self.prepare_index(bcx, index.immediate()))
+ (project_index(self.prepare_index(bcx, index.immediate())), ptr::null_mut())
}
mir::ProjectionElem::ConstantIndex { offset,
from_end: false,
min_length: _ } => {
let lloffset = C_uint(bcx.ccx(), offset);
- project_index(self.prepare_index(bcx, lloffset))
+ (project_index(lloffset), ptr::null_mut())
}
mir::ProjectionElem::ConstantIndex { offset,
from_end: true,
let lloffset = C_uint(bcx.ccx(), offset);
let lllen = tr_base.len(bcx.ccx());
let llindex = bcx.sub(lllen, lloffset);
- project_index(self.prepare_index(bcx, llindex))
+ (project_index(llindex), ptr::null_mut())
+ }
+ mir::ProjectionElem::Subslice { from, to } => {
+ let llindex = C_uint(bcx.ccx(), from);
+ let llbase = project_index(llindex);
+
+ let base_ty = tr_base.ty.to_ty(bcx.tcx());
+ match base_ty.sty {
+ ty::TyArray(..) => {
+ // must cast the lvalue pointer type to the new
+ // array type (*[%_; new_len]).
+ let base_ty = self.lvalue_ty(lvalue);
+ let llbasety = type_of::type_of(bcx.ccx(), base_ty).ptr_to();
+ let llbase = bcx.pointercast(llbase, llbasety);
+ (llbase, ptr::null_mut())
+ }
+ ty::TySlice(..) => {
+ assert!(tr_base.llextra != ptr::null_mut());
+ let lllen = bcx.sub(tr_base.llextra,
+ C_uint(bcx.ccx(), from+to));
+ (llbase, lllen)
+ }
+ _ => bug!("unexpected type {:?} in Subslice", base_ty)
+ }
}
mir::ProjectionElem::Downcast(..) => {
(tr_base.llval, tr_base.llextra)
ty: projected_ty,
}
}
- }
+ };
+ debug!("trans_lvalue(lvalue={:?}) => {:?}", lvalue, result);
+ result
}
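The `Subslice` arithmetic above (drop `from` leading and `to` trailing elements, and for slices compute the remaining length as `llextra - (from + to)`) mirrors plain slice indexing. A minimal illustration in ordinary Rust, not taken from the compiler itself:

```rust
// Sketch of the length arithmetic the Subslice projection performs:
// skipping `from` leading and `to` trailing elements of a slice `s`
// leaves s[from .. s.len() - to].
fn subslice(s: &[u32], from: usize, to: usize) -> &[u32] {
    &s[from..s.len() - to]
}

fn main() {
    let data = [1, 2, 3, 4, 5];
    // Skip one element at the front and one at the back.
    println!("{:?}", subslice(&data, 1, 1));
}
```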
// Perform an action using the given Lvalue.
where F: FnOnce(&mut Self, LvalueRef<'tcx>) -> U
{
match *lvalue {
- mir::Lvalue::Temp(idx) => {
- match self.temps[idx as usize] {
+ mir::Lvalue::Temp(temp) => {
+ match self.temps[temp] {
TempRef::Lvalue(lvalue) => f(self, lvalue),
TempRef::Operand(None) => {
- let lvalue_ty = self.mir.lvalue_ty(bcx.tcx(), lvalue);
- let lvalue_ty = bcx.monomorphize(&lvalue_ty);
+ let lvalue_ty = self.lvalue_ty(lvalue);
let lvalue = LvalueRef::alloca(bcx,
- lvalue_ty.to_ty(bcx.tcx()),
+ lvalue_ty,
"lvalue_temp");
let ret = f(self, lvalue);
- let op = self.trans_load(bcx, lvalue.llval, lvalue_ty.to_ty(bcx.tcx()));
- self.temps[idx as usize] = TempRef::Operand(Some(op));
+ let op = self.trans_load(bcx, lvalue.llval, lvalue_ty);
+ self.temps[temp] = TempRef::Operand(Some(op));
ret
}
TempRef::Operand(Some(_)) => {
- let lvalue_ty = self.mir.lvalue_ty(bcx.tcx(), lvalue);
- let lvalue_ty = bcx.monomorphize(&lvalue_ty);
-
// See comments in TempRef::new_operand as to why
// we always have Some in a ZST TempRef::Operand.
- let ty = lvalue_ty.to_ty(bcx.tcx());
+ let ty = self.lvalue_ty(lvalue);
if common::type_is_zero_size(bcx.ccx(), ty) {
// Pass an undef pointer as no stores can actually occur.
let llptr = C_undef(type_of(bcx.ccx(), ty).ptr_to());
- f(self, LvalueRef::new_sized(llptr, lvalue_ty))
+ f(self, LvalueRef::new_sized(llptr, LvalueTy::from_ty(ty)))
} else {
bug!("Lvalue temp already set");
}
llindex
}
}
+
+ pub fn lvalue_ty(&self, lvalue: &mir::Lvalue<'tcx>) -> Ty<'tcx> {
+ let tcx = self.fcx.ccx.tcx();
+ let lvalue_ty = self.mir.lvalue_ty(tcx, lvalue);
+ self.fcx.monomorphize(&lvalue_ty.to_ty(tcx))
+ }
}
use basic_block::BasicBlock;
use rustc_data_structures::bitvec::BitVector;
+use rustc_data_structures::indexed_vec::{IndexVec, Idx};
pub use self::constant::trans_static_initializer;
use self::lvalue::{LvalueRef, get_dataptr, get_meta};
-use rustc_mir::traversal;
+use rustc::mir::traversal;
use self::operand::{OperandRef, OperandValue};
llpersonalityslot: Option<ValueRef>,
/// A `Block` for each MIR `BasicBlock`
- blocks: Vec<Block<'bcx, 'tcx>>,
+ blocks: IndexVec<mir::BasicBlock, Block<'bcx, 'tcx>>,
/// The funclet status of each basic block
- cleanup_kinds: Vec<analyze::CleanupKind>,
+ cleanup_kinds: IndexVec<mir::BasicBlock, analyze::CleanupKind>,
/// This stores the landing-pad block for a given BB, computed lazily on GNU
/// and eagerly on MSVC.
- landing_pads: Vec<Option<Block<'bcx, 'tcx>>>,
+ landing_pads: IndexVec<mir::BasicBlock, Option<Block<'bcx, 'tcx>>>,
/// Cached unreachable block
unreachable_block: Option<Block<'bcx, 'tcx>>,
/// An LLVM alloca for each MIR `VarDecl`
- vars: Vec<LvalueRef<'tcx>>,
+ vars: IndexVec<mir::Var, LvalueRef<'tcx>>,
/// The location where each MIR `TempDecl` is stored. This is
/// usually an `LvalueRef` representing an alloca, but not always:
///
/// Avoiding allocs can also be important for certain intrinsics,
/// notably `expect`.
- temps: Vec<TempRef<'tcx>>,
+ temps: IndexVec<mir::Temp, TempRef<'tcx>>,
/// The arguments to the function; as args are lvalues, these are
/// always indirect, though we try to avoid creating an alloca
/// when we can (and just reuse the pointer the caller provided).
- args: Vec<LvalueRef<'tcx>>,
+ args: IndexVec<mir::Arg, LvalueRef<'tcx>>,
/// Debug information for MIR scopes.
- scopes: Vec<DIScope>
+ scopes: IndexVec<mir::VisibilityScope, DIScope>
}
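The fields above all switch from raw `Vec<T>` indexed by `usize` (via `.index()`) to `IndexVec<I, T>` indexed by a dedicated index type. A minimal sketch of that pattern; the `Idx` trait and `IndexVec` here are simplified stand-ins for the real `rustc_data_structures::indexed_vec` types, not their actual definitions:

```rust
use std::marker::PhantomData;
use std::ops::Index;

// Simplified stand-in for rustc's `Idx` trait.
pub trait Idx: Copy {
    fn new(v: usize) -> Self;
    fn index(self) -> usize;
}

// A newtyped index, analogous to `mir::BasicBlock`.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub struct BasicBlock(u32);

impl Idx for BasicBlock {
    fn new(v: usize) -> Self { BasicBlock(v as u32) }
    fn index(self) -> usize { self.0 as usize }
}

// A vector that can only be indexed by its dedicated index type `I`.
pub struct IndexVec<I: Idx, T> {
    raw: Vec<T>,
    _marker: PhantomData<I>,
}

impl<I: Idx, T> IndexVec<I, T> {
    pub fn new() -> Self {
        IndexVec { raw: Vec::new(), _marker: PhantomData }
    }
    pub fn push(&mut self, t: T) -> I {
        let idx = I::new(self.raw.len());
        self.raw.push(t);
        idx
    }
}

impl<I: Idx, T> Index<I> for IndexVec<I, T> {
    type Output = T;
    fn index(&self, i: I) -> &T { &self.raw[i.index()] }
}

pub fn demo() -> &'static str {
    let mut blocks: IndexVec<BasicBlock, &'static str> = IndexVec::new();
    let start = blocks.push("start");
    let cleanup = blocks.push("cleanup");
    // Indexing takes the typed `BasicBlock`, not a raw usize, so mixing
    // up block indices with, say, temp indices is a compile error.
    assert_eq!(blocks[cleanup], "cleanup");
    blocks[start]
}

fn main() {
    println!("{}", demo());
}
```

This is why the diff can drop the scattered `bb.index()` and `idx as usize` conversions: the index type itself carries the information.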
impl<'blk, 'tcx> MirContext<'blk, 'tcx> {
pub fn debug_loc(&self, source_info: mir::SourceInfo) -> DebugLoc {
- DebugLoc::ScopeAt(self.scopes[source_info.scope.index()], source_info.span)
+ DebugLoc::ScopeAt(self.scopes[source_info.scope], source_info.span)
}
}
let bcx = fcx.init(false, None).build();
let mir = bcx.mir();
- let mir_blocks = mir.all_basic_blocks();
-
// Analyze the temps to determine which must be lvalues
// FIXME
let (lvalue_temps, cleanup_kinds) = bcx.with_block(|bcx| {
.map(|(mty, decl)| {
let lvalue = LvalueRef::alloca(&bcx, mty, &decl.name.as_str());
- let scope = scopes[decl.source_info.scope.index()];
+ let scope = scopes[decl.source_info.scope];
if !scope.is_null() && bcx.sess().opts.debuginfo == FullDebugInfo {
bcx.with_block(|bcx| {
declare_local(bcx, decl.name, mty, scope,
.collect();
// Allocate a `Block` for every basic block
- let block_bcxs: Vec<Block<'blk,'tcx>> =
- mir_blocks.iter()
- .map(|&bb|{
- if bb == mir::START_BLOCK {
- fcx.new_block("start", None)
- } else {
- fcx.new_block(&format!("{:?}", bb), None)
- }
- })
- .collect();
+ let block_bcxs: IndexVec<mir::BasicBlock, Block<'blk,'tcx>> =
+ mir.basic_blocks().indices().map(|bb| {
+ if bb == mir::START_BLOCK {
+ fcx.new_block("start", None)
+ } else {
+ fcx.new_block(&format!("{:?}", bb), None)
+ }
+ }).collect();
// Branch to the START block
- let start_bcx = block_bcxs[mir::START_BLOCK.index()];
+ let start_bcx = block_bcxs[mir::START_BLOCK];
bcx.br(start_bcx.llbb);
// Up until here, IR instructions for this function have explicitly not been annotated with
blocks: block_bcxs,
unreachable_block: None,
cleanup_kinds: cleanup_kinds,
- landing_pads: mir_blocks.iter().map(|_| None).collect(),
+ landing_pads: IndexVec::from_elem(None, mir.basic_blocks()),
vars: vars,
temps: temps,
args: args,
scopes: scopes
};
- let mut visited = BitVector::new(mir_blocks.len());
+ let mut visited = BitVector::new(mir.basic_blocks().len());
let mut rpo = traversal::reverse_postorder(&mir);
// Remove blocks that haven't been visited, or have no
// predecessors.
- for &bb in &mir_blocks {
- let block = mircx.blocks[bb.index()];
+ for bb in mir.basic_blocks().indices() {
+ let block = mircx.blocks[bb];
let block = BasicBlock(block.llbb);
// Unreachable block
if !visited.contains(bb.index()) {
/// indirect.
fn arg_value_refs<'bcx, 'tcx>(bcx: &BlockAndBuilder<'bcx, 'tcx>,
mir: &mir::Mir<'tcx>,
- scopes: &[DIScope])
- -> Vec<LvalueRef<'tcx>> {
+ scopes: &IndexVec<mir::VisibilityScope, DIScope>)
+ -> IndexVec<mir::Arg, LvalueRef<'tcx>> {
let fcx = bcx.fcx();
let tcx = bcx.tcx();
let mut idx = 0;
let mut llarg_idx = fcx.fn_ty.ret.is_indirect() as usize;
// Get the argument scope, if it exists and if we need it.
- let arg_scope = scopes[mir::ARGUMENT_VISIBILITY_SCOPE.index()];
+ let arg_scope = scopes[mir::ARGUMENT_VISIBILITY_SCOPE];
let arg_scope = if !arg_scope.is_null() && bcx.sess().opts.debuginfo == FullDebugInfo {
Some(arg_scope)
} else {
use llvm::ValueRef;
use rustc::ty::Ty;
use rustc::mir::repr as mir;
+use rustc_data_structures::indexed_vec::Idx;
+
use base;
use common::{self, Block, BlockAndBuilder};
use value::Value;
// watch out for temporaries that do not have an
// alloca; they are handled somewhat differently
if let &mir::Lvalue::Temp(index) = lvalue {
- match self.temps[index as usize] {
+ match self.temps[index] {
TempRef::Operand(Some(o)) => {
return o;
}
// Moves out of pair fields are trivial.
if let &mir::Lvalue::Projection(ref proj) = lvalue {
if let mir::Lvalue::Temp(index) = proj.base {
- let temp_ref = &self.temps[index as usize];
+ let temp_ref = &self.temps[index];
if let &TempRef::Operand(Some(o)) = temp_ref {
match (o.val, &proj.elem) {
(OperandValue::Pair(a, b),
use super::MirContext;
use super::constant::const_scalar_checked_binop;
use super::operand::{OperandRef, OperandValue};
-use super::lvalue::{LvalueRef, get_dataptr, get_meta};
+use super::lvalue::{LvalueRef, get_dataptr};
impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> {
pub fn trans_rvalue(&mut self,
bcx
}
- mir::Rvalue::Slice { ref input, from_start, from_end } => {
- let ccx = bcx.ccx();
- let input = self.trans_lvalue(&bcx, input);
- let ty = input.ty.to_ty(bcx.tcx());
- let (llbase1, lllen) = match ty.sty {
- ty::TyArray(_, n) => {
- (bcx.gepi(input.llval, &[0, from_start]), C_uint(ccx, n))
- }
- ty::TySlice(_) | ty::TyStr => {
- (bcx.gepi(input.llval, &[from_start]), input.llextra)
- }
- _ => bug!("cannot slice {}", ty)
- };
- let adj = C_uint(ccx, from_start + from_end);
- let lllen1 = bcx.sub(lllen, adj);
- bcx.store(llbase1, get_dataptr(&bcx, dest.llval));
- bcx.store(lllen1, get_meta(&bcx, dest.llval));
- bcx
- }
-
mir::Rvalue::InlineAsm { ref asm, ref outputs, ref inputs } => {
let outputs = outputs.iter().map(|output| {
let lvalue = self.trans_lvalue(&bcx, output);
}
mir::Rvalue::Repeat(..) |
mir::Rvalue::Aggregate(..) |
- mir::Rvalue::Slice { .. } |
mir::Rvalue::InlineAsm { .. } => {
bug!("cannot generate operand from rvalue {:?}", rvalue);
true,
mir::Rvalue::Repeat(..) |
mir::Rvalue::Aggregate(..) |
- mir::Rvalue::Slice { .. } |
mir::Rvalue::InlineAsm { .. } =>
false,
}
// except according to those terms.
use rustc::mir::repr as mir;
+
use common::{self, BlockAndBuilder};
use super::MirContext;
mir::StatementKind::Assign(ref lvalue, ref rvalue) => {
match *lvalue {
mir::Lvalue::Temp(index) => {
- let index = index as usize;
- match self.temps[index as usize] {
+ match self.temps[index] {
TempRef::Lvalue(tr_dest) => {
self.trans_rvalue(bcx, tr_dest, rvalue, debug_loc)
}
bcx
}
TempRef::Operand(Some(_)) => {
- let ty = self.mir.lvalue_ty(bcx.tcx(), lvalue);
- let ty = bcx.monomorphize(&ty.to_ty(bcx.tcx()));
+ let ty = self.lvalue_ty(lvalue);
if !common::type_is_zero_size(bcx.ccx(), ty) {
span_bug!(statement.source_info.span,
fn trait_def_id(&self, trait_ref: &hir::TraitRef) -> DefId {
let path = &trait_ref.path;
- match ::lookup_full_def(self.tcx(), path.span, trait_ref.ref_id) {
+ match self.tcx().expect_def(trait_ref.ref_id) {
Def::Trait(trait_def_id) => trait_def_id,
Def::Err => {
self.tcx().sess.fatal("cannot continue compilation due to previous error");
match ty.node {
hir::TyPath(None, ref path) => {
- let def = match self.tcx().def_map.borrow().get(&ty.id) {
- Some(&def::PathResolution { base_def, depth: 0, .. }) => Some(base_def),
- _ => None
- };
- match def {
- Some(Def::Trait(trait_def_id)) => {
+ let resolution = self.tcx().expect_resolution(ty.id);
+ match resolution.base_def {
+ Def::Trait(trait_def_id) if resolution.depth == 0 => {
let mut projection_bounds = Vec::new();
let trait_ref =
self.object_path_to_poly_trait_ref(rscope,
}
hir::TyPath(ref maybe_qself, ref path) => {
debug!("ast_ty_to_ty: maybe_qself={:?} path={:?}", maybe_qself, path);
- let path_res = if let Some(&d) = tcx.def_map.borrow().get(&ast_ty.id) {
- d
- } else if let Some(hir::QSelf { position: 0, .. }) = *maybe_qself {
- // Create some fake resolution that can't possibly be a type.
- def::PathResolution {
- base_def: Def::Mod(tcx.map.local_def_id(ast::CRATE_NODE_ID)),
- depth: path.segments.len()
- }
- } else {
- span_bug!(ast_ty.span, "unbound path {:?}", ast_ty)
- };
+ let path_res = tcx.expect_resolution(ast_ty.id);
let def = path_res.base_def;
let base_ty_end = path.segments.len() - path_res.depth;
let opt_self_ty = maybe_qself.as_ref().map(|qself| {
if path_res.depth != 0 && ty.sty != ty::TyError {
// Write back the new resolution.
- tcx.def_map.borrow_mut().insert(ast_ty.id, def::PathResolution {
- base_def: def,
- depth: 0
- });
+ tcx.def_map.borrow_mut().insert(ast_ty.id, def::PathResolution::new(def));
}
ty
for ast_bound in ast_bounds {
match *ast_bound {
hir::TraitTyParamBound(ref b, hir::TraitBoundModifier::None) => {
- match ::lookup_full_def(tcx, b.trait_ref.path.span, b.trait_ref.ref_id) {
+ match tcx.expect_def(b.trait_ref.ref_id) {
Def::Trait(trait_did) => {
if tcx.try_add_builtin_trait(trait_did,
&mut builtin_bounds) {
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use hir::def::{self, Def};
+use hir::def::Def;
use rustc::infer::{self, InferOk, TypeOrigin};
-use hir::pat_util::{PatIdMap, pat_id_map};
use hir::pat_util::{EnumerateAndAdjustIterator, pat_is_resolved_const};
use rustc::ty::subst::Substs;
use rustc::ty::{self, Ty, TypeFoldable, LvaluePreference};
use util::nodemap::FnvHashMap;
use session::Session;
-use std::cmp;
use std::collections::hash_map::Entry::{Occupied, Vacant};
-use std::ops::Deref;
+use std::cmp;
use syntax::ast;
use syntax::codemap::{Span, Spanned};
use syntax::ptr::P;
use rustc::hir::{self, PatKind};
use rustc::hir::print as pprust;
-pub struct PatCtxt<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
- pub fcx: &'a FnCtxt<'a, 'gcx, 'tcx>,
- pub map: PatIdMap,
-}
-
-impl<'a, 'gcx, 'tcx> Deref for PatCtxt<'a, 'gcx, 'tcx> {
- type Target = FnCtxt<'a, 'gcx, 'tcx>;
- fn deref(&self) -> &Self::Target {
- self.fcx
- }
-}
-
// This function exists due to the warning "diagnostic code E0164 already used"
fn bad_struct_kind_err(sess: &Session, pat: &hir::Pat, path: &hir::Path, lint: bool) {
let name = pprust::path_to_string(path);
}
}
-impl<'a, 'gcx, 'tcx> PatCtxt<'a, 'gcx, 'tcx> {
+impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
pub fn check_pat(&self, pat: &'gcx hir::Pat, expected: Ty<'tcx>) {
let tcx = self.tcx;
self.demand_eqtype(pat.span, expected, lhs_ty);
}
PatKind::Path(..) if pat_is_resolved_const(&tcx.def_map.borrow(), pat) => {
- if let Some(pat_def) = tcx.def_map.borrow().get(&pat.id) {
- let const_did = pat_def.def_id();
- let const_scheme = tcx.lookup_item_type(const_did);
- assert!(const_scheme.generics.is_empty());
- let const_ty = self.instantiate_type_scheme(pat.span,
- &Substs::empty(),
- &const_scheme.ty);
- self.write_ty(pat.id, const_ty);
-
- // FIXME(#20489) -- we should limit the types here to scalars or something!
-
- // As with PatKind::Lit, what we really want here is that there
- // exist a LUB, but for the cases that can occur, subtype
- // is good enough.
- self.demand_suptype(pat.span, expected, const_ty);
- } else {
- self.write_error(pat.id);
- }
- }
- PatKind::Binding(bm, ref path, ref sub) => {
+ let const_did = tcx.expect_def(pat.id).def_id();
+ let const_scheme = tcx.lookup_item_type(const_did);
+ assert!(const_scheme.generics.is_empty());
+ let const_ty = self.instantiate_type_scheme(pat.span,
+ &Substs::empty(),
+ &const_scheme.ty);
+ self.write_ty(pat.id, const_ty);
+
+ // FIXME(#20489) -- we should limit the types here to scalars or something!
+
+ // As with PatKind::Lit, what we really want here is that there
+ // exist a LUB, but for the cases that can occur, subtype
+ // is good enough.
+ self.demand_suptype(pat.span, expected, const_ty);
+ }
+ PatKind::Binding(bm, _, ref sub) => {
let typ = self.local_ty(pat.span, pat.id);
match bm {
hir::BindByRef(mutbl) => {
// if there are multiple arms, make sure they all agree on
// what the type of the binding `x` ought to be
- if let Some(&canon_id) = self.map.get(&path.node) {
- if canon_id != pat.id {
- let ct = self.local_ty(pat.span, canon_id);
- self.demand_eqtype(pat.span, ct, typ);
+ match tcx.expect_def(pat.id) {
+ Def::Err => {}
+ Def::Local(_, var_id) => {
+ if var_id != pat.id {
+ let vt = self.local_ty(pat.span, var_id);
+ self.demand_eqtype(pat.span, vt, typ);
+ }
}
+ d => bug!("bad def for pattern binding `{:?}`", d)
+ }
- if let Some(ref p) = *sub {
- self.check_pat(&p, expected);
- }
+ if let Some(ref p) = *sub {
+ self.check_pat(&p, expected);
}
}
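The check above demands that a binding introduced in multiple alternative patterns resolves to one canonical local with a single type; in surface Rust that is the familiar or-pattern rule. A minimal illustration, not derived from the compiler code itself:

```rust
// Illustrates the rule enforced above: when the same binding appears in
// several alternatives of a pattern, every occurrence must have the
// same type (here both `x` bindings unify to i32).
fn unwrap_either(v: Result<i32, i32>) -> i32 {
    match v {
        Ok(x) | Err(x) => x,
    }
}

fn main() {
    println!("{}", unwrap_either(Ok(7)));
}
```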
PatKind::TupleStruct(ref path, ref subpats, ddpos) => {
}
PatKind::QPath(ref qself, ref path) => {
let self_ty = self.to_ty(&qself.ty);
- let path_res = if let Some(&d) = tcx.def_map.borrow().get(&pat.id) {
- if d.base_def == Def::Err {
- self.set_tainted_by_errors();
- self.write_error(pat.id);
- return;
- }
- d
- } else if qself.position == 0 {
- // This is just a sentinel for finish_resolving_def_to_ty.
- let sentinel = self.tcx.map.local_def_id(ast::CRATE_NODE_ID);
- def::PathResolution {
- base_def: Def::Mod(sentinel),
- depth: path.segments.len()
- }
- } else {
- debug!("unbound path {:?}", pat);
+ let path_res = tcx.expect_resolution(pat.id);
+ if path_res.base_def == Def::Err {
+ self.set_tainted_by_errors();
self.write_error(pat.id);
return;
- };
+ }
if let Some((opt_ty, segments, def)) =
self.resolve_ty_and_def_ufcs(path_res, Some(self_ty),
path, pat.span, pat.id) {
}
PatKind::Vec(ref before, ref slice, ref after) => {
let expected_ty = self.structurally_resolved_type(pat.span, expected);
- let inner_ty = self.next_ty_var();
- let pat_ty = match expected_ty.sty {
- ty::TyArray(_, size) => tcx.mk_array(inner_ty, {
+ let (inner_ty, slice_ty) = match expected_ty.sty {
+ ty::TyArray(inner_ty, size) => {
let min_len = before.len() + after.len();
- match *slice {
- Some(_) => cmp::max(min_len, size),
- None => min_len
+ if slice.is_none() {
+ if min_len != size {
+ span_err!(tcx.sess, pat.span, E0527,
+ "pattern requires {} elements but array has {}",
+ min_len, size);
+ }
+ (inner_ty, tcx.types.err)
+ } else if let Some(rest) = size.checked_sub(min_len) {
+ (inner_ty, tcx.mk_array(inner_ty, rest))
+ } else {
+ span_err!(tcx.sess, pat.span, E0528,
+ "pattern requires at least {} elements but array has {}",
+ min_len, size);
+ (inner_ty, tcx.types.err)
}
- }),
+ }
+ ty::TySlice(inner_ty) => (inner_ty, expected_ty),
_ => {
- let region = self.next_region_var(infer::PatternRegion(pat.span));
- tcx.mk_ref(tcx.mk_region(region), ty::TypeAndMut {
- ty: tcx.mk_slice(inner_ty),
- mutbl: expected_ty.builtin_deref(true, ty::NoPreference)
- .map_or(hir::MutImmutable, |mt| mt.mutbl)
- })
+ if !expected_ty.references_error() {
+ let mut err = struct_span_err!(
+ tcx.sess, pat.span, E0529,
+ "expected an array or slice, found `{}`",
+ expected_ty);
+ if let ty::TyRef(_, ty::TypeAndMut { mutbl: _, ty }) = expected_ty.sty {
+ match ty.sty {
+ ty::TyArray(..) | ty::TySlice(..) => {
+ err.help("the semantics of slice patterns changed \
+ recently; see issue #23121");
+ }
+ _ => {}
+ }
+ }
+ err.emit();
+ }
+ (tcx.types.err, tcx.types.err)
}
};
- self.write_ty(pat.id, pat_ty);
-
- // `demand::subtype` would be good enough, but using
- // `eqtype` turns out to be equally general. See (*)
- // below for details.
- self.demand_eqtype(pat.span, expected, pat_ty);
+ self.write_ty(pat.id, expected_ty);
for elt in before {
self.check_pat(&elt, inner_ty);
}
if let Some(ref slice) = *slice {
- let region = self.next_region_var(infer::PatternRegion(pat.span));
- let mutbl = expected_ty.builtin_deref(true, ty::NoPreference)
- .map_or(hir::MutImmutable, |mt| mt.mutbl);
-
- let slice_ty = tcx.mk_ref(tcx.mk_region(region), ty::TypeAndMut {
- ty: tcx.mk_slice(inner_ty),
- mutbl: mutbl
- });
self.check_pat(&slice, slice_ty);
}
for elt in after {
}
}
-
// (*) In most of the cases above (literals and constants being
// the exception), we relate types using strict equality, even
// though subtyping would be sufficient. There are a few reasons
// Typecheck the patterns first, so that we get types for all the
// bindings.
for arm in arms {
- let pcx = PatCtxt {
- fcx: self,
- map: pat_id_map(&arm.pats[0]),
- };
for p in &arm.pats {
- pcx.check_pat(&p, discrim_ty);
+ self.check_pat(&p, discrim_ty);
}
}
}
}
-impl<'a, 'gcx, 'tcx> PatCtxt<'a, 'gcx, 'tcx> {
+impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
pub fn check_pat_struct(&self, pat: &'gcx hir::Pat,
path: &hir::Path, fields: &'gcx [Spanned<hir::FieldPat>],
etc: bool, expected: Ty<'tcx>) {
let tcx = self.tcx;
- let def = tcx.def_map.borrow().get(&pat.id).unwrap().full_def();
+ let def = tcx.expect_def(pat.id);
let variant = match self.def_struct_variant(def, path.span) {
Some((_, variant)) => variant,
None => {
// Typecheck the path.
let tcx = self.tcx;
- let path_res = match tcx.def_map.borrow().get(&pat.id) {
- Some(&path_res) if path_res.base_def != Def::Err => path_res,
- _ => {
- self.set_tainted_by_errors();
- self.write_error(pat.id);
+ let path_res = tcx.expect_resolution(pat.id);
+ if path_res.base_def == Def::Err {
+ self.set_tainted_by_errors();
+ self.write_error(pat.id);
- for pat in subpats {
- self.check_pat(&pat, tcx.types.err);
- }
- return;
+ for pat in subpats {
+ self.check_pat(&pat, tcx.types.err);
}
- };
+ return;
+ }
let (opt_ty, segments, def) = match self.resolve_ty_and_def_ufcs(path_res,
None, path,
let tcx = self.tcx;
if let Some(pr) = tcx.def_map.borrow().get(&expr.id) {
if pr.depth == 0 && pr.base_def != Def::Err {
- if let Some(span) = tcx.map.span_if_local(pr.def_id()) {
+ if let Some(span) = tcx.map.span_if_local(pr.base_def.def_id()) {
err.span_note(span, "defined here");
}
}
use self::TupleArgumentsFlag::*;
use astconv::{AstConv, ast_region_to_region, PathParamMode};
-use check::_match::PatCtxt;
use dep_graph::DepNode;
use fmt_macros::{Parser, Piece, Position};
use middle::cstore::LOCAL_CRATE;
use hir::def::{self, Def};
use hir::def_id::DefId;
+use hir::pat_util;
use rustc::infer::{self, InferCtxt, InferOk, TypeOrigin, TypeTrace, type_variable};
-use hir::pat_util::{self, pat_id_map};
use rustc::ty::subst::{self, Subst, Substs, VecPerParamSpace, ParamSpace};
use rustc::traits::{self, ProjectionMode};
use rustc::ty::{GenericPredicates, TypeScheme};
use require_c_abi_if_variadic;
use rscope::{ElisionFailureInfo, RegionScope};
use session::{Session, CompileResult};
-use {CrateCtxt, lookup_full_def};
+use CrateCtxt;
use TypeAndSubsts;
use lint;
use util::common::{block_query, ErrorReported, indenter, loop_query};
});
// Check the pattern.
- let pcx = PatCtxt {
- fcx: &fcx,
- map: pat_id_map(&input.pat),
- };
- pcx.check_pat(&input.pat, *arg_ty);
+ fcx.check_pat(&input.pat, *arg_ty);
}
visit.visit_block(body);
let tcx = self.tcx;
// Find the relevant variant
- let def = lookup_full_def(tcx, path.span, expr.id);
+ let def = tcx.expect_def(expr.id);
if def == Def::Err {
self.set_tainted_by_errors();
self.check_struct_fields_on_error(expr.id, fields, base_expr);
self.to_ty(&qself.ty)
});
- let path_res = if let Some(&d) = tcx.def_map.borrow().get(&id) {
- d
- } else if let Some(hir::QSelf { position: 0, .. }) = *maybe_qself {
- // Create some fake resolution that can't possibly be a type.
- def::PathResolution {
- base_def: Def::Mod(tcx.map.local_def_id(ast::CRATE_NODE_ID)),
- depth: path.segments.len()
- }
- } else {
- span_bug!(expr.span, "unbound path {:?}", expr)
- };
-
+ let path_res = tcx.expect_resolution(id);
if let Some((opt_ty, segments, def)) =
self.resolve_ty_and_def_ufcs(path_res, opt_self_ty, path,
expr.span, expr.id) {
if let Some(def) = def {
// Write back the new resolution.
- self.tcx().def_map.borrow_mut().insert(node_id, def::PathResolution {
- base_def: def,
- depth: 0,
- });
+ self.tcx().def_map.borrow_mut().insert(node_id, def::PathResolution::new(def));
Some((Some(ty), slice::ref_slice(item_segment), def))
} else {
self.write_error(node_id);
}
}
- let pcx = PatCtxt {
- fcx: self,
- map: pat_id_map(&local.pat),
- };
- pcx.check_pat(&local.pat, t);
+ self.check_pat(&local.pat, t);
let pat_ty = self.node_ty(local.pat.id);
if pat_ty.references_error() {
self.write_ty(local.id, pat_ty);
// <id> nested anywhere inside the loop?
(block_query(b, |e| {
if let hir::ExprBreak(Some(_)) = e.node {
- lookup_full_def(tcx, e.span, e.id) == Def::Label(id)
+ tcx.expect_def(e.id) == Def::Label(id)
} else {
false
}
debug!("link_pattern(discr_cmt={:?}, root_pat={:?})",
discr_cmt,
root_pat);
- let _ = mc.cat_pattern(discr_cmt, root_pat, |mc, sub_cmt, sub_pat| {
+ let _ = mc.cat_pattern(discr_cmt, root_pat, |_, sub_cmt, sub_pat| {
match sub_pat.node {
// `ref x` pattern
PatKind::Binding(hir::BindByRef(mutbl), _, _) => {
self.link_region_from_node_type(sub_pat.span, sub_pat.id,
mutbl, sub_cmt);
}
-
- // `[_, ..slice, _]` pattern
- PatKind::Vec(_, Some(ref slice_pat), _) => {
- match mc.cat_slice_pattern(sub_cmt, &slice_pat) {
- Ok((slice_cmt, slice_mutbl, slice_r)) => {
- self.link_region(sub_pat.span, &slice_r,
- ty::BorrowKind::from_mutbl(slice_mutbl),
- slice_cmt);
- }
- Err(()) => {}
- }
- }
_ => {}
}
});
-> bool
{
if let hir::TyPath(None, _) = ast_ty.node {
- let path_res = *tcx.def_map.borrow().get(&ast_ty.id).unwrap();
+ let path_res = tcx.expect_resolution(ast_ty.id);
match path_res.base_def {
- Def::SelfTy(Some(def_id), None) => {
- path_res.depth == 0 && def_id == tcx.map.local_def_id(param_id)
- }
- Def::TyParam(_, _, def_id, _) => {
- path_res.depth == 0 && def_id == tcx.map.local_def_id(param_id)
- }
- _ => {
- false
+ Def::SelfTy(Some(def_id), None) |
+ Def::TyParam(_, _, def_id, _) if path_res.depth == 0 => {
+ def_id == tcx.map.local_def_id(param_id)
}
+ _ => false
}
} else {
false
match unbound {
Some(ref tpb) => {
// FIXME(#8559) currently requires the unbound to be built-in.
- let trait_def_id = tcx.trait_ref_to_def_id(tpb);
+ let trait_def_id = tcx.expect_def(tpb.ref_id).def_id();
match kind_id {
Ok(kind_id) if trait_def_id != kind_id => {
tcx.sess.span_warn(span,
```compile_fail
#[repr(i32)]
-enum NightWatch {} // error: unsupported representation for zero-variant enum
+enum NightsWatch {} // error: unsupported representation for zero-variant enum
```
It is impossible to define an integer type to be used to represent zero-variant
```
#[repr(i32)]
-enum NightWatch {
- JohnSnow,
+enum NightsWatch {
+ JonSnow,
Commander,
}
```
or you remove the integer representation of your enum:
```
-enum NightWatch {}
+enum NightsWatch {}
```
"##,
// E0239, // `next` method of `Iterator` trait has unexpected type
// E0240,
// E0241,
- E0242, // internal error looking up a definition
+// E0242,
E0245, // not a trait
// E0246, // invalid recursive type
// E0247,
// type `{}` was overridden
E0436, // functional record update requires a struct
E0513, // no type for local variable ..
- E0521 // redundant default implementations of trait
+ E0521, // redundant default implementations of trait
+ E0527, // expected {} elements, found {}
+ E0528, // expected at least {} elements, found {}
+ E0529, // slice pattern expects array or slice, not `{}`
}
use dep_graph::DepNode;
use hir::map as hir_map;
-use hir::def::Def;
use rustc::infer::TypeOrigin;
use rustc::ty::subst::Substs;
use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
}
}
-fn lookup_full_def(tcx: TyCtxt, sp: Span, id: ast::NodeId) -> Def {
- match tcx.def_map.borrow().get(&id) {
- Some(x) => x.full_def(),
- None => {
- span_fatal!(tcx.sess, sp, E0242, "internal error looking up a definition")
- }
- }
-}
-
fn require_c_abi_if_variadic(tcx: TyCtxt,
decl: &hir::FnDecl,
abi: Abi,
/// Basic usage:
///
/// ```
- /// assert_eq!('C'.to_lowercase().next(), Some('c'));
+ /// assert_eq!('C'.to_lowercase().collect::<String>(), "c");
+ ///
+ /// // Sometimes the result is more than one character:
+ /// assert_eq!('İ'.to_lowercase().collect::<String>(), "i\u{307}");
///
/// // Japanese scripts do not have case, and so:
- /// assert_eq!('山'.to_lowercase().next(), Some('山'));
+ /// assert_eq!('山'.to_lowercase().collect::<String>(), "山");
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
/// Basic usage:
///
/// ```
- /// assert_eq!('c'.to_uppercase().next(), Some('C'));
+ /// assert_eq!('c'.to_uppercase().collect::<String>(), "C");
+ ///
+ /// // Sometimes the result is more than one character:
+ /// assert_eq!('ß'.to_uppercase().collect::<String>(), "SS");
///
/// // Japanese does not have case, and so:
- /// assert_eq!('山'.to_uppercase().next(), Some('山'));
+ /// assert_eq!('山'.to_uppercase().collect::<String>(), "山");
/// ```
///
/// In Turkish, the equivalent of 'i' in Latin has five forms instead of two:
/// Note that the lowercase dotted 'i' is the same as the Latin. Therefore:
///
/// ```
- /// let upper_i = 'i'.to_uppercase().next();
+ /// let upper_i: String = 'i'.to_uppercase().collect();
/// ```
///
/// The value of `upper_i` here relies on the language of the text: if we're
- /// in `en-US`, it should be `Some('I')`, but if we're in `tr_TR`, it should
- /// be `Some('İ')`. `to_uppercase()` does not take this into account, and so:
+ /// in `en-US`, it should be `"I"`, but if we're in `tr_TR`, it should
+ /// be `"İ"`. `to_uppercase()` does not take this into account, and so:
///
/// ```
- /// let upper_i = 'i'.to_uppercase().next();
+ /// let upper_i: String = 'i'.to_uppercase().collect();
///
- /// assert_eq!(Some('I'), upper_i);
+ /// assert_eq!(upper_i, "I");
/// ```
///
/// holds across languages.
use rustc::hir::print as pprust;
use rustc::ty::{self, TyCtxt};
use rustc::ty::subst;
+use rustc::util::common::slice_pat;
use rustc_const_eval::lookup_const_by_id;
Some(tcx) => tcx,
None => return None,
};
- let def = match tcx.def_map.borrow().get(&id) {
- Some(d) => d.full_def(),
+ let def = match tcx.expect_def_or_none(id) {
+ Some(def) => def,
None => return None,
};
let did = def.def_id();
let variant = tcx.lookup_adt_def(did).struct_variant();
clean::Struct {
- struct_type: match &*variant.fields {
- [] => doctree::Unit,
- [_] if variant.kind == ty::VariantKind::Tuple => doctree::Newtype,
- [..] if variant.kind == ty::VariantKind::Tuple => doctree::Tuple,
+ struct_type: match slice_pat(&&*variant.fields) {
+ &[] => doctree::Unit,
+ &[_] if variant.kind == ty::VariantKind::Tuple => doctree::Newtype,
+ &[..] if variant.kind == ty::VariantKind::Tuple => doctree::Tuple,
_ => doctree::Plain,
},
generics: (&t.generics, &predicates, subst::TypeSpace).clean(cx),
};
}
};
- let def = tcx.def_map.borrow().get(&id).expect("unresolved id not in defmap").full_def();
+ let def = tcx.expect_def(id);
debug!("resolve_type: def={:?}", def);
let is_generic = match def {
fn resolve_def(cx: &DocContext, id: ast::NodeId) -> Option<DefId> {
cx.tcx_opt().and_then(|tcx| {
- tcx.def_map.borrow().get(&id).map(|d| register_def(cx, d.full_def()))
+ tcx.expect_def_or_none(id).map(|def| register_def(cx, def))
})
}
use rustc::middle::cstore::LOCAL_CRATE;
use rustc::hir::def_id::{CRATE_DEF_INDEX, DefId};
+use rustc::util::common::slice_pat;
use syntax::abi::Abi;
use rustc::hir;
decl.decl)
}
clean::Tuple(ref typs) => {
- match &**typs {
- [] => primitive_link(f, clean::PrimitiveTuple, "()"),
- [ref one] => {
+ match slice_pat(&&**typs) {
+ &[] => primitive_link(f, clean::PrimitiveTuple, "()"),
+ &[ref one] => {
primitive_link(f, clean::PrimitiveTuple, "(")?;
write!(f, "{},", one)?;
primitive_link(f, clean::PrimitiveTuple, ")")
}
}
- write!(w, "</table>")
+ if curty.is_some() {
+ write!(w, "</table>")?;
+ }
+ Ok(())
}
fn short_stability(item: &clean::Item, cx: &Context, show_reason: bool) -> Vec<String> {
Some(tcx) => tcx,
None => return false
};
- let def = tcx.def_map.borrow()[&id];
+ let def = tcx.expect_def(id);
let def_did = def.def_id();
let use_attrs = tcx.map.attrs(id).clean(self.cx);
// reachable in documentation - a previously nonreachable item can be
// made reachable by cross-crate inlining which we're checking here.
// (this is done here because we need to know this upfront)
- if !def.def_id().is_local() && !is_no_inline {
+ if !def_did.is_local() && !is_no_inline {
let attrs = clean::inline::load_attrs(self.cx, tcx, def_did);
let self_is_hidden = attrs.list("doc").has_word("hidden");
- match def.base_def {
+ match def {
Def::Trait(did) |
Def::Struct(did) |
Def::Enum(did) |
// the rustdoc documentation for primitive types. Using `include!`
// because rustdoc only looks for these modules at the crate level.
include!("primitive_docs.rs");
+
+// FIXME(stage0): remove this after a snapshot
+// HACK: this is needed because the interpretation of slice
+// patterns changed between stage0 and now.
+#[cfg(stage0)]
+fn slice_pat<'a, 'b, T>(t: &'a &'b [T]) -> &'a &'b [T] {
+ t
+}
+#[cfg(not(stage0))]
+fn slice_pat<'a, 'b, T>(t: &'a &'b [T]) -> &'b [T] {
+ *t
+}
// Ensure the borrowchecker works
match queue.peek() {
- Some(vec) => match &**vec {
- // Note that `pop` is not allowed here due to borrow
- [1] => {}
- _ => return
+ Some(vec) => {
+ assert_eq!(&*vec, &[1]);
},
None => unreachable!()
}
- queue.pop();
+ match queue.pop() {
+ Some(vec) => {
+ assert_eq!(&*vec, &[1]);
+ },
+ None => unreachable!()
+ }
}
}
if len < 3 {
return None
}
- match &self.bytes[(len - 3)..] {
- [0xED, b2 @ 0xA0...0xAF, b3] => Some(decode_surrogate(b2, b3)),
+ match ::slice_pat(&&self.bytes[(len - 3)..]) {
+ &[0xED, b2 @ 0xA0...0xAF, b3] => Some(decode_surrogate(b2, b3)),
_ => None
}
}
if len < 3 {
return None
}
- match &self.bytes[..3] {
- [0xED, b2 @ 0xB0...0xBF, b3] => Some(decode_surrogate(b2, b3)),
+ match ::slice_pat(&&self.bytes[..3]) {
+ &[0xED, b2 @ 0xB0...0xBF, b3] => Some(decode_surrogate(b2, b3)),
_ => None
}
}
impl DirEntry {
fn new(root: &Arc<PathBuf>, wfd: &c::WIN32_FIND_DATAW) -> Option<DirEntry> {
- match &wfd.cFileName[0..3] {
+ match ::slice_pat(&&wfd.cFileName[0..3]) {
// check for '.' and '..'
- [46, 0, ..] |
- [46, 46, 0, ..] => return None,
+ &[46, 0, ..] |
+ &[46, 46, 0, ..] => return None,
_ => {}
}
syntax_expanders
}
+pub trait MacroLoader {
+ fn load_crate(&mut self, extern_crate: &ast::Item, allows_macros: bool) -> Vec<ast::MacroDef>;
+}
+
+pub struct DummyMacroLoader;
+impl MacroLoader for DummyMacroLoader {
+ fn load_crate(&mut self, _: &ast::Item, _: bool) -> Vec<ast::MacroDef> {
+ Vec::new()
+ }
+}
+
/// One of these is made during expansion and incrementally updated as we go;
/// when a macro expansion occurs, the resulting nodes have the backtrace()
/// -> expn_info of their expansion context stored into their span.
pub ecfg: expand::ExpansionConfig<'a>,
pub crate_root: Option<&'static str>,
pub feature_gated_cfgs: &'a mut Vec<GatedCfgAttr>,
+ pub loader: &'a mut MacroLoader,
pub mod_path: Vec<ast::Ident> ,
pub exported_macros: Vec<ast::MacroDef>,
impl<'a> ExtCtxt<'a> {
pub fn new(parse_sess: &'a parse::ParseSess, cfg: ast::CrateConfig,
ecfg: expand::ExpansionConfig<'a>,
- feature_gated_cfgs: &'a mut Vec<GatedCfgAttr>) -> ExtCtxt<'a> {
+ feature_gated_cfgs: &'a mut Vec<GatedCfgAttr>,
+ loader: &'a mut MacroLoader)
+ -> ExtCtxt<'a> {
let env = initial_syntax_expander_table(&ecfg);
ExtCtxt {
parse_sess: parse_sess,
crate_root: None,
feature_gated_cfgs: feature_gated_cfgs,
exported_macros: Vec::new(),
+ loader: loader,
syntax_env: env,
recursion_count: 0,
let last_chain_index = self.chain.len() - 1;
&mut self.chain[last_chain_index].info
}
+
+ pub fn is_crate_root(&mut self) -> bool {
+ // The first frame is pushed in `SyntaxEnv::new()` and the second frame is
+ // pushed when folding the crate root pseudo-module (cf. noop_fold_crate).
+ self.chain.len() == 2
+ }
}
let new_items: SmallVector<Annotatable> = match a {
Annotatable::Item(it) => match it.node {
ast::ItemKind::Mac(..) => {
- let new_items: SmallVector<P<ast::Item>> = it.and_then(|it| match it.node {
+ it.and_then(|it| match it.node {
ItemKind::Mac(mac) =>
expand_mac_invoc(mac, Some(it.ident), it.attrs, it.span, fld),
_ => unreachable!(),
- });
-
- new_items.into_iter().map(|i| Annotatable::Item(i)).collect()
+ })
}
ast::ItemKind::Mod(_) | ast::ItemKind::ForeignMod(_) => {
let valid_ident =
if valid_ident {
fld.cx.mod_pop();
}
- result.into_iter().map(|i| Annotatable::Item(i)).collect()
+ result
},
- _ => noop_fold_item(it, fld).into_iter().map(|i| Annotatable::Item(i)).collect(),
- },
+ ast::ItemKind::ExternCrate(_) => {
+ // We need to error on `#[macro_use] extern crate` when it isn't at the
+ // crate root, because `$crate` won't work properly.
+ let allows_macros = fld.cx.syntax_env.is_crate_root();
+ for def in fld.cx.loader.load_crate(&it, allows_macros) {
+ fld.cx.insert_macro(def);
+ }
+ SmallVector::one(it)
+ },
+ _ => noop_fold_item(it, fld),
+ }.into_iter().map(|i| Annotatable::Item(i)).collect(),
Annotatable::TraitItem(it) => match it.node {
ast::TraitItemKind::Method(_, Some(_)) => {
}
pub fn expand_crate(mut cx: ExtCtxt,
- // these are the macros being imported to this crate:
- imported_macros: Vec<ast::MacroDef>,
user_exts: Vec<NamedSyntaxExtension>,
c: Crate) -> (Crate, HashSet<Name>) {
if std_inject::no_core(&c) {
let ret = {
let mut expander = MacroExpander::new(&mut cx);
- for def in imported_macros {
- expander.cx.insert_macro(def);
- }
-
for (name, extension) in user_exts {
expander.cx.syntax_env.insert(name, extension);
}
use ast;
use ast::Name;
use codemap;
- use ext::base::ExtCtxt;
+ use ext::base::{ExtCtxt, DummyMacroLoader};
use ext::mtwt;
use fold::Folder;
use parse;
src,
Vec::new(), &sess).unwrap();
// should fail:
- let mut gated_cfgs = vec![];
- let ecx = ExtCtxt::new(&sess, vec![], test_ecfg(), &mut gated_cfgs);
- expand_crate(ecx, vec![], vec![], crate_ast);
+ let (mut gated_cfgs, mut loader) = (vec![], DummyMacroLoader);
+ let ecx = ExtCtxt::new(&sess, vec![], test_ecfg(), &mut gated_cfgs, &mut loader);
+ expand_crate(ecx, vec![], crate_ast);
}
// make sure that macros can't escape modules
"<test>".to_string(),
src,
Vec::new(), &sess).unwrap();
- let mut gated_cfgs = vec![];
- let ecx = ExtCtxt::new(&sess, vec![], test_ecfg(), &mut gated_cfgs);
- expand_crate(ecx, vec![], vec![], crate_ast);
+ let (mut gated_cfgs, mut loader) = (vec![], DummyMacroLoader);
+ let ecx = ExtCtxt::new(&sess, vec![], test_ecfg(), &mut gated_cfgs, &mut loader);
+ expand_crate(ecx, vec![], crate_ast);
}
// macro_use modules should allow macros to escape
"<test>".to_string(),
src,
Vec::new(), &sess).unwrap();
- let mut gated_cfgs = vec![];
- let ecx = ExtCtxt::new(&sess, vec![], test_ecfg(), &mut gated_cfgs);
- expand_crate(ecx, vec![], vec![], crate_ast);
+ let (mut gated_cfgs, mut loader) = (vec![], DummyMacroLoader);
+ let ecx = ExtCtxt::new(&sess, vec![], test_ecfg(), &mut gated_cfgs, &mut loader);
+ expand_crate(ecx, vec![], crate_ast);
}
fn expand_crate_str(crate_str: String) -> ast::Crate {
let ps = parse::ParseSess::new();
let crate_ast = panictry!(string_to_parser(&ps, crate_str).parse_crate_mod());
// the cfg argument actually does matter, here...
- let mut gated_cfgs = vec![];
- let ecx = ExtCtxt::new(&ps, vec![], test_ecfg(), &mut gated_cfgs);
- expand_crate(ecx, vec![], vec![], crate_ast).0
+ let (mut gated_cfgs, mut loader) = (vec![], DummyMacroLoader);
+ let ecx = ExtCtxt::new(&ps, vec![], test_ecfg(), &mut gated_cfgs, &mut loader);
+ expand_crate(ecx, vec![], crate_ast).0
}
// find the pat_ident paths in a crate
use attr;
use codemap::{DUMMY_SP, Span, ExpnInfo, NameAndSpan, MacroAttribute};
use codemap;
-use fold::Folder;
-use fold;
use parse::token::{intern, InternedString, keywords};
use parse::{token, ParseSess};
use ptr::P;
-use util::small_vector::SmallVector;
/// Craft a span that will be ignored by the stability lint's
/// call to codemap's is_internal check.
return sp;
}
-pub fn maybe_inject_crates_ref(krate: ast::Crate, alt_std_name: Option<String>)
- -> ast::Crate {
- if no_core(&krate) {
- krate
- } else {
- let name = if no_std(&krate) {"core"} else {"std"};
- let mut fold = CrateInjector {
- item_name: token::str_to_ident(name),
- crate_name: token::intern(&alt_std_name.unwrap_or(name.to_string())),
- };
- fold.fold_crate(krate)
- }
-}
-
-pub fn maybe_inject_prelude(sess: &ParseSess, krate: ast::Crate) -> ast::Crate {
- if no_core(&krate) {
- krate
- } else {
- let name = if no_std(&krate) {"core"} else {"std"};
- let mut fold = PreludeInjector {
- span: ignored_span(sess, DUMMY_SP),
- crate_identifier: token::str_to_ident(name),
- };
- fold.fold_crate(krate)
- }
-}
-
pub fn no_core(krate: &ast::Crate) -> bool {
attr::contains_name(&krate.attrs, "no_core")
}
attr::contains_name(&krate.attrs, "no_std") || no_core(krate)
}
-fn no_prelude(attrs: &[ast::Attribute]) -> bool {
- attr::contains_name(attrs, "no_implicit_prelude")
-}
-
-struct CrateInjector {
- item_name: ast::Ident,
- crate_name: ast::Name,
-}
-
-impl fold::Folder for CrateInjector {
- fn fold_crate(&mut self, mut krate: ast::Crate) -> ast::Crate {
- krate.module.items.insert(0, P(ast::Item {
- id: ast::DUMMY_NODE_ID,
- ident: self.item_name,
- attrs: vec!(
- attr::mk_attr_outer(attr::mk_attr_id(), attr::mk_word_item(
- InternedString::new("macro_use")))),
- node: ast::ItemKind::ExternCrate(Some(self.crate_name)),
- vis: ast::Visibility::Inherited,
- span: DUMMY_SP
- }));
-
- krate
- }
-}
-
-struct PreludeInjector {
- span: Span,
- crate_identifier: ast::Ident,
-}
-
-impl fold::Folder for PreludeInjector {
- fn fold_crate(&mut self, mut krate: ast::Crate) -> ast::Crate {
- // only add `use std::prelude::*;` if there wasn't a
- // `#![no_implicit_prelude]` at the crate level.
- // fold_mod() will insert glob path.
- if !no_prelude(&krate.attrs) {
- krate.module = self.fold_mod(krate.module);
- }
- krate
- }
-
- fn fold_item(&mut self, item: P<ast::Item>) -> SmallVector<P<ast::Item>> {
- if !no_prelude(&item.attrs) {
- // only recur if there wasn't `#![no_implicit_prelude]`
- // on this item, i.e. this means that the prelude is not
- // implicitly imported though the whole subtree
- fold::noop_fold_item(item, self)
- } else {
- SmallVector::one(item)
- }
+pub fn maybe_inject_crates_ref(sess: &ParseSess,
+ mut krate: ast::Crate,
+ alt_std_name: Option<String>)
+ -> ast::Crate {
+ if no_core(&krate) {
+ return krate;
}
- fn fold_mod(&mut self, mut mod_: ast::Mod) -> ast::Mod {
- let prelude_path = ast::Path {
- span: self.span,
+ let name = if no_std(&krate) { "core" } else { "std" };
+ let crate_name = token::intern(&alt_std_name.unwrap_or(name.to_string()));
+
+ krate.module.items.insert(0, P(ast::Item {
+ attrs: vec![attr::mk_attr_outer(attr::mk_attr_id(),
+ attr::mk_word_item(InternedString::new("macro_use")))],
+ vis: ast::Visibility::Inherited,
+ node: ast::ItemKind::ExternCrate(Some(crate_name)),
+ ident: token::str_to_ident(name),
+ id: ast::DUMMY_NODE_ID,
+ span: DUMMY_SP,
+ }));
+
+ let span = ignored_span(sess, DUMMY_SP);
+ krate.module.items.insert(0, P(ast::Item {
+ attrs: vec![ast::Attribute {
+ node: ast::Attribute_ {
+ style: ast::AttrStyle::Outer,
+ value: P(ast::MetaItem {
+ node: ast::MetaItemKind::Word(token::intern_and_get_ident("prelude_import")),
+ span: span,
+ }),
+ id: attr::mk_attr_id(),
+ is_sugared_doc: false,
+ },
+ span: span,
+ }],
+ vis: ast::Visibility::Inherited,
+ node: ast::ItemKind::Use(P(codemap::dummy_spanned(ast::ViewPathGlob(ast::Path {
global: false,
- segments: vec![
- ast::PathSegment {
- identifier: self.crate_identifier,
- parameters: ast::PathParameters::none(),
- },
- ast::PathSegment {
- identifier: token::str_to_ident("prelude"),
- parameters: ast::PathParameters::none(),
- },
- ast::PathSegment {
- identifier: token::str_to_ident("v1"),
- parameters: ast::PathParameters::none(),
- },
- ],
- };
-
- let vp = P(codemap::dummy_spanned(ast::ViewPathGlob(prelude_path)));
- mod_.items.insert(0, P(ast::Item {
- id: ast::DUMMY_NODE_ID,
- ident: keywords::Invalid.ident(),
- node: ast::ItemKind::Use(vp),
- attrs: vec![ast::Attribute {
- span: self.span,
- node: ast::Attribute_ {
- id: attr::mk_attr_id(),
- style: ast::AttrStyle::Outer,
- value: P(ast::MetaItem {
- span: self.span,
- node: ast::MetaItemKind::Word(
- token::intern_and_get_ident("prelude_import")
- ),
- }),
- is_sugared_doc: false,
- },
- }],
- vis: ast::Visibility::Inherited,
- span: self.span,
- }));
-
- fold::noop_fold_mod(mod_, self)
- }
+ segments: vec![name, "prelude", "v1"].into_iter().map(|name| ast::PathSegment {
+ identifier: token::str_to_ident(name),
+ parameters: ast::PathParameters::none(),
+ }).collect(),
+ span: span,
+ })))),
+ id: ast::DUMMY_NODE_ID,
+ ident: keywords::Invalid.ident(),
+ span: span,
+ }));
+
+ krate
}
use errors;
use config;
use entry::{self, EntryPointType};
-use ext::base::ExtCtxt;
+use ext::base::{ExtCtxt, DummyMacroLoader};
use ext::build::AstBuilder;
use ext::expand::ExpansionConfig;
use fold::Folder;
let krate = cleaner.fold_crate(krate);
let mut feature_gated_cfgs = vec![];
+ let mut loader = DummyMacroLoader;
let mut cx: TestCtxt = TestCtxt {
sess: sess,
span_diagnostic: sd,
ext_cx: ExtCtxt::new(sess, vec![],
ExpansionConfig::default("test".to_string()),
- &mut feature_gated_cfgs),
+ &mut feature_gated_cfgs,
+ &mut loader),
path: Vec::new(),
testfns: Vec::new(),
reexport_test_harness_main: reexport_test_harness_main,
"graphviz 0.0.0",
"log 0.0.0",
"rustc 0.0.0",
+ "rustc_data_structures 0.0.0",
"rustc_mir 0.0.0",
"syntax 0.0.0",
]
"rustc_back 0.0.0",
"rustc_bitflags 0.0.0",
"rustc_metadata 0.0.0",
- "rustc_mir 0.0.0",
"syntax 0.0.0",
]
"rustc_data_structures 0.0.0",
"rustc_incremental 0.0.0",
"rustc_llvm 0.0.0",
- "rustc_mir 0.0.0",
"rustc_platform_intrinsics 0.0.0",
"serialize 0.0.0",
"syntax 0.0.0",
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(rustc_private)]
+macro_rules! m {
+ () => { #[macro_use] extern crate syntax; }
+}
+m!();
+
+fn main() {
+ help!(); //~ ERROR unexpected end of macro invocation
+}
fn main() {
match () {
- Trait { x: 42 } => () //~ ERROR `Trait` does not name a struct
+ Trait { x: 42 } => () //~ ERROR expected variant, struct or type alias, found trait `Trait`
+ //~^ ERROR `Trait` does not name a struct or a struct variant
}
}
fn main() {
let ps = syntax::parse::ParseSess::new();
+ let mut loader = syntax::ext::base::DummyMacroLoader;
let mut cx = syntax::ext::base::ExtCtxt::new(
&ps, vec![],
syntax::ext::expand::ExpansionConfig::default("qquote".to_string()),
- &mut Vec::new());
+ &mut Vec::new(), &mut loader);
cx.bt_push(syntax::codemap::ExpnInfo {
call_site: DUMMY_SP,
callee: syntax::codemap::NameAndSpan {
fn main() {
assert_eq!(1, bar1::Foo::ID);
- //~^ERROR associated const `ID` is private
+ //~^ERROR associated constant `ID` is private
}
fn main() {
let bar = 5;
- //~^ ERROR cannot be named the same
+ //~^ ERROR let bindings cannot shadow structs
use foo::bar;
}
Foo { string: "baz".to_string() }
);
let x: &[Foo] = &x;
- match x {
- [_, tail..] => {
+ match *x {
+ [_, ref tail..] => {
match tail {
- [Foo { string: a },
+ &[Foo { string: a },
//~^ ERROR cannot move out of borrowed content
//~| cannot move out
//~| to prevent move
- Foo { string: b }] => {
+ Foo { string: b }] => {
//~^ NOTE and here
}
_ => {
let vec = vec!(1, 2, 3, 4);
let vec: &[isize] = &vec; //~ ERROR does not live long enough
let tail = match vec {
- [_, tail..] => tail,
+ &[_, ref tail..] => tail,
_ => panic!("a")
};
tail
let vec = vec!(1, 2, 3, 4);
let vec: &[isize] = &vec; //~ ERROR does not live long enough
let init = match vec {
- [init.., _] => init,
+ &[ref init.., _] => init,
_ => panic!("b")
};
init
let vec = vec!(1, 2, 3, 4);
let vec: &[isize] = &vec; //~ ERROR does not live long enough
let slice = match vec {
- [_, slice.., _] => slice,
+ &[_, ref slice.., _] => slice,
_ => panic!("c")
};
slice
let mut v = vec!(1, 2, 3);
let vb: &mut [isize] = &mut v;
match vb {
- [_a, tail..] => {
+ &mut [_a, ref tail..] => {
v.push(tail[0] + tail[1]); //~ ERROR cannot borrow
}
_ => {}
fn main() {
let mut a = [1, 2, 3, 4];
let t = match a {
- [1, 2, tail..] => tail,
+ [1, 2, ref tail..] => tail,
_ => unreachable!()
};
println!("t[0]: {}", t[0]);
let mut vec = vec!(box 1, box 2, box 3);
let vec: &mut [Box<isize>] = &mut vec;
match vec {
- [_b..] => {
+ &mut [ref _b..] => {
//~^ borrow of `vec[..]` occurs here
vec[0] = box 4; //~ ERROR cannot assign
//~^ assignment to borrowed `vec[..]` occurs here
let mut vec = vec!(box 1, box 2, box 3);
let vec: &mut [Box<isize>] = &mut vec;
match vec {
- [_a, //~ ERROR cannot move out
- //~| cannot move out
- //~| to prevent move
- _b..] => {
+ &mut [_a, //~ ERROR cannot move out of borrowed content
+ //~| cannot move out
+ //~| to prevent move
+ ..
+ ] => {
// Note: `_a` is *moved* here, but `b` is borrowing,
// hence illegal.
//
let mut vec = vec!(box 1, box 2, box 3);
let vec: &mut [Box<isize>] = &mut vec;
match vec {
- [_a.., //~ ERROR cannot move out
+ &mut [ //~ ERROR cannot move out
//~^ cannot move out
_b] => {} //~ NOTE to prevent move
_ => {}
let mut vec = vec!(box 1, box 2, box 3);
let vec: &mut [Box<isize>] = &mut vec;
match vec {
- [_a, _b, _c] => {} //~ ERROR cannot move out
+ &mut [_a, _b, _c] => {} //~ ERROR cannot move out
//~| cannot move out
//~| NOTE to prevent move
//~| NOTE and here
let vec = vec!(1, 2, 3, 4);
let vec: &[isize] = &vec; //~ ERROR `vec` does not live long enough
let tail = match vec {
- [_a, tail..] => &tail[0],
+ &[_a, ref tail..] => &tail[0],
_ => panic!("foo")
};
tail
const a: u8 = 2; //~ NOTE is defined here
fn main() {
- let a = 4; //~ ERROR let variables cannot
- //~^ NOTE cannot be named the same as a const variable
- let c = 4; //~ ERROR let variables cannot
- //~^ NOTE cannot be named the same as a const variable
- let d = 4; //~ ERROR let variables cannot
- //~^ NOTE cannot be named the same as a const variable
+ let a = 4; //~ ERROR let bindings cannot shadow constants
+ //~^ NOTE cannot be named the same as a constant
+ let c = 4; //~ ERROR let bindings cannot shadow constants
+ //~^ NOTE cannot be named the same as a constant
+ let d = 4; //~ ERROR let bindings cannot shadow constants
+ //~^ NOTE cannot be named the same as a constant
}
// XEmpty1() => () // ERROR unresolved enum variant, struct or const `XEmpty1`
// }
match e1 {
- Empty1(..) => () //~ ERROR unresolved enum variant, struct or const `Empty1`
+ Empty1(..) => () //~ ERROR unresolved variant or struct `Empty1`
}
match xe1 {
- XEmpty1(..) => () //~ ERROR unresolved enum variant, struct or const `XEmpty1`
+ XEmpty1(..) => () //~ ERROR unresolved variant or struct `XEmpty1`
}
}
struct hello(isize);
fn main() {
- let hello = 0; //~ERROR cannot be named the same
+ let hello = 0; //~ERROR let bindings cannot shadow structs
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-//error-pattern:unresolved enum variant
-
fn main() {
- // a bug in the parser is allowing this:
- let a(1) = 13;
+ let a(1) = 13; //~ ERROR unresolved variant or struct `a`
}
fn main() {
match Foo(true) {
- foo(x) //~ ERROR `foo` is not an enum variant, struct or const
+ foo(x) //~ ERROR expected variant or struct, found function `foo`
=> ()
}
}
fn main() {
let sl = vec![1,2,3];
let v: isize = match &*sl {
- [] => 0,
- [a,b,c] => 3,
- [a, rest..] => a,
- [10,a, rest..] => 10 //~ ERROR: unreachable pattern
+ &[] => 0,
+ &[a,b,c] => 3,
+ &[a, ref rest..] => a,
+ &[10,a, ref rest..] => 10 //~ ERROR: unreachable pattern
};
}
fn match_vecs<'a, T>(l1: &'a [T], l2: &'a [T]) {
match (l1, l2) {
- ([], []) => println!("both empty"),
- ([], [hd, tl..]) | ([hd, tl..], []) => println!("one empty"),
- //~^ ERROR: cannot move out of borrowed content
+ (&[], &[]) => println!("both empty"),
+ (&[], &[hd, ..]) | (&[hd, ..], &[])
+ => println!("one empty"),
//~^^ ERROR: cannot move out of borrowed content
- ([hd1, tl1..], [hd2, tl2..]) => println!("both nonempty"),
- //~^ ERROR: cannot move out of borrowed content
+ //~^^^ ERROR: cannot move out of borrowed content
+ (&[hd1, ..], &[hd2, ..])
+ => println!("both nonempty"),
//~^^ ERROR: cannot move out of borrowed content
+ //~^^^ ERROR: cannot move out of borrowed content
}
}
fn main() {
match () {
- foo::bar => {} //~ ERROR `bar` is not an enum variant, struct or const
+ foo::bar => {} //~ ERROR expected variant, struct or constant, found function `bar`
}
}
fn main() {
let x = [1,2];
let y = match x {
- [] => None,
-//~^ ERROR mismatched types
-//~| expected type `[_#1i; 2]`
-//~| found type `[_#7t; 0]`
-//~| expected an array with a fixed size of 2 elements, found one with 0 elements
+ [] => None, //~ ERROR pattern requires 0 elements but array has 2
[a,_] => Some(a)
};
}
fn main() {
let x = [1,2];
let y = match x {
- [] => None,
- //~^ ERROR mismatched types
- //~| expected type `[_; 2]`
- //~| found type `[_; 0]`
- //~| expected an array with a fixed size of 2 elements
+ [] => None, //~ ERROR pattern requires 0 elements but array has 2
[a,_] => Some(a)
};
}
fn main() {
let values: Vec<u8> = vec![1,2,3,4,5,6,7,8];
- for [x,y,z] in values.chunks(3).filter(|&xs| xs.len() == 3) {
- //~^ ERROR refutable pattern in `for` loop binding: `[]` not covered
+ for &[x,y,z] in values.chunks(3).filter(|&xs| xs.len() == 3) {
+ //~^ ERROR refutable pattern in `for` loop binding: `&[]` not covered
println!("y={}", y);
}
}
fn main() {
let boolValue = match 42 {
externalValue => true,
- //~^ ERROR static variables cannot be referenced in a pattern
+ //~^ ERROR match bindings cannot shadow statics
_ => false
};
}
fn main() {
match Foo::Bar(1) {
- Foo { i } => () //~ ERROR `Foo` does not name a struct or a struct variant
+ Foo { i } => () //~ ERROR expected variant, struct or type alias, found enum `Foo`
+ //~^ ERROR `Foo` does not name a struct or a struct variant
}
}
extern crate issue_17718_const_privacy as other;
-use a::B; //~ ERROR: const `B` is private
+use a::B; //~ ERROR: constant `B` is private
use other::{
FOO,
- BAR, //~ ERROR: const `BAR` is private
+ BAR, //~ ERROR: constant `BAR` is private
FOO2,
};
fn main() {
match 1 {
- A1 => {} //~ ERROR: static variables cannot be referenced in a pattern
- A2 => {} //~ ERROR: static variables cannot be referenced in a pattern
+ A1 => {} //~ ERROR: match bindings cannot shadow statics
+ A2 => {} //~ ERROR: match bindings cannot shadow statics
A3 => {}
_ => {}
}
fn main() {
match 1 {
self::X => { },
- //~^ ERROR static variables cannot be referenced in a pattern, use a `const` instead
+ //~^ ERROR expected variant, struct or constant, found static `X`
_ => { },
}
}
// except according to those terms.
static foo: i32 = 0;
-//~^ NOTE static variable defined here
+//~^ NOTE a static `foo` is defined here
fn bar(foo: i32) {}
-//~^ ERROR static variables cannot be referenced in a pattern, use a `const` instead
-//~| static variable used in pattern
+//~^ ERROR function parameters cannot shadow statics
+//~| cannot be named the same as a static
mod submod {
pub static answer: i32 = 42;
}
use self::submod::answer;
-//~^ NOTE static variable imported here
+//~^ NOTE a static `answer` is imported here
fn question(answer: i32) {}
-//~^ ERROR static variables cannot be referenced in a pattern, use a `const` instead
-//~| static variable used in pattern
+//~^ ERROR function parameters cannot shadow statics
+//~| cannot be named the same as a static
fn main() {
}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+enum Sexpression {
+ Num(()),
+ Cons(&'static mut Sexpression)
+}
+
+fn causes_ice(mut l: &mut Sexpression) {
+ loop { match l {
+ &mut Sexpression::Num(ref mut n) => {},
+ &mut Sexpression::Cons(ref mut expr) => { //~ ERROR cannot borrow `l.0`
+ //~| ERROR cannot borrow `l.0`
+ l = &mut **expr; //~ ERROR cannot assign to `l`
+ }
+ }}
+}
+
+fn main() {
+}
fn main() {
match 'a' {
char{ch} => true
- //~^ ERROR `char` does not name a struct or a struct variant
+ //~^ ERROR expected variant, struct or type alias, found builtin type `char`
+ //~| ERROR `char` does not name a struct or a struct variant
};
}
fn main() {
match Some(1) {
- None @ _ => {} //~ ERROR cannot be named the same
+ None @ _ => {} //~ ERROR match bindings cannot shadow variants
};
const C: u8 = 1;
match 1 {
- C @ 2 => { //~ ERROR cannot be named the same
+ C @ 2 => { //~ ERROR match bindings cannot shadow constant
println!("{}", C);
}
_ => {}
let u = A { x: 1 }; //~ ERROR `A` does not name a structure
let v = u32 { x: 1 }; //~ ERROR `u32` does not name a structure
match () {
- A { x: 1 } => {} //~ ERROR `A` does not name a struct
- u32 { x: 1 } => {} //~ ERROR `u32` does not name a struct
+ A { x: 1 } => {} //~ ERROR expected variant, struct or type alias, found module `A`
+ //~^ ERROR `A` does not name a struct or a struct variant
+ u32 { x: 1 } => {} //~ ERROR expected variant, struct or type alias, found builtin type `u32
+ //~^ ERROR `u32` does not name a struct or a struct variant
}
}
}
fn main() {
- if let C1(..) = 0 {} //~ ERROR `C1` does not name a tuple variant or a tuple struct
+ if let C1(..) = 0 {} //~ ERROR expected variant or struct, found constant `C1`
if let S::C2(..) = 0 {} //~ ERROR `S::C2` does not name a tuple variant or a tuple struct
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ match "world" { //~ ERROR non-exhaustive patterns: `&_`
+ "hello" => {}
+ }
+
+ match "world" { //~ ERROR non-exhaustive patterns: `&_`
+ ref _x if false => {}
+ "hello" => {}
+ "hello" => {} //~ ERROR unreachable pattern
+ }
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+struct S(u8);
+const C: S = S(10);
+
+fn main() {
+ let C(a) = S(11); //~ ERROR expected variant or struct, found constant `C`
+ let C(..) = S(11); //~ ERROR expected variant or struct, found constant `C`
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+const C: u8 = 0; //~ NOTE a constant `C` is defined here
+
+fn main() {
+ match 1u8 {
+ mut C => {} //~ ERROR match bindings cannot shadow constants
+ //~^ NOTE cannot be named the same as a constant
+ _ => {}
+ }
+}
fn main() {
let z = match 3 {
- x(1) => x(1) //~ ERROR unresolved enum variant
+ x(1) => x(1) //~ ERROR unresolved variant or struct `x`
//~^ ERROR unresolved name `x`
};
assert!(z == 3);
fn main() {
match () {
[()] => { }
- //~^ ERROR mismatched types
- //~| expected type `()`
- //~| found type `&[_]`
- //~| expected (), found &-ptr
+ //~^ ERROR expected an array or slice, found `()`
}
}
fn main() {
match "foo".to_string() {
- ['f', 'o', ..] => {} //~ ERROR mismatched types
+ ['f', 'o', ..] => {}
+ //~^ ERROR expected an array or slice, found `std::string::String`
_ => { }
- }
+ };
+
+ match &[0, 1, 2] {
+ [..] => {} //~ ERROR expected an array or slice, found `&[_; 3]`
+ };
+
+ match &[0, 1, 2] {
+ &[..] => {} // ok
+ };
+
+ match [0, 1, 2] {
+ [0] => {}, //~ ERROR pattern requires
+
+ [0, 1, x..] => {
+ let a: [_; 1] = x;
+ }
+ [0, 1, 2, 3, x..] => {} //~ ERROR pattern requires
+ };
+
+ match does_not_exist { //~ ERROR unresolved name
+ [] => {}
+ };
+}
+
+fn another_fn_to_avoid_suppression() {
+ match Default::default()
+ {
+ [] => {} //~ ERROR the type of this value
+ };
}
fn main() {
let x: Vec<(isize, isize)> = Vec::new();
let x: &[(isize, isize)] = &x;
- match x {
+ match *x {
[a, (2, 3), _] => (),
[(1, 2), (2, 3), b] => (), //~ ERROR unreachable pattern
_ => ()
"bar".to_string(),
"baz".to_string()];
let x: &[String] = &x;
- match x {
+ match *x {
[a, _, _, ..] => { println!("{}", a); }
[_, _, _, _, _] => { } //~ ERROR unreachable pattern
_ => { }
let x: Vec<char> = vec!('a', 'b', 'c');
let x: &[char] = &x;
- match x {
- ['a', 'b', 'c', _tail..] => {}
+ match *x {
+ ['a', 'b', 'c', ref _tail..] => {}
['a', 'b', 'c'] => {} //~ ERROR unreachable pattern
_ => {}
}
fn main() {
match 0u32 {
<Foo as MyTrait>::trait_bar => {}
- //~^ ERROR `trait_bar` is not an associated const
+ //~^ ERROR expected associated constant, found method `trait_bar`
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// error-pattern:cannot be named the same
use std::option::*;
fn main() {
- let None: isize = 42;
+ let None: isize = 42; //~ ERROR let bindings cannot shadow variants
log(debug, None);
+ //~^ ERROR unresolved name `debug`
+ //~| ERROR unresolved name `log`
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Issue 34101: Circa 2016-06-05, `fn inline` below issued an
+// erroneous warning from the elaborate_drops pass about moving out of
+// a field in `Foo`, which has a destructor (and thus cannot have
+// content moved out of it). The reason that the warning is erroneous
+// in this case is that we are doing a *replace*, not a move, of the
+// content in question, and it is okay to replace fields within `Foo`.
+//
+// Another more subtle problem was that the elaborate_drops was
+// creating a separate drop flag for that internally replaced content,
+// even though the compiler should enforce an invariant that any drop
+// flag for such subcontent of `Foo` will always have the same value
+// as the drop flag for `Foo` itself.
+//
+// This test is structured in a funny way; we cannot test for emission
+// of the warning in question via the lint system, and therefore
+// `#![deny(warnings)]` does nothing to detect it.
+//
+// So instead we use `#[rustc_error]` and put the test into
+// `compile_fail`, where the emitted warning *will* be caught.
+
+#![feature(rustc_attrs)]
+
+struct Foo(String);
+
+impl Drop for Foo {
+ fn drop(&mut self) {}
+}
+
+fn inline() {
+ // (dummy variable so `f` gets assigned `var1` in MIR for both fn's)
+ let _s = ();
+ let mut f = Foo(String::from("foo"));
+ f.0 = String::from("bar");
+}
+
+fn outline() {
+ let _s = String::from("foo");
+ let mut f = Foo(_s);
+ f.0 = String::from("bar");
+}
+
+#[rustc_error]
+fn main() { //~ ERROR compilation successful
+ inline();
+ outline();
+}
enum u { c, d }
fn match_nested_vecs<'a, T>(l1: Option<&'a [T]>, l2: Result<&'a [T], ()>) -> &'static str {
- match (l1, l2) { //~ ERROR non-exhaustive patterns: `(Some([]), Err(_))` not covered
- (Some([]), Ok([])) => "Some(empty), Ok(empty)",
- (Some([_, ..]), Ok(_)) | (Some([_, ..]), Err(())) => "Some(non-empty), any",
- (None, Ok([])) | (None, Err(())) | (None, Ok([_])) => "None, Ok(less than one element)",
- (None, Ok([_, _, ..])) => "None, Ok(at least two elements)"
+ match (l1, l2) { //~ ERROR non-exhaustive patterns: `(Some(&[]), Err(_))` not covered
+ (Some(&[]), Ok(&[])) => "Some(empty), Ok(empty)",
+ (Some(&[_, ..]), Ok(_)) | (Some(&[_, ..]), Err(())) => "Some(non-empty), any",
+ (None, Ok(&[])) | (None, Err(())) | (None, Ok(&[_])) => "None, Ok(less than one element)",
+ (None, Ok(&[_, _, ..])) => "None, Ok(at least two elements)"
}
}
}
let vec = vec!(Some(42), None, Some(21));
let vec: &[Option<isize>] = &vec;
- match vec { //~ ERROR non-exhaustive patterns: `[]` not covered
- [Some(..), None, tail..] => {}
- [Some(..), Some(..), tail..] => {}
+ match *vec { //~ ERROR non-exhaustive patterns: `[]` not covered
+ [Some(..), None, ref tail..] => {}
+ [Some(..), Some(..), ref tail..] => {}
[None] => {}
}
let vec = vec!(1);
let vec: &[isize] = &vec;
- match vec {
- [_, tail..] => (),
+ match *vec {
+ [_, ref tail..] => (),
[] => ()
}
let vec = vec!(0.5f32);
let vec: &[f32] = &vec;
- match vec { //~ ERROR non-exhaustive patterns: `[_, _, _, _]` not covered
+ match *vec { //~ ERROR non-exhaustive patterns: `[_, _, _, _]` not covered
[0.1, 0.2, 0.3] => (),
[0.1, 0.2] => (),
[0.1] => (),
}
let vec = vec!(Some(42), None, Some(21));
let vec: &[Option<isize>] = &vec;
- match vec {
- [Some(..), None, tail..] => {}
- [Some(..), Some(..), tail..] => {}
- [None, None, tail..] => {}
- [None, Some(..), tail..] => {}
+ match *vec {
+ [Some(..), None, ref tail..] => {}
+ [Some(..), Some(..), ref tail..] => {}
+ [None, None, ref tail..] => {}
+ [None, Some(..), ref tail..] => {}
[Some(_)] => {}
[None] => {}
[] => {}
fn vectors_with_nested_enums() {
let x: &'static [Enum] = &[Enum::First, Enum::Second(false)];
- match x {
+ match *x {
//~^ ERROR non-exhaustive patterns: `[Second(true), Second(false)]` not covered
[] => (),
[_] => (),
[Enum::Second(true), Enum::First] => (),
[Enum::Second(true), Enum::Second(true)] => (),
[Enum::Second(false), _] => (),
- [_, _, tail.., _] => ()
+ [_, _, ref tail.., _] => ()
}
}
struct foo(usize);
fn main() {
- let (foo, _) = (2, 3); //~ ERROR `foo` cannot be named the same as
+ let (foo, _) = (2, 3); //~ ERROR let bindings cannot shadow structs
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(slice_patterns)]
+
+fn slice_pat(x: &[u8]) {
+ // OLD!
+ match x {
+ [a, b..] => {}
+ //~^ ERROR expected an array or slice, found `&[u8]`
+ //~| HELP the semantics of slice patterns changed recently; see issue #23121
+ }
+}
+
+fn main() {}
fn main() {
match 10 {
- <S as Tr>::A::f::<u8> => {} //~ ERROR `f` is not an associated const
+ <S as Tr>::A::f::<u8> => {} //~ ERROR associated items in match patterns must be constants
0 ... <S as Tr>::A::f::<u8> => {} //~ ERROR only char and numeric types are allowed in range
}
}
// instead of spitting out a custom error about some identifier collisions
// (we should allow shadowing)
match 4 {
- a => {} //~ ERROR static variables cannot be referenced in a pattern
+ a => {} //~ ERROR match bindings cannot shadow statics
_ => {}
}
}
match (Foo { bar: Some(Direction::North), baz: NewBool(true) }) {
Foo { bar: None, baz: NewBool(true) } => (),
STATIC_MUT_FOO => (),
- //~^ ERROR static variables cannot be referenced in a pattern
+ //~^ ERROR match bindings cannot shadow statics
Foo { bar: Some(Direction::South), .. } => (),
Foo { bar: Some(EAST), .. } => (),
Foo { bar: Some(Direction::North), baz: NewBool(true) } => (),
trait A {
}
-impl A for a { //~ ERROR type name `a` is undefined or not in scope
+impl A for a { //~ ERROR expected type, found module
}
fn main() {
fn main() {
let ps = syntax::parse::ParseSess::new();
- let mut feature_gated_cfgs = vec![];
+ let (mut feature_gated_cfgs, mut loader) = (vec![], syntax::ext::base::DummyMacroLoader);
let mut cx = syntax::ext::base::ExtCtxt::new(
&ps, vec![],
syntax::ext::expand::ExpansionConfig::default("qquote".to_string()),
- &mut feature_gated_cfgs);
+ &mut feature_gated_cfgs, &mut loader);
cx.bt_push(syntax::codemap::ExpnInfo {
call_site: DUMMY_SP,
callee: syntax::codemap::NameAndSpan {
struct Pass;
impl transform::Pass for Pass {}
+
impl<'tcx> MirPass<'tcx> for Pass {
fn run_pass<'a>(&mut self, _: TyCtxt<'a, 'tcx, 'tcx>,
_: MirSource, mir: &mut Mir<'tcx>) {
fn main() {
let ps = syntax::parse::ParseSess::new();
- let mut feature_gated_cfgs = vec![];
+ let (mut feature_gated_cfgs, mut loader) = (vec![], syntax::ext::base::DummyMacroLoader);
let mut cx = syntax::ext::base::ExtCtxt::new(
&ps, vec![],
syntax::ext::expand::ExpansionConfig::default("qquote".to_string()),
- &mut feature_gated_cfgs);
+ &mut feature_gated_cfgs, &mut loader);
cx.bt_push(syntax::codemap::ExpnInfo {
call_site: DUMMY_SP,
callee: syntax::codemap::NameAndSpan {
let mut result = vec!();
loop {
- x = match x {
- [1, n, 3, rest..] => {
+ x = match *x {
+ [1, n, 3, ref rest..] => {
result.push(n);
rest
}
- [n, rest..] => {
+ [n, ref rest..] => {
result.push(n);
rest
}
}
fn count_members(v: &[usize]) -> usize {
- match v {
+ match *v {
[] => 0,
[_] => 1,
- [_x, xs..] => 1 + count_members(xs)
+ [_, ref xs..] => 1 + count_members(xs)
}
}
// except according to those terms.
-#![feature(slice_patterns)]
+#![feature(slice_patterns, rustc_attrs)]
+#[rustc_mir]
fn main() {
let x: (isize, &[isize]) = (2, &[1, 2]);
assert_eq!(match x {
- (0, [_, _]) => 0,
+ (0, &[_, _]) => 0,
(1, _) => 1,
- (2, [_, _]) => 2,
+ (2, &[_, _]) => 2,
(2, _) => 3,
_ => 4
}, 2);
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ let &ref a = &[0i32] as &[_];
+ assert_eq!(a, &[0i32] as &[_]);
+
+ let &ref a = "hello";
+ assert_eq!(a, "hello");
+
+ match "foo" {
+ "fool" => unreachable!(),
+ "foo" => {},
+ ref _x => unreachable!()
+ }
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Make sure several unnamed function arguments don't conflict with each other
+
+trait Tr {
+ fn f(u8, u8) {}
+}
+
+fn main() {
+}
#![feature(advanced_slice_patterns)]
#![feature(slice_patterns)]
+#![feature(rustc_attrs)]
use std::ops::Add;
[a, b, b, a]
}
+#[rustc_mir]
fn main() {
assert_eq!(foo([1, 2, 3]), (1, 3, 6));
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ let data: &'static str = "Hello, World!";
+ match data {
+ &ref xs => {
+ assert_eq!(data, xs);
+ }
+ }
+}
#![feature(advanced_slice_patterns)]
#![feature(slice_patterns)]
+#![feature(rustc_attrs)]
+#[rustc_mir]
fn match_vecs<'a, T>(l1: &'a [T], l2: &'a [T]) -> &'static str {
match (l1, l2) {
- ([], []) => "both empty",
- ([], [..]) | ([..], []) => "one empty",
- ([..], [..]) => "both non-empty"
+ (&[], &[]) => "both empty",
+ (&[], &[..]) | (&[..], &[]) => "one empty",
+ (&[..], &[..]) => "both non-empty"
}
}
+#[rustc_mir]
fn match_vecs_cons<'a, T>(l1: &'a [T], l2: &'a [T]) -> &'static str {
match (l1, l2) {
- ([], []) => "both empty",
- ([], [_, ..]) | ([_, ..], []) => "one empty",
- ([_, ..], [_, ..]) => "both non-empty"
+ (&[], &[]) => "both empty",
+ (&[], &[_, ..]) | (&[_, ..], &[]) => "one empty",
+ (&[_, ..], &[_, ..]) => "both non-empty"
}
}
+#[rustc_mir]
fn match_vecs_snoc<'a, T>(l1: &'a [T], l2: &'a [T]) -> &'static str {
match (l1, l2) {
- ([], []) => "both empty",
- ([], [.., _]) | ([.., _], []) => "one empty",
- ([.., _], [.., _]) => "both non-empty"
+ (&[], &[]) => "both empty",
+ (&[], &[.., _]) | (&[.., _], &[]) => "one empty",
+ (&[.., _], &[.., _]) => "both non-empty"
}
}
+#[rustc_mir]
fn match_nested_vecs_cons<'a, T>(l1: Option<&'a [T]>, l2: Result<&'a [T], ()>) -> &'static str {
match (l1, l2) {
- (Some([]), Ok([])) => "Some(empty), Ok(empty)",
- (Some([_, ..]), Ok(_)) | (Some([_, ..]), Err(())) => "Some(non-empty), any",
- (None, Ok([])) | (None, Err(())) | (None, Ok([_])) => "None, Ok(less than one element)",
- (None, Ok([_, _, ..])) => "None, Ok(at least two elements)",
+ (Some(&[]), Ok(&[])) => "Some(empty), Ok(empty)",
+ (Some(&[_, ..]), Ok(_)) | (Some(&[_, ..]), Err(())) => "Some(non-empty), any",
+ (None, Ok(&[])) | (None, Err(())) | (None, Ok(&[_])) => "None, Ok(less than one element)",
+ (None, Ok(&[_, _, ..])) => "None, Ok(at least two elements)",
_ => "other"
}
}
+#[rustc_mir]
fn match_nested_vecs_snoc<'a, T>(l1: Option<&'a [T]>, l2: Result<&'a [T], ()>) -> &'static str {
match (l1, l2) {
- (Some([]), Ok([])) => "Some(empty), Ok(empty)",
- (Some([.., _]), Ok(_)) | (Some([.., _]), Err(())) => "Some(non-empty), any",
- (None, Ok([])) | (None, Err(())) | (None, Ok([_])) => "None, Ok(less than one element)",
- (None, Ok([.., _, _])) => "None, Ok(at least two elements)",
+ (Some(&[]), Ok(&[])) => "Some(empty), Ok(empty)",
+ (Some(&[.., _]), Ok(_)) | (Some(&[.., _]), Err(())) => "Some(non-empty), any",
+ (None, Ok(&[])) | (None, Err(())) | (None, Ok(&[_])) => "None, Ok(less than one element)",
+ (None, Ok(&[.., _, _])) => "None, Ok(at least two elements)",
_ => "other"
}
}
#![feature(advanced_slice_patterns)]
#![feature(slice_patterns)]
+#![feature(rustc_attrs)]
+use std::fmt::Debug;
+
+#[rustc_mir(graphviz="mir.gv")]
fn foldl<T, U, F>(values: &[T],
initial: U,
mut function: F)
-> U where
- U: Clone,
+ U: Clone+Debug, T:Debug,
F: FnMut(U, &T) -> U,
-{
- match values {
- [ref head, tail..] =>
+{ match values {
+ &[ref head, ref tail..] =>
foldl(tail, function(initial, head), function),
- [] => initial.clone()
+ &[] => {
+ // FIXME: call guards
+ let res = initial.clone(); res
+ }
}
}
+#[rustc_mir]
fn foldr<T, U, F>(values: &[T],
initial: U,
mut function: F)
F: FnMut(&T, U) -> U,
{
match values {
- [head.., ref tail] =>
+ &[ref head.., ref tail] =>
foldr(head, function(tail, initial), function),
- [] => initial.clone()
+ &[] => {
+ // FIXME: call guards
+ let res = initial.clone(); res
+ }
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(slice_patterns)]
+#![feature(slice_patterns, rustc_attrs)]
+#[rustc_mir]
pub fn main() {
let x = &[1, 2, 3, 4, 5];
let x: &[isize] = &[1, 2, 3, 4, 5];
if !x.is_empty() {
let el = match x {
- [1, ref tail..] => &tail[0],
+ &[1, ref tail..] => &tail[0],
_ => unreachable!()
};
println!("{}", *el);
#![feature(advanced_slice_patterns)]
#![feature(slice_patterns)]
+#![feature(rustc_attrs)]
+#[rustc_mir]
fn a() {
let x = [1];
match x {
}
}
+#[rustc_mir]
fn b() {
let x = [1, 2, 3];
match x {
}
}
+
+#[rustc_mir]
+fn b_slice() {
+ let x : &[_] = &[1, 2, 3];
+ match x {
+ &[a, b, ref c..] => {
+ assert_eq!(a, 1);
+ assert_eq!(b, 2);
+ let expected: &[_] = &[3];
+ assert_eq!(c, expected);
+ }
+ _ => unreachable!()
+ }
+ match x {
+ &[ref a.., b, c] => {
+ let expected: &[_] = &[1];
+ assert_eq!(a, expected);
+ assert_eq!(b, 2);
+ assert_eq!(c, 3);
+ }
+ _ => unreachable!()
+ }
+ match x {
+ &[a, ref b.., c] => {
+ assert_eq!(a, 1);
+ let expected: &[_] = &[2];
+ assert_eq!(b, expected);
+ assert_eq!(c, 3);
+ }
+ _ => unreachable!()
+ }
+ match x {
+ &[a, b, c] => {
+ assert_eq!(a, 1);
+ assert_eq!(b, 2);
+ assert_eq!(c, 3);
+ }
+ _ => unreachable!()
+ }
+}
+
+#[rustc_mir]
fn c() {
let x = [1];
match x {
}
}
+#[rustc_mir]
fn d() {
let x = [1, 2, 3];
let branch = match x {
assert_eq!(branch, 1);
}
+#[rustc_mir]
fn e() {
let x: &[isize] = &[1, 2, 3];
- match x {
- [1, 2] => (),
- [..] => ()
- }
+ let a = match *x {
+ [1, 2] => 0,
+ [..] => 1,
+ };
+
+ assert_eq!(a, 1);
+
+ let b = match *x {
+ [2, ..] => 0,
+ [1, 2, ..] => 1,
+ [_] => 2,
+ [..] => 3
+ };
+
+ assert_eq!(b, 1);
+
+
+ let c = match *x {
+ [_, _, _, _, ..] => 0,
+ [1, 2, ..] => 1,
+ [_] => 2,
+ [..] => 3
+ };
+
+ assert_eq!(c, 1);
}
pub fn main() {
a();
b();
+ b_slice();
c();
d();
e();
#![feature(slice_patterns)]
+#![feature(rustc_attrs)]
struct Foo {
- string: String
+ string: &'static str
}
+#[rustc_mir]
pub fn main() {
let x = [
- Foo { string: "foo".to_string() },
- Foo { string: "bar".to_string() },
- Foo { string: "baz".to_string() }
+ Foo { string: "foo" },
+ Foo { string: "bar" },
+ Foo { string: "baz" }
];
match x {
- [ref first, tail..] => {
- assert_eq!(first.string, "foo".to_string());
+ [ref first, ref tail..] => {
+ assert_eq!(first.string, "foo");
assert_eq!(tail.len(), 2);
- assert_eq!(tail[0].string, "bar".to_string());
- assert_eq!(tail[1].string, "baz".to_string());
+ assert_eq!(tail[0].string, "bar");
+ assert_eq!(tail[1].string, "baz");
- match tail {
- [Foo { .. }, _, Foo { .. }, _tail..] => {
+ match *(tail as &[_]) {
+ [Foo { .. }, _, Foo { .. }, ref _tail..] => {
unreachable!();
}
[Foo { string: ref a }, Foo { string: ref b }] => {
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-
+#![feature(rustc_attrs)]
#![feature(slice_patterns)]
+#[rustc_mir]
fn main() {
let x = [(), ()];
// The subslice used to go out of bounds for zero-sized array items, check that this doesn't
// happen anymore
match x {
- [_, y..] => assert_eq!(&x[1] as *const (), &y[0] as *const ())
+ [_, ref y..] => assert_eq!(&x[1] as *const (), &y[0] as *const ())
}
}