% Closures
-So far, we've made lots of functions in Rust, but we've given them all names.
-Rust also allows us to create anonymous functions. Rust's anonymous
-functions are called *closures*. By themselves, closures aren't all that
-interesting, but when you combine them with functions that take closures as
-arguments, really powerful things are possible.
+Rust not only has named functions, but anonymous functions as well. Anonymous
+functions that have an associated environment are called 'closures', because they
+close over an environment. Rust has a really great implementation of them, as
+we'll see.
-Let's make a closure:
+# Syntax
-```{rust}
-let add_one = |x| { 1 + x };
+Closures look like this:
-println!("The sum of 5 plus 1 is {}.", add_one(5));
+```rust
+let plus_one = |x: i32| x + 1;
+
+assert_eq!(2, plus_one(1));
+```
+
+We create a binding, `plus_one`, and assign it to a closure. The closure's
+arguments go between the pipes (`|`), and the body is an expression, in this
+case, `x + 1`. Remember that `{ }` is an expression, so we can have multi-line
+closures too:
+
+```rust
+let plus_two = |x| {
+ let mut result: i32 = x;
+
+ result += 1;
+ result += 1;
+
+ result
+};
+
+assert_eq!(4, plus_two(2));
+```
+
+You'll notice a few things about closures that are a bit different from regular
+functions defined with `fn`. The first is that we did not need to
+annotate the types of arguments the closure takes or the values it returns. We
+can:
+
+```rust
+let plus_one = |x: i32| -> i32 { x + 1 };
+
+assert_eq!(2, plus_one(1));
+```
+
+But we don't have to. Why is this? Basically, it was chosen for ergonomic
+reasons. While specifying the full type for named functions is helpful with
+things like documentation and type inference, the types of closures are rarely
+documented since they're anonymous, and they don't cause the kinds of
+error-at-a-distance problems that inferring named function types can.
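
Inference here works much like it does for `let` bindings: the first call pins
down the closure's types. A small sketch of this (the `double` closure is ours,
not from the text above):

```rust
fn main() {
    // No type annotations: the argument type is inferred from
    // the first call below.
    let double = |x| x * 2;

    let n = double(4); // `x` is inferred to be an integer here
    assert_eq!(8, n);

    // double("hi"); // this would now be a type error: the call
    //               // above fixed the closure's argument type
}
```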
+
+The second is that the syntax is similar, but a bit different. I've added spaces
+here to make them look a little closer:
+
+```rust
+fn plus_one_v1 ( x: i32 ) -> i32 { x + 1 }
+let plus_one_v2 = |x: i32 | -> i32 { x + 1 };
+let plus_one_v3 = |x: i32 | x + 1 ;
```
-We create a closure using the `|...| { ... }` syntax, and then we create a
-binding so we can use it later. Note that we call the function using the
-binding name and two parentheses, just like we would for a named function.
+Small differences, but the forms are closely parallel: pipes replace the
+parentheses around the arguments, and the return type and braces are optional.
-Let's compare syntax. The two are pretty close:
+# Closures and their environment
-```{rust}
-let add_one = |x: i32| -> i32 { 1 + x };
-fn add_one (x: i32) -> i32 { 1 + x }
+Closures are called such because they 'close over their environment.' It
+looks like this:
+
+```rust
+let num = 5;
+let plus_num = |x: i32| x + num;
+
+assert_eq!(10, plus_num(5));
```
-As you may have noticed, closures infer their argument and return types, so you
-don't need to declare one. This is different from named functions, which
-default to returning unit (`()`).
+This closure, `plus_num`, refers to a `let` binding in its scope: `num`. More
+specifically, it borrows the binding. If we do something that would conflict
+with that binding, we get an error. Like this one:
+
+```rust,ignore
+let mut num = 5;
+let plus_num = |x: i32| x + num;
-There's one big difference between a closure and named functions, and it's in
-the name: a closure "closes over its environment." What does that mean? It means
-this:
+let y = &mut num;
+```
-```{rust}
+Which errors with:
+
+```text
+error: cannot borrow `num` as mutable because it is also borrowed as immutable
+ let y = &mut num;
+ ^~~
+note: previous borrow of `num` occurs here due to use in closure; the immutable
+ borrow prevents subsequent moves or mutable borrows of `num` until the borrow
+ ends
+ let plus_num = |x| x + num;
+ ^~~~~~~~~~~
+note: previous borrow ends here
fn main() {
- let x: i32 = 5;
+ let mut num = 5;
+ let plus_num = |x| x + num;
+
+ let y = &mut num;
+}
+^
+```
+
+A verbose yet helpful error message! As it says, we can't take a mutable borrow
+on `num` because the closure is already borrowing it. If we let the closure go
+out of scope, we can:
+
+```rust
+let mut num = 5;
+{
+ let plus_num = |x: i32| x + num;
+
+} // plus_num goes out of scope, borrow of num ends
- let printer = || { println!("x is: {}", x); };
+let y = &mut num;
+```
+
+If your closure requires it, however, Rust will take ownership and move
+the environment instead:
+
+```rust,ignore
+let nums = vec![1, 2, 3];
+
+let takes_nums = || nums;
+
+println!("{:?}", nums);
+```
+
+This gives us:
+
+```text
+note: `nums` moved into closure environment here because it has type
+ `[closure(()) -> collections::vec::Vec<i32>]`, which is non-copyable
+let takes_nums = || nums;
+ ^~~~~~~
+```
+
+`Vec<T>` has ownership over its contents, and therefore, when we refer to it
+in our closure, we have to take ownership of `nums`. It's the same as if we'd
+passed `nums` to a function that took ownership of it.
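
That analogy can be sketched directly; `take_ownership` below is a hypothetical
helper, not part of the text:

```rust
// Takes ownership of the vector, just as the closure above moves
// `nums` into its environment.
fn take_ownership(v: Vec<i32>) -> i32 {
    v.iter().fold(0, |sum, x| sum + x)
}

fn main() {
    let nums = vec![1, 2, 3];
    let total = take_ownership(nums);
    // `nums` has been moved and can no longer be used here.
    assert_eq!(6, total);
}
```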
+
+## `move` closures
+
+We can force our closure to take ownership of its environment with the `move`
+keyword:
- printer(); // prints "x is: 5"
+```rust
+let num = 5;
+
+let owns_num = move |x: i32| x + num;
+```
+
+Now, even though the keyword is `move`, the variables follow normal move semantics.
+In this case, `5` implements `Copy`, and so `owns_num` takes ownership of a copy
+of `num`. So what's the difference?
+
+```rust
+let mut num = 5;
+
+{
+ let mut add_num = |x: i32| num += x;
+
+ add_num(5);
}
+
+assert_eq!(10, num);
```
-The `||` syntax means this is an anonymous closure that takes no arguments.
-Without it, we'd just have a block of code in `{}`s.
+So in this case, our closure took a mutable reference to `num`, and then when
+we called `add_num`, it mutated the underlying value, as we'd expect. We also
+needed to declare `add_num` itself as `mut`, because we're mutating its
+environment.
-In other words, a closure has access to variables in the scope where it's
-defined. The closure borrows any variables it uses, so this will error:
-```{rust,ignore}
-fn main() {
- let mut x: i32 = 5;
+If we change to a `move` closure, it's different:
+
+```rust
+let mut num = 5;
- let printer = || { println!("x is: {}", x); };
+{
+ let mut add_num = move |x: i32| num += x;
- x = 6; // error: cannot assign to `x` because it is borrowed
+ add_num(5);
}
+
+assert_eq!(5, num);
```
-## Moving closures
+We only get `5`. Rather than taking a mutable borrow of `num`, we took
+ownership of a copy.
+
+Another way to think about `move` closures: they give a closure its own stack
+frame. Without `move`, a closure may be tied to the stack frame that created
+it, while a `move` closure is self-contained. This means that you cannot
+generally return a non-`move` closure from a function, for example.
+
+But before we talk about taking and returning closures, we should talk some more
+about the way that closures are implemented. As a systems language, Rust gives
+you tons of control over what your code does, and closures are no different.
-Rust has a second type of closure, called a *moving closure*. Moving
-closures are indicated using the `move` keyword (e.g., `move || x *
-x`). The difference between a moving closure and an ordinary closure
-is that a moving closure always takes ownership of all variables that
-it uses. Ordinary closures, in contrast, just create a reference into
-the enclosing stack frame. Moving closures are most useful with Rust's
-concurrency features, and so we'll just leave it at this for
-now. We'll talk about them more in the "Concurrency" chapter of the book.
+# Closure implementation
-## Accepting closures as arguments
+Rust's implementation of closures is a bit different from that of other
+languages: they are effectively syntax sugar for traits. You'll want to have read
+the [traits chapter][traits] before this one, as well as the chapter on [static
+and dynamic dispatch][dispatch], which talks about trait objects.
-Closures are most useful as an argument to another function. Here's an example:
+[traits]: traits.html
+[dispatch]: static-and-dynamic-dispatch.html
-```{rust}
-fn twice<F: Fn(i32) -> i32>(x: i32, f: F) -> i32 {
- f(x) + f(x)
+Got all that? Good.
+
+The key to understanding how closures work under the hood is something a bit
+strange: Using `()` to call a function, like `foo()`, is an overloadable
+operator. From this, everything else clicks into place. In Rust, we use the
+trait system to overload operators. Calling functions is no different. We have
+three separate traits to overload with:
+
+```rust
+# mod foo {
+pub trait Fn<Args> : FnMut<Args> {
+ extern "rust-call" fn call(&self, args: Args) -> Self::Output;
}
-fn main() {
- let square = |x: i32| { x * x };
+pub trait FnMut<Args> : FnOnce<Args> {
+ extern "rust-call" fn call_mut(&mut self, args: Args) -> Self::Output;
+}
+
+pub trait FnOnce<Args> {
+ type Output;
- twice(5, square); // evaluates to 50
+ extern "rust-call" fn call_once(self, args: Args) -> Self::Output;
}
+# }
```
-Let's break the example down, starting with `main`:
+You'll notice a few differences between these traits, but a big one is `self`:
+`Fn` takes `&self`, `FnMut` takes `&mut self`, and `FnOnce` takes `self`. This
+covers all three kinds of `self` via the usual method call syntax. But we've
+split them up into three traits, rather than having a single one. This gives us
+a large amount of control over what kinds of closures we can take.
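
As one sketch of that control, a function bounded by `FnOnce` can accept a
closure that consumes its environment, which an `Fn` bound would reject (the
names here are illustrative):

```rust
// `call_once` only needs `FnOnce`, so it accepts closures that
// consume values from their environment.
fn call_once<F>(f: F) -> String
    where F: FnOnce() -> String {
    f()
}

fn main() {
    let greeting = String::from("hello");
    // Moves `greeting` out of the environment when called, so
    // this closure is `FnOnce` but not `Fn` or `FnMut`.
    let consume = move || greeting;
    assert_eq!("hello", call_once(consume));
}
```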
-```{rust}
-let square = |x: i32| { x * x };
-```
+The `|| {}` syntax for closures is sugar for these three traits. Rust will
+generate a struct for the environment, `impl` the appropriate trait, and then
+use it.
+
+# Taking closures as arguments
+
+Now that we know that closures are traits, we already know how to accept and
+return closures: just like any other trait!
+
+This also means that we can choose static vs dynamic dispatch as well. First,
+let's write a function which takes something callable, calls it, and returns
+the result:
+
+```rust
+fn call_with_one<F>(some_closure: F) -> i32
+ where F : Fn(i32) -> i32 {
+
+ some_closure(1)
+}
-We've seen this before. We make a closure that takes an integer, and returns
-its square.
+let answer = call_with_one(|x| x + 2);
-```{rust}
-# fn twice<F: Fn(i32) -> i32>(x: i32, f: F) -> i32 { f(x) + f(x) }
-# let square = |x: i32| { x * x };
-twice(5, square); // evaluates to 50
+assert_eq!(3, answer);
```
-This line is more interesting. Here, we call our function, `twice`, and we pass
-it two arguments: an integer, `5`, and our closure, `square`. This is just like
-passing any other two variable bindings to a function, but if you've never
-worked with closures before, it can seem a little complex. Just think: "I'm
-passing two variables: one is an i32, and one is a function."
+We pass our closure, `|x| x + 2`, to `call_with_one`. It does just what it
+says: it calls the closure, giving it `1` as an argument.
-Next, let's look at how `twice` is defined:
+Let's examine the signature of `call_with_one` in more depth:
-```{rust,ignore}
-fn twice<F: Fn(i32) -> i32>(x: i32, f: F) -> i32 {
+```rust
+fn call_with_one<F>(some_closure: F) -> i32
+# where F : Fn(i32) -> i32 {
+# some_closure(1) }
```
-`twice` takes two arguments, `x` and `f`. That's why we called it with two
-arguments. `x` is an `i32`, we've done that a ton of times. `f` is a function,
-though, and that function takes an `i32` and returns an `i32`. This is
-what the requirement `Fn(i32) -> i32` for the type parameter `F` says.
-Now `F` represents *any* function that takes an `i32` and returns an `i32`.
+We take one parameter, and it has the type `F`. We also return an `i32`. This
+part isn't interesting. The next part is:
-This is the most complicated function signature we've seen yet! Give it a read
-a few times until you can see how it works. It takes a teeny bit of practice, and
-then it's easy. The good news is that this kind of passing a closure around
-can be very efficient. With all the type information available at compile-time
-the compiler can do wonders.
+```rust
+# fn call_with_one<F>(some_closure: F) -> i32
+ where F : Fn(i32) -> i32 {
+# some_closure(1) }
+```
+
+Because `Fn` is a trait, we can bound our generic with it. In this case, our closure
+takes an `i32` as an argument and returns an `i32`, and so the generic bound we use
+is `Fn(i32) -> i32`.
-Finally, `twice` returns an `i32` as well.
+There's one other key point here: because we're bounding a generic with a
+trait, this will get monomorphized, and therefore, we'll be doing static
+dispatch into the closure. That's pretty neat. In many languages, closures are
+inherently heap allocated, and will always involve dynamic dispatch. In Rust,
+we can stack allocate our closure environment, and statically dispatch the
+call. This happens quite often with iterators and their adapters, which often
+take closures as arguments.
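
For instance, the closure passed to an iterator adapter like `map` is
dispatched statically; nothing in this sketch forces a heap allocation:

```rust
fn main() {
    let nums = vec![1, 2, 3];

    // `map` is generic over its closure argument, so this call
    // is monomorphized and statically dispatched.
    let doubled: Vec<i32> = nums.iter().map(|x| x * 2).collect();

    assert_eq!(vec![2, 4, 6], doubled);
}
```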
-Okay, let's look at the body of `twice`:
+Of course, if we want dynamic dispatch, we can get that too. A trait object
+handles this case, as usual:
-```{rust}
-fn twice<F: Fn(i32) -> i32>(x: i32, f: F) -> i32 {
- f(x) + f(x)
+```rust
+fn call_with_one(some_closure: &Fn(i32) -> i32) -> i32 {
+ some_closure(1)
}
+
+let answer = call_with_one(&|x| x + 2);
+
+assert_eq!(3, answer);
```
-Since our closure is named `f`, we can call it just like we called our closures
-before, and we pass in our `x` argument to each one, hence the name `twice`.
+Now we take a trait object, a `&Fn`. And we have to make a reference
+to our closure when we pass it to `call_with_one`, so we use `&||`.
-If you do the math, `(5 * 5) + (5 * 5) == 50`, so that's the output we get.
+# Returning closures
-Play around with this concept until you're comfortable with it. Rust's standard
-library uses lots of closures where appropriate, so you'll be using
-this technique a lot.
+It’s very common for functional-style code to return closures in various
+situations. If you try to return a closure, you may run into an error. At
+first, it may seem strange, but we'll figure it out. Here's how you'd probably
+try to return a closure from a function:
-If we didn't want to give `square` a name, we could just define it inline.
-This example is the same as the previous one:
+```rust,ignore
+fn factory() -> (Fn(i32) -> Vec<i32>) {
+ let vec = vec![1, 2, 3];
-```{rust}
-fn twice<F: Fn(i32) -> i32>(x: i32, f: F) -> i32 {
- f(x) + f(x)
+ |n| vec.push(n)
}
-fn main() {
- twice(5, |x: i32| { x * x }); // evaluates to 50
-}
+let f = factory();
+
+let answer = f(4);
+assert_eq!(vec![1, 2, 3, 4], answer);
```
-A named function's name can be used wherever you'd use a closure. Another
-way of writing the previous example:
+This gives us these long, related errors:
+
+```text
+error: the trait `core::marker::Sized` is not implemented for the type
+`core::ops::Fn(i32) -> collections::vec::Vec<i32>` [E0277]
+f = factory();
+^
+note: `core::ops::Fn(i32) -> collections::vec::Vec<i32>` does not have a
+constant size known at compile-time
+f = factory();
+^
+error: the trait `core::marker::Sized` is not implemented for the type
+`core::ops::Fn(i32) -> collections::vec::Vec<i32>` [E0277]
+factory() -> (Fn(i32) -> Vec<i32>) {
+ ^~~~~~~~~~~~~~~~~~~~~
+note: `core::ops::Fn(i32) -> collections::vec::Vec<i32>` does not have a constant size known at compile-time
+factory() -> (Fn(i32) -> Vec<i32>) {
+ ^~~~~~~~~~~~~~~~~~~~~
-```{rust}
-fn twice<F: Fn(i32) -> i32>(x: i32, f: F) -> i32 {
- f(x) + f(x)
-}
+```
-fn square(x: i32) -> i32 { x * x }
+In order to return something from a function, Rust needs to know what
+size the return type is. But since `Fn` is a trait, it could be various
+things of various sizes: many different types can implement `Fn`. An easy
+way to give something a size is to take a reference to it, as references
+have a known size. So we'd write this:
-fn main() {
- twice(5, square); // evaluates to 50
+```rust,ignore
+fn factory() -> &(Fn(i32) -> Vec<i32>) {
+ let vec = vec![1, 2, 3];
+
+ |n| vec.push(n)
}
+
+let f = factory();
+
+let answer = f(4);
+assert_eq!(vec![1, 2, 3, 4], answer);
+```
+
+But we get another error:
+
+```text
+error: missing lifetime specifier [E0106]
+fn factory() -> &(Fn(i32) -> i32) {
+ ^~~~~~~~~~~~~~~~~
```
-Doing this is not particularly common, but it's useful every once in a while.
+Right. Because we have a reference, we need to give it a lifetime. But
+our `factory()` function takes no arguments, so elision doesn't kick in
+here. What lifetime can we choose? `'static`:
-Before we move on, let us look at a function that accepts two closures.
+```rust,ignore
+fn factory() -> &'static (Fn(i32) -> i32) {
+ let num = 5;
-```{rust}
-fn compose<F, G>(x: i32, f: F, g: G) -> i32
- where F: Fn(i32) -> i32, G: Fn(i32) -> i32 {
- g(f(x))
+ |x| x + num
}
-fn main() {
- compose(5,
- |n: i32| { n + 42 },
- |n: i32| { n * 2 }); // evaluates to 94
+let f = factory();
+
+let answer = f(1);
+assert_eq!(6, answer);
+```
+
+But we get another error:
+
+```text
+error: mismatched types:
+ expected `&'static core::ops::Fn(i32) -> i32`,
+ found `[closure <anon>:7:9: 7:20]`
+(expected &-ptr,
+ found closure) [E0308]
+ |x| x + num
+ ^~~~~~~~~~~
+
+```
+
+This error is letting us know that we don't have a `&'static Fn(i32) -> i32`,
+we have a `[closure <anon>:7:9: 7:20]`. Wait, what?
+
+Because each closure generates its own environment `struct` and implementation
+of `Fn` and friends, these types are anonymous. They exist solely for
+this closure. So Rust shows them as `closure <anon>`, rather than some
+autogenerated name.
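
One consequence of each closure having its own anonymous type: two closures
with identical signatures are still different types, though a generic function
accepts both, with a separate copy monomorphized for each. A sketch (the names
are ours):

```rust
// One generic function can accept many distinct closure types.
fn apply<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(x)
}

fn main() {
    let add_one = |x| x + 1;
    let times_two = |x| x * 2;

    // Identical signatures, but two different anonymous types;
    // `apply` is instantiated once for each.
    assert_eq!(4, apply(add_one, 3));
    assert_eq!(6, apply(times_two, 3));
}
```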
+
+But why doesn't our closure implement `&'static Fn`? Well, as we discussed before,
+closures borrow their environment. And in this case, our environment is based
+on a stack-allocated `5`, the `num` variable binding. So the borrow has a lifetime
+of the stack frame. So if we returned this closure, the function call would be
+over, the stack frame would go away, and our closure would be capturing an
+environment of garbage memory!
+
+So what to do? This _almost_ works:
+
+```rust,ignore
+fn factory() -> Box<Fn(i32) -> i32> {
+ let num = 5;
+
+ Box::new(|x| x + num)
}
+# fn main() {
+let f = factory();
+
+let answer = f(1);
+assert_eq!(6, answer);
+# }
```
-You might ask yourself: why do we need to introduce two type
-parameters `F` and `G` here? Evidently, both `f` and `g` have the
-same signature: `Fn(i32) -> i32`.
+We use a trait object, by `Box`ing up the `Fn`. There's just one last problem:
-That is because in Rust each closure has its own unique type.
-So, not only do closures with different signatures have different types,
-but different closures with the *same* signature have *different*
-types, as well!
+```text
+error: `num` does not live long enough
+Box::new(|x| x + num)
+ ^~~~~~~~~~~
+```
+
+We still have a reference to the parent stack frame. With one last fix, we can
+make this work:
-You can think of it this way: the behavior of a closure is part of its
-type. Therefore, using a single type parameter for both closures
-will accept the first of them, rejecting the second. The distinct
-type of the second closure does not allow it to be represented by the
-same type parameter as that of the first. We acknowledge this, and
-use two different type parameters `F` and `G`.
+```rust
+fn factory() -> Box<Fn(i32) -> i32> {
+ let num = 5;
-This also introduces the `where` clause, which lets us describe type
-parameters in a more flexible manner.
+ Box::new(move |x| x + num)
+}
+# fn main() {
+let f = factory();
+
+let answer = f(1);
+assert_eq!(6, answer);
+# }
+```
-That's all you need to get the hang of closures! Closures are a little bit
-strange at first, but once you're used to them, you'll miss them
-in other languages. Passing functions to other functions is
-incredibly powerful, as you will see in the following chapter about iterators.
+By making the inner closure a `move Fn`, we create a new stack frame for our
+closure. By `Box`ing it up, we've given it a known size, allowing it to
+escape our stack frame.
# Upgrading failures to panics
In certain circumstances, even though a function may fail, we may want to treat
-it as a panic instead. For example, `io::stdin().read_line()` returns an
-`IoResult<String>`, a form of `Result`, when there is an error reading the
-line. This allows us to handle and possibly recover from this sort of error.
+it as a panic instead. For example, `io::stdin().read_line(&mut buffer)`
+returns a `Result<usize>`, which is an error when there is a problem reading
+the line. This allows us to handle and possibly recover from the error.
If we don't want to handle this error, and would rather just abort the program,
we can use the `unwrap()` method:
```{rust,ignore}
-io::stdin().read_line().unwrap();
+io::stdin().read_line(&mut buffer).unwrap();
```
`unwrap()` will `panic!` if the `Option` is `None`. This basically says "Give
There's another way of doing this that's a bit nicer than `unwrap()`:
```{rust,ignore}
-let input = io::stdin().read_line()
+let mut buffer = String::new();
+let input = io::stdin().read_line(&mut buffer)
.ok()
.expect("Failed to read line");
```
-`ok()` converts the `IoResult` into an `Option`, and `expect()` does the same
+`ok()` converts the `Result` into an `Option`, and `expect()` does the same
thing as `unwrap()`, but takes a message. This message is passed along to the
underlying `panic!`, providing a better error message if the code errors.
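
The same pattern works with any `Result`; a small sketch using `parse` (this
example is ours, not from the text above):

```rust
fn main() {
    // `parse` returns a `Result`; `ok()` converts it into an
    // `Option`, and `expect()` unwraps it with our message.
    let n: i32 = "42".parse()
                     .ok()
                     .expect("Failed to parse number");
    assert_eq!(42, n);
}
```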
#[unstable(feature = "alloc")]
pub fn strong_count<T>(this: &Arc<T>) -> usize { this.inner().strong.load(SeqCst) }
+
+/// Try accessing a mutable reference to the contents behind a unique `Arc<T>`.
+///
+/// The access is granted only if this is the only reference to the object.
+/// Otherwise, `None` is returned.
+///
+/// # Examples
+///
+/// ```
+/// # #![feature(alloc)]
+/// # extern crate alloc;
+/// use alloc::arc;
+///
+/// let mut four = arc::Arc::new(4);
+///
+/// arc::unique(&mut four).map(|num| *num = 5);
+/// ```
+#[inline]
+#[unstable(feature = "alloc")]
+pub fn unique<T>(this: &mut Arc<T>) -> Option<&mut T> {
+ if strong_count(this) == 1 && weak_count(this) == 0 {
+ // This unsafety is ok because we're guaranteed that the pointer
+ // returned is the *only* pointer that will ever be returned to T. Our
+ // reference count is guaranteed to be 1 at this point, and we required
+ // the Arc itself to be `mut`, so we're returning the only possible
+ // reference to the inner data.
+ let inner = unsafe { &mut **this._ptr };
+ Some(&mut inner.data)
+    } else {
+ None
+ }
+}
+
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Clone for Arc<T> {
/// Makes a clone of the `Arc<T>`.
self.inner().weak.load(SeqCst) != 1 {
*self = Arc::new((**self).clone())
}
- // This unsafety is ok because we're guaranteed that the pointer
- // returned is the *only* pointer that will ever be returned to T. Our
- // reference count is guaranteed to be 1 at this point, and we required
- // the Arc itself to be `mut`, so we're returning the only possible
- // reference to the inner data.
+ // As with `unique()`, the unsafety is ok because our reference was
+ // either unique to begin with, or became one upon cloning the contents.
let inner = unsafe { &mut **self._ptr };
&mut inner.data
}
use std::sync::atomic::Ordering::{Acquire, SeqCst};
use std::thread;
use std::vec::Vec;
- use super::{Arc, Weak, weak_count, strong_count};
+ use super::{Arc, Weak, weak_count, strong_count, unique};
use std::sync::Mutex;
struct Canary(*mut atomic::AtomicUsize);
assert_eq!((*arc_v)[4], 5);
}
+ #[test]
+ fn test_arc_unique() {
+ let mut x = Arc::new(10);
+ assert!(unique(&mut x).is_some());
+ {
+ let y = x.clone();
+ assert!(unique(&mut x).is_none());
+ }
+ {
+ let z = x.downgrade();
+ assert!(unique(&mut x).is_none());
+ }
+ assert!(unique(&mut x).is_some());
+ }
+
#[test]
fn test_cowarc_clone_make_unique() {
let mut cow0 = Arc::new(75);
use core::any::Any;
use core::cmp::Ordering;
use core::default::Default;
-use core::error::Error;
use core::fmt;
use core::hash::{self, Hash};
use core::mem;
use core::ops::{Deref, DerefMut};
-use core::ptr::{self, Unique};
-use core::raw::{TraitObject, Slice};
-
-use heap;
+use core::ptr::{Unique};
+use core::raw::{TraitObject};
/// A value that represents the heap. This is the default place that the `box`
/// keyword allocates into when no place is supplied.
/// See the [module-level documentation](../../std/boxed/index.html) for more.
#[lang = "owned_box"]
#[stable(feature = "rust1", since = "1.0.0")]
+#[fundamental]
pub struct Box<T>(Unique<T>);
impl<T> Box<T> {
}
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl fmt::Debug for Box<Any> {
- fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
- f.pad("Box<Any>")
- }
-}
-
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: ?Sized> Deref for Box<T> {
type Target = T;
#[stable(feature = "rust1", since = "1.0.0")]
impl<I: ExactSizeIterator + ?Sized> ExactSizeIterator for Box<I> {}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, E: Error + 'a> From<E> for Box<Error + 'a> {
- fn from(err: E) -> Box<Error + 'a> {
- Box::new(err)
- }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, E: Error + Send + 'a> From<E> for Box<Error + Send + 'a> {
- fn from(err: E) -> Box<Error + Send + 'a> {
- Box::new(err)
- }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, 'b> From<&'b str> for Box<Error + Send + 'a> {
- fn from(err: &'b str) -> Box<Error + Send + 'a> {
- #[derive(Debug)]
- struct StringError(Box<str>);
- impl Error for StringError {
- fn description(&self) -> &str { &self.0 }
- }
- impl fmt::Display for StringError {
- fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
- self.0.fmt(f)
- }
- }
-
- // Unfortunately `String` is located in libcollections, so we construct
- // a `Box<str>` manually here.
- unsafe {
- let alloc = if err.len() == 0 {
- 0 as *mut u8
- } else {
- let ptr = heap::allocate(err.len(), 1);
- if ptr.is_null() { ::oom(); }
- ptr as *mut u8
- };
- ptr::copy(err.as_bytes().as_ptr(), alloc, err.len());
- Box::new(StringError(mem::transmute(Slice {
- data: alloc,
- len: err.len(),
- })))
- }
- }
-}
let b = Box::new(Test) as Box<Any>;
let a_str = format!("{:?}", a);
let b_str = format!("{:?}", b);
- assert_eq!(a_str, "Box<Any>");
- assert_eq!(b_str, "Box<Any>");
+ assert_eq!(a_str, "Any");
+ assert_eq!(b_str, "Any");
static EIGHT: usize = 8;
static TEST: Test = Test;
let a = &EIGHT as &Any;
let b = &TEST as &Any;
let s = format!("{:?}", a);
- assert_eq!(s, "&Any");
+ assert_eq!(s, "Any");
let s = format!("{:?}", b);
- assert_eq!(s, "&Any");
+ assert_eq!(s, "Any");
}
#[test]
#![feature(no_std)]
#![no_std]
#![feature(allocator)]
+#![feature(custom_attribute)]
+#![feature(fundamental)]
#![feature(lang_items, unsafe_destructor)]
#![feature(box_syntax)]
#![feature(optin_builtin_traits)]
use core::prelude::*;
use core::default::Default;
-use core::error::Error;
use core::fmt;
use core::hash;
use core::iter::{IntoIterator, FromIterator};
}
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl Error for FromUtf8Error {
- fn description(&self) -> &str { "invalid utf-8" }
-}
-
#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Display for FromUtf16Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
}
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl Error for FromUtf16Error {
- fn description(&self) -> &str { "invalid utf-16" }
-}
-
#[stable(feature = "rust1", since = "1.0.0")]
impl FromIterator<char> for String {
fn from_iter<I: IntoIterator<Item=char>>(iter: I) -> String {
#![stable(feature = "rust1", since = "1.0.0")]
+use fmt;
use marker::Send;
use mem::transmute;
use option::Option::{self, Some, None};
// Extension methods for Any trait objects.
///////////////////////////////////////////////////////////////////////////////
+#[stable(feature = "rust1", since = "1.0.0")]
+impl fmt::Debug for Any {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ f.pad("Any")
+ }
+}
+
impl Any {
/// Returns true if the boxed type is the same as `T`
#[stable(feature = "rust1", since = "1.0.0")]
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! Traits for working with Errors.
-//!
-//! # The `Error` trait
-//!
-//! `Error` is a trait representing the basic expectations for error values,
-//! i.e. values of type `E` in `Result<T, E>`. At a minimum, errors must provide
-//! a description, but they may optionally provide additional detail (via
-//! `Display`) and cause chain information:
-//!
-//! ```
-//! use std::fmt::Display;
-//!
-//! trait Error: Display {
-//! fn description(&self) -> &str;
-//!
-//! fn cause(&self) -> Option<&Error> { None }
-//! }
-//! ```
-//!
-//! The `cause` method is generally used when errors cross "abstraction
-//! boundaries", i.e. when a one module must report an error that is "caused"
-//! by an error from a lower-level module. This setup makes it possible for the
-//! high-level module to provide its own errors that do not commit to any
-//! particular implementation, but also reveal some of its implementation for
-//! debugging via `cause` chains.
-
-#![stable(feature = "rust1", since = "1.0.0")]
-
-use prelude::*;
-use fmt::{Debug, Display};
-
-/// Base functionality for all errors in Rust.
-#[stable(feature = "rust1", since = "1.0.0")]
-pub trait Error: Debug + Display {
- /// A short description of the error.
- ///
- /// The description should not contain newlines or sentence-ending
- /// punctuation, to facilitate embedding in larger user-facing
- /// strings.
- #[stable(feature = "rust1", since = "1.0.0")]
- fn description(&self) -> &str;
-
- /// The lower-level cause of this error, if any.
- #[stable(feature = "rust1", since = "1.0.0")]
- fn cause(&self) -> Option<&Error> { None }
-}
#![stable(feature = "rust1", since = "1.0.0")]
-use any;
use cell::{Cell, RefCell, Ref, RefMut, BorrowState};
use char::CharExt;
use iter::Iterator;
tuple! { T0, T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, }
-#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a> Debug for &'a (any::Any+'a) {
- fn fmt(&self, f: &mut Formatter) -> Result { f.pad("&Any") }
-}
-
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Debug> Debug for [T] {
fn fmt(&self, f: &mut Formatter) -> Result {
//! let mut it = values.into_iter();
//! loop {
//! match it.next() {
-//! Some(x) => {
-//! println!("{}", x);
-//! }
-//! None => { break }
+//! Some(x) => println!("{}", x),
+//! None => break,
//! }
//! }
//! ```
#![feature(unboxed_closures)]
#![feature(rustc_attrs)]
#![feature(optin_builtin_traits)]
+#![feature(fundamental)]
#![feature(concat_idents)]
#![feature(reflect)]
+#![feature(custom_attribute)]
#[macro_use]
mod macros;
pub mod str;
pub mod hash;
pub mod fmt;
-pub mod error;
#[doc(primitive = "bool")]
mod bool {
#[stable(feature = "rust1", since = "1.0.0")]
#[lang="sized"]
#[rustc_on_unimplemented = "`{Self}` does not have a constant size known at compile-time"]
+#[fundamental] // for Default, for example, which requires that `[T]: !Default` be evaluatable
pub trait Sized : MarkerTrait {
// Empty.
}
#[stable(feature = "rust1", since = "1.0.0")]
pub trait PhantomFn<A:?Sized,R:?Sized=()> { }
-/// `PhantomData` is a way to tell the compiler about fake fields.
-/// Phantom data is required whenever type parameters are not used.
-/// The idea is that if the compiler encounters a `PhantomData<T>`
-/// instance, it will behave *as if* an instance of the type `T` were
-/// present for the purpose of various automatic analyses.
+/// `PhantomData<T>` allows you to describe that a type acts as if it stores a value of type `T`,
+/// even though it does not. This allows you to inform the compiler about certain safety properties
+/// of your code.
+///
+/// Though they both have scary names, `PhantomData<T>` and "phantom types" are unrelated. 👻👻👻
///
/// # Examples
///
/// When handling external resources over a foreign function interface, `PhantomData<T>` can
-/// prevent mismatches by enforcing types in the method implementations, although the struct
-/// doesn't actually contain values of the resource type.
+/// prevent mismatches by enforcing types in the method implementations:
///
/// ```
/// # trait ResType { fn foo(&self); };
/// commonly necessary if the structure is using an unsafe pointer
/// like `*mut T` whose referent may be dropped when the type is
/// dropped, as a `*mut T` is otherwise not treated as owned.
-///
-/// FIXME. Better documentation and examples of common patterns needed
-/// here! For now, please see [RFC 738][738] for more information.
-///
-/// [738]: https://github.com/rust-lang/rfcs/blob/master/text/0738-variance.md
#[lang="phantom_data"]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct PhantomData<T:?Sized>;
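As a quick illustration of the rewritten docs above, here is a minimal sketch of the FFI-style pattern they describe, using today's stable `std::marker::PhantomData`. The `Handle`, `Texture`, and `Buffer` names are hypothetical, invented for this example:

```rust
use std::marker::PhantomData;

// The struct stores only a raw id, but PhantomData<T> makes the
// compiler treat Handle<Texture> and Handle<Buffer> as distinct
// types, preventing mismatched resource handles at compile time.
struct Handle<T> {
    id: usize,
    _marker: PhantomData<T>,
}

impl<T> Handle<T> {
    fn new(id: usize) -> Handle<T> {
        Handle { id: id, _marker: PhantomData }
    }
}

struct Texture;
struct Buffer;

fn main() {
    let t: Handle<Texture> = Handle::new(1);
    let b: Handle<Buffer> = Handle::new(2);
    // Both wrap a plain usize, yet have incompatible types.
    assert_eq!(t.id, 1);
    assert_eq!(b.id, 2);
}
```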
use char::CharExt;
use clone::Clone;
use cmp::{PartialEq, Eq, PartialOrd, Ord};
-use error::Error;
use fmt;
use intrinsics;
use iter::Iterator;
Underflow,
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl fmt::Display for ParseIntError {
- fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
- self.description().fmt(f)
- }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl Error for ParseIntError {
- fn description(&self) -> &str {
+impl ParseIntError {
+ #[unstable(feature = "core", reason = "available through Error trait")]
+ pub fn description(&self) -> &str {
match self.kind {
IntErrorKind::Empty => "cannot parse integer from empty string",
IntErrorKind::InvalidDigit => "invalid digit found in string",
}
}
+#[stable(feature = "rust1", since = "1.0.0")]
+impl fmt::Display for ParseIntError {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ self.description().fmt(f)
+ }
+}
+
/// An error which can be returned when parsing a float.
#[derive(Debug, Clone, PartialEq)]
#[stable(feature = "rust1", since = "1.0.0")]
Invalid,
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl fmt::Display for ParseFloatError {
- fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
- self.description().fmt(f)
- }
-}
-
-#[stable(feature = "rust1", since = "1.0.0")]
-impl Error for ParseFloatError {
- fn description(&self) -> &str {
+impl ParseFloatError {
+ #[unstable(feature = "core", reason = "available through Error trait")]
+ pub fn description(&self) -> &str {
match self.kind {
FloatErrorKind::Empty => "cannot parse float from empty string",
FloatErrorKind::Invalid => "invalid float literal",
}
}
}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl fmt::Display for ParseFloatError {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ self.description().fmt(f)
+ }
+}
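The pattern in the hunks above, where `Display` forwards to an inherent `description` method, is observable from user code. A small sketch, assuming today's stable standard library, where these same messages surface through `Display` on `ParseIntError`:

```rust
fn main() {
    // The two IntErrorKind cases listed above: an empty input and
    // a non-digit character. Display forwards to description().
    let empty = "".parse::<i32>().unwrap_err();
    let invalid = "12x".parse::<i32>().unwrap_err();

    assert_eq!(empty.to_string(), "cannot parse integer from empty string");
    assert_eq!(invalid.to_string(), "invalid digit found in string");
}
```

`ParseFloatError` follows the same shape with its `Empty` and `Invalid` kinds.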
#[lang="fn"]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_paren_sugar]
+#[fundamental] // so that regex can rely on `&str: !FnMut`
pub trait Fn<Args> : FnMut<Args> {
/// This is called when the call operator is used.
extern "rust-call" fn call(&self, args: Args) -> Self::Output;
#[lang="fn_mut"]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_paren_sugar]
+#[fundamental] // so that regex can rely on `&str: !FnMut`
pub trait FnMut<Args> : FnOnce<Args> {
/// This is called when the call operator is used.
extern "rust-call" fn call_mut(&mut self, args: Args) -> Self::Output;
#[lang="fn_once"]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_paren_sugar]
+#[fundamental] // so that regex can rely on `&str: !FnMut`
pub trait FnOnce<Args> {
/// The returned type after the call operator is used.
type Output;
/// This is called when the call operator is used.
extern "rust-call" fn call_once(self, args: Args) -> Self::Output;
}
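The three traits defined above differ in how they take `self`: `Fn` calls through `&self`, `FnMut` through `&mut self`, and `FnOnce` consumes `self`. A minimal sketch of that hierarchy from the caller's side, using today's stable sugar for these traits (the `apply*` helpers are hypothetical):

```rust
fn apply<F: Fn() -> i32>(f: F) -> i32 { f() }
fn apply_mut<F: FnMut() -> i32>(mut f: F) -> i32 { f() }
fn apply_once<F: FnOnce() -> String>(f: F) -> String { f() }

fn main() {
    // Fn: only reads its environment.
    let x = 10;
    assert_eq!(apply(|| x + 1), 11);

    // FnMut: mutates its environment, so needs call_mut(&mut self).
    let mut count = 0;
    assert_eq!(apply_mut(|| { count += 1; count }), 1);

    // FnOnce: moves a value out of its environment, so it can
    // only be called once -- call_once takes self by value.
    let s = String::from("hi");
    assert_eq!(apply_once(move || s), "hi");
}
```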
+
+#[cfg(not(stage0))]
+mod impls {
+ use marker::Sized;
+ use super::{Fn, FnMut, FnOnce};
+
+ impl<'a,A,F:?Sized> Fn<A> for &'a F
+ where F : Fn<A>
+ {
+ extern "rust-call" fn call(&self, args: A) -> F::Output {
+ (**self).call(args)
+ }
+ }
+
+ impl<'a,A,F:?Sized> FnMut<A> for &'a F
+ where F : Fn<A>
+ {
+ extern "rust-call" fn call_mut(&mut self, args: A) -> F::Output {
+ (**self).call(args)
+ }
+ }
+
+ impl<'a,A,F:?Sized> FnOnce<A> for &'a F
+ where F : Fn<A>
+ {
+ type Output = F::Output;
+
+ extern "rust-call" fn call_once(self, args: A) -> F::Output {
+ (*self).call(args)
+ }
+ }
+
+ impl<'a,A,F:?Sized> FnMut<A> for &'a mut F
+ where F : FnMut<A>
+ {
+ extern "rust-call" fn call_mut(&mut self, args: A) -> F::Output {
+ (*self).call_mut(args)
+ }
+ }
+
+ impl<'a,A,F:?Sized> FnOnce<A> for &'a mut F
+ where F : FnMut<A>
+ {
+ type Output = F::Output;
+ extern "rust-call" fn call_once(mut self, args: A) -> F::Output {
+ (*self).call_mut(args)
+ }
+ }
+}
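The blanket impls added above mean a reference to a closure is itself callable: `&F: Fn` whenever `F: Fn`, and `&mut F: FnMut` whenever `F: FnMut`. A sketch of what that buys callers, using stable syntax (`twice` is a hypothetical helper):

```rust
// Generic over Fn; thanks to the blanket impls, a shared
// reference to a closure satisfies the bound too.
fn twice<F>(f: F, x: i32) -> i32
    where F: Fn(i32) -> i32
{
    f(f(x))
}

fn main() {
    let plus_one = |x: i32| x + 1;
    assert_eq!(twice(&plus_one, 5), 7); // by reference, closure reusable after
    assert_eq!(twice(plus_one, 5), 7);  // by value
}
```

Without these impls, passing `&plus_one` would fail because only the closure type itself, not references to it, implemented the call traits.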
use clone::Clone;
use cmp::{self, Eq};
use default::Default;
-use error::Error;
use fmt;
use iter::ExactSizeIterator;
use iter::{Map, Iterator, DoubleEndedIterator};
}
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl Error for ParseBoolError {
- fn description(&self) -> &str { "failed to parse bool" }
-}
-
/*
Section: Creating a string
*/
mem::transmute(v)
}
-#[stable(feature = "rust1", since = "1.0.0")]
-impl Error for Utf8Error {
- fn description(&self) -> &str {
- match *self {
- Utf8Error::TooShort => "invalid utf-8: not enough bytes",
- Utf8Error::InvalidByte(..) => "invalid utf-8: corrupt contents",
- }
- }
-}
-
#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Display for Utf8Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
pub mod traits;
pub mod ty;
pub mod ty_fold;
+ pub mod ty_match;
+ pub mod ty_relate;
pub mod ty_walk;
pub mod weak_lang_items;
}
use std::io::{Cursor, SeekFrom};
use syntax::abi;
use syntax::ast::{self, DefId, NodeId};
-use syntax::ast_map::{PathElem, PathElems};
-use syntax::ast_map;
+use syntax::ast_map::{self, LinkedPath, PathElem, PathElems};
use syntax::ast_util::*;
use syntax::ast_util;
use syntax::attr;
&krate.module,
&[],
ast::CRATE_NODE_ID,
- [].iter().cloned().chain(None),
+ [].iter().cloned().chain(LinkedPath::empty()),
syntax::parse::token::special_idents::invalid,
ast::Public);
}
// Encode reexports for the root module.
- encode_reexports(ecx, rbml_w, 0, [].iter().cloned().chain(None));
+ encode_reexports(ecx, rbml_w, 0, [].iter().cloned().chain(LinkedPath::empty()));
rbml_w.end_tag();
rbml_w.end_tag();
pprust::NodeIdent(_) | pprust::NodeName(_) => 0,
pprust::NodeExpr(expr) => expr.id,
pprust::NodeBlock(blk) => blk.id,
- pprust::NodeItem(_) => 0,
+ pprust::NodeItem(_) | pprust::NodeSubItem(_) => 0,
pprust::NodePat(pat) => pat.id
};
/// <T as Trait>::AssocX::AssocY::MethodOrAssocType
/// ^~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~~~~~~~~~
/// base_def depth = 2
-#[derive(Copy, Debug)]
+#[derive(Copy, Clone, Debug)]
pub struct PathResolution {
pub base_def: Def,
pub last_private: LastPrivate,
pub fn def_id(&self) -> ast::DefId {
self.full_def().def_id()
}
+
+ pub fn new(base_def: Def,
+ last_private: LastPrivate,
+ depth: usize)
+ -> PathResolution {
+ PathResolution {
+ base_def: base_def,
+ last_private: last_private,
+ depth: depth,
+ }
+ }
}
// Definition mapping
//! In particular, it might be enough to say (A,B) are bivariant for
//! all (A,B).
-use middle::ty::BuiltinBounds;
+use super::combine::{self, CombineFields};
+use super::type_variable::{BiTo};
+
use middle::ty::{self, Ty};
use middle::ty::TyVar;
-use middle::infer::combine::*;
-use middle::infer::cres;
-use middle::infer::type_variable::BiTo;
-use util::ppaux::Repr;
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
+use util::ppaux::{Repr};
-pub struct Bivariate<'f, 'tcx: 'f> {
- fields: CombineFields<'f, 'tcx>
+pub struct Bivariate<'a, 'tcx: 'a> {
+ fields: CombineFields<'a, 'tcx>
}
-#[allow(non_snake_case)]
-pub fn Bivariate<'f, 'tcx>(cf: CombineFields<'f, 'tcx>) -> Bivariate<'f, 'tcx> {
- Bivariate { fields: cf }
+impl<'a, 'tcx> Bivariate<'a, 'tcx> {
+ pub fn new(fields: CombineFields<'a, 'tcx>) -> Bivariate<'a, 'tcx> {
+ Bivariate { fields: fields }
+ }
}
-impl<'f, 'tcx> Combine<'tcx> for Bivariate<'f, 'tcx> {
- fn tag(&self) -> String { "Bivariate".to_string() }
- fn fields<'a>(&'a self) -> &'a CombineFields<'a, 'tcx> { &self.fields }
+impl<'a, 'tcx> TypeRelation<'a, 'tcx> for Bivariate<'a, 'tcx> {
+ fn tag(&self) -> &'static str { "Bivariate" }
- fn tys_with_variance(&self, v: ty::Variance, a: Ty<'tcx>, b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
- {
- match v {
- ty::Invariant => self.equate().tys(a, b),
- ty::Covariant => self.tys(a, b),
- ty::Contravariant => self.tys(a, b),
- ty::Bivariant => self.tys(a, b),
- }
- }
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.fields.tcx() }
- fn regions_with_variance(&self, v: ty::Variance, a: ty::Region, b: ty::Region)
- -> cres<'tcx, ty::Region>
- {
- match v {
- ty::Invariant => self.equate().regions(a, b),
- ty::Covariant => self.regions(a, b),
- ty::Contravariant => self.regions(a, b),
- ty::Bivariant => self.regions(a, b),
- }
- }
+ fn a_is_expected(&self) -> bool { self.fields.a_is_expected }
- fn regions(&self, a: ty::Region, _: ty::Region) -> cres<'tcx, ty::Region> {
- Ok(a)
- }
-
- fn builtin_bounds(&self,
- a: BuiltinBounds,
- b: BuiltinBounds)
- -> cres<'tcx, BuiltinBounds>
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ variance: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>
{
- if a != b {
- Err(ty::terr_builtin_bounds(expected_found(self, a, b)))
- } else {
- Ok(a)
+ match variance {
+ // If we have Foo<A> and Foo is invariant w/r/t A,
+ // and we want to assert that
+ //
+ // Foo<A> <: Foo<B> ||
+ // Foo<B> <: Foo<A>
+ //
+ // then still A must equal B.
+ ty::Invariant => self.relate(a, b),
+
+ ty::Covariant => self.relate(a, b),
+ ty::Bivariant => self.relate(a, b),
+ ty::Contravariant => self.relate(a, b),
}
}
- fn tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>> {
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
debug!("{}.tys({}, {})", self.tag(),
a.repr(self.fields.infcx.tcx), b.repr(self.fields.infcx.tcx));
if a == b { return Ok(a); }
}
_ => {
- super_tys(self, a, b)
+ combine::super_combine_tys(self.fields.infcx, self, a, b)
}
}
}
- fn binders<T>(&self, a: &ty::Binder<T>, b: &ty::Binder<T>) -> cres<'tcx, ty::Binder<T>>
- where T : Combineable<'tcx>
+ fn regions(&mut self, a: ty::Region, _: ty::Region) -> RelateResult<'tcx, ty::Region> {
+ Ok(a)
+ }
+
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a,'tcx>
{
let a1 = ty::erase_late_bound_regions(self.tcx(), a);
let b1 = ty::erase_late_bound_regions(self.tcx(), b);
- let c = try!(Combineable::combine(self, &a1, &b1));
+ let c = try!(self.relate(&a1, &b1));
Ok(ty::Binder(c))
}
}
use super::glb::Glb;
use super::lub::Lub;
use super::sub::Sub;
-use super::unify::InferCtxtMethodsForSimplyUnifiableTypes;
-use super::{InferCtxt, cres};
+use super::{InferCtxt};
use super::{MiscVariable, TypeTrace};
use super::type_variable::{RelationDir, BiTo, EqTo, SubtypeOf, SupertypeOf};
-use middle::subst;
-use middle::subst::{ErasedRegions, NonerasedRegions, Substs};
-use middle::ty::{FloatVar, FnSig, IntVar, TyVar};
+use middle::ty::{TyVar};
use middle::ty::{IntType, UintType};
-use middle::ty::BuiltinBounds;
use middle::ty::{self, Ty};
use middle::ty_fold;
use middle::ty_fold::{TypeFolder, TypeFoldable};
+use middle::ty_relate::{self, Relate, RelateResult, TypeRelation};
use util::ppaux::Repr;
-use std::rc::Rc;
-use syntax::ast::Unsafety;
use syntax::ast;
-use syntax::abi;
use syntax::codemap::Span;
-pub trait Combine<'tcx> : Sized {
- fn tcx<'a>(&'a self) -> &'a ty::ctxt<'tcx> { self.infcx().tcx }
- fn tag(&self) -> String;
-
- fn fields<'a>(&'a self) -> &'a CombineFields<'a, 'tcx>;
-
- fn infcx<'a>(&'a self) -> &'a InferCtxt<'a, 'tcx> { self.fields().infcx }
- fn a_is_expected(&self) -> bool { self.fields().a_is_expected }
- fn trace(&self) -> TypeTrace<'tcx> { self.fields().trace.clone() }
- fn equate<'a>(&'a self) -> Equate<'a, 'tcx> { self.fields().equate() }
- fn bivariate<'a>(&'a self) -> Bivariate<'a, 'tcx> { self.fields().bivariate() }
-
- fn sub<'a>(&'a self) -> Sub<'a, 'tcx> { self.fields().sub() }
- fn lub<'a>(&'a self) -> Lub<'a, 'tcx> { Lub(self.fields().clone()) }
- fn glb<'a>(&'a self) -> Glb<'a, 'tcx> { Glb(self.fields().clone()) }
-
- fn mts(&self, a: &ty::mt<'tcx>, b: &ty::mt<'tcx>) -> cres<'tcx, ty::mt<'tcx>> {
- debug!("{}.mts({}, {})",
- self.tag(),
- a.repr(self.tcx()),
- b.repr(self.tcx()));
-
- if a.mutbl != b.mutbl {
- Err(ty::terr_mutability)
- } else {
- let mutbl = a.mutbl;
- let variance = match mutbl {
- ast::MutImmutable => ty::Covariant,
- ast::MutMutable => ty::Invariant,
- };
- let ty = try!(self.tys_with_variance(variance, a.ty, b.ty));
- Ok(ty::mt {ty: ty, mutbl: mutbl})
- }
- }
-
- fn tys_with_variance(&self, variance: ty::Variance, a: Ty<'tcx>, b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>;
-
- fn tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>>;
-
- fn regions_with_variance(&self, variance: ty::Variance, a: ty::Region, b: ty::Region)
- -> cres<'tcx, ty::Region>;
-
- fn regions(&self, a: ty::Region, b: ty::Region) -> cres<'tcx, ty::Region>;
-
- fn substs(&self,
- item_def_id: ast::DefId,
- a_subst: &subst::Substs<'tcx>,
- b_subst: &subst::Substs<'tcx>)
- -> cres<'tcx, subst::Substs<'tcx>>
- {
- debug!("substs: item_def_id={} a_subst={} b_subst={}",
- item_def_id.repr(self.infcx().tcx),
- a_subst.repr(self.infcx().tcx),
- b_subst.repr(self.infcx().tcx));
-
- let variances = if self.infcx().tcx.variance_computed.get() {
- Some(ty::item_variances(self.infcx().tcx, item_def_id))
- } else {
- None
- };
- self.substs_variances(variances.as_ref().map(|v| &**v), a_subst, b_subst)
- }
-
- fn substs_variances(&self,
- variances: Option<&ty::ItemVariances>,
- a_subst: &subst::Substs<'tcx>,
- b_subst: &subst::Substs<'tcx>)
- -> cres<'tcx, subst::Substs<'tcx>>
- {
- let mut substs = subst::Substs::empty();
-
- for &space in &subst::ParamSpace::all() {
- let a_tps = a_subst.types.get_slice(space);
- let b_tps = b_subst.types.get_slice(space);
- let t_variances = variances.map(|v| v.types.get_slice(space));
- let tps = try!(relate_type_params(self, t_variances, a_tps, b_tps));
- substs.types.replace(space, tps);
- }
-
- match (&a_subst.regions, &b_subst.regions) {
- (&ErasedRegions, _) | (_, &ErasedRegions) => {
- substs.regions = ErasedRegions;
- }
-
- (&NonerasedRegions(ref a), &NonerasedRegions(ref b)) => {
- for &space in &subst::ParamSpace::all() {
- let a_regions = a.get_slice(space);
- let b_regions = b.get_slice(space);
- let r_variances = variances.map(|v| v.regions.get_slice(space));
- let regions = try!(relate_region_params(self,
- r_variances,
- a_regions,
- b_regions));
- substs.mut_regions().replace(space, regions);
- }
- }
- }
-
- return Ok(substs);
-
- fn relate_type_params<'tcx, C: Combine<'tcx>>(this: &C,
- variances: Option<&[ty::Variance]>,
- a_tys: &[Ty<'tcx>],
- b_tys: &[Ty<'tcx>])
- -> cres<'tcx, Vec<Ty<'tcx>>>
- {
- if a_tys.len() != b_tys.len() {
- return Err(ty::terr_ty_param_size(expected_found(this,
- a_tys.len(),
- b_tys.len())));
- }
-
- (0.. a_tys.len()).map(|i| {
- let a_ty = a_tys[i];
- let b_ty = b_tys[i];
- let v = variances.map_or(ty::Invariant, |v| v[i]);
- this.tys_with_variance(v, a_ty, b_ty)
- }).collect()
- }
-
- fn relate_region_params<'tcx, C: Combine<'tcx>>(this: &C,
- variances: Option<&[ty::Variance]>,
- a_rs: &[ty::Region],
- b_rs: &[ty::Region])
- -> cres<'tcx, Vec<ty::Region>>
- {
- let tcx = this.infcx().tcx;
- let num_region_params = a_rs.len();
-
- debug!("relate_region_params(\
- a_rs={}, \
- b_rs={},
- variances={})",
- a_rs.repr(tcx),
- b_rs.repr(tcx),
- variances.repr(tcx));
-
- assert_eq!(num_region_params,
- variances.map_or(num_region_params,
- |v| v.len()));
-
- assert_eq!(num_region_params, b_rs.len());
-
- (0..a_rs.len()).map(|i| {
- let a_r = a_rs[i];
- let b_r = b_rs[i];
- let variance = variances.map_or(ty::Invariant, |v| v[i]);
- this.regions_with_variance(variance, a_r, b_r)
- }).collect()
- }
- }
-
- fn bare_fn_tys(&self, a: &ty::BareFnTy<'tcx>,
- b: &ty::BareFnTy<'tcx>) -> cres<'tcx, ty::BareFnTy<'tcx>> {
- let unsafety = try!(self.unsafeties(a.unsafety, b.unsafety));
- let abi = try!(self.abi(a.abi, b.abi));
- let sig = try!(self.binders(&a.sig, &b.sig));
- Ok(ty::BareFnTy {unsafety: unsafety,
- abi: abi,
- sig: sig})
- }
-
- fn fn_sigs(&self, a: &ty::FnSig<'tcx>, b: &ty::FnSig<'tcx>) -> cres<'tcx, ty::FnSig<'tcx>> {
- if a.variadic != b.variadic {
- return Err(ty::terr_variadic_mismatch(expected_found(self, a.variadic, b.variadic)));
- }
-
- let inputs = try!(argvecs(self,
- &a.inputs,
- &b.inputs));
-
- let output = try!(match (a.output, b.output) {
- (ty::FnConverging(a_ty), ty::FnConverging(b_ty)) =>
- Ok(ty::FnConverging(try!(self.tys(a_ty, b_ty)))),
- (ty::FnDiverging, ty::FnDiverging) =>
- Ok(ty::FnDiverging),
- (a, b) =>
- Err(ty::terr_convergence_mismatch(
- expected_found(self, a != ty::FnDiverging, b != ty::FnDiverging))),
- });
-
- return Ok(ty::FnSig {inputs: inputs,
- output: output,
- variadic: a.variadic});
-
-
- fn argvecs<'tcx, C>(combiner: &C,
- a_args: &[Ty<'tcx>],
- b_args: &[Ty<'tcx>])
- -> cres<'tcx, Vec<Ty<'tcx>>>
- where C: Combine<'tcx> {
- if a_args.len() == b_args.len() {
- a_args.iter().zip(b_args.iter())
- .map(|(a, b)| combiner.args(*a, *b)).collect()
- } else {
- Err(ty::terr_arg_count)
- }
- }
- }
-
- fn args(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>> {
- self.tys_with_variance(ty::Contravariant, a, b).and_then(|t| Ok(t))
- }
-
- fn unsafeties(&self, a: Unsafety, b: Unsafety) -> cres<'tcx, Unsafety> {
- if a != b {
- Err(ty::terr_unsafety_mismatch(expected_found(self, a, b)))
- } else {
- Ok(a)
- }
- }
-
- fn abi(&self, a: abi::Abi, b: abi::Abi) -> cres<'tcx, abi::Abi> {
- if a == b {
- Ok(a)
- } else {
- Err(ty::terr_abi_mismatch(expected_found(self, a, b)))
- }
- }
-
- fn projection_tys(&self,
- a: &ty::ProjectionTy<'tcx>,
- b: &ty::ProjectionTy<'tcx>)
- -> cres<'tcx, ty::ProjectionTy<'tcx>>
- {
- if a.item_name != b.item_name {
- Err(ty::terr_projection_name_mismatched(
- expected_found(self, a.item_name, b.item_name)))
- } else {
- let trait_ref = try!(self.trait_refs(&*a.trait_ref, &*b.trait_ref));
- Ok(ty::ProjectionTy { trait_ref: Rc::new(trait_ref), item_name: a.item_name })
- }
- }
-
- fn projection_predicates(&self,
- a: &ty::ProjectionPredicate<'tcx>,
- b: &ty::ProjectionPredicate<'tcx>)
- -> cres<'tcx, ty::ProjectionPredicate<'tcx>>
- {
- let projection_ty = try!(self.projection_tys(&a.projection_ty, &b.projection_ty));
- let ty = try!(self.tys(a.ty, b.ty));
- Ok(ty::ProjectionPredicate { projection_ty: projection_ty, ty: ty })
- }
-
- fn projection_bounds(&self,
- a: &Vec<ty::PolyProjectionPredicate<'tcx>>,
- b: &Vec<ty::PolyProjectionPredicate<'tcx>>)
- -> cres<'tcx, Vec<ty::PolyProjectionPredicate<'tcx>>>
- {
- // To be compatible, `a` and `b` must be for precisely the
- // same set of traits and item names. We always require that
- // projection bounds lists are sorted by trait-def-id and item-name,
- // so we can just iterate through the lists pairwise, so long as they are the
- // same length.
- if a.len() != b.len() {
- Err(ty::terr_projection_bounds_length(expected_found(self, a.len(), b.len())))
- } else {
- a.iter()
- .zip(b.iter())
- .map(|(a, b)| self.binders(a, b))
- .collect()
- }
- }
-
- fn existential_bounds(&self,
- a: &ty::ExistentialBounds<'tcx>,
- b: &ty::ExistentialBounds<'tcx>)
- -> cres<'tcx, ty::ExistentialBounds<'tcx>>
- {
- let r = try!(self.regions_with_variance(ty::Contravariant, a.region_bound, b.region_bound));
- let nb = try!(self.builtin_bounds(a.builtin_bounds, b.builtin_bounds));
- let pb = try!(self.projection_bounds(&a.projection_bounds, &b.projection_bounds));
- Ok(ty::ExistentialBounds { region_bound: r,
- builtin_bounds: nb,
- projection_bounds: pb })
- }
-
- fn builtin_bounds(&self,
- a: BuiltinBounds,
- b: BuiltinBounds)
- -> cres<'tcx, BuiltinBounds>
- {
- // Two sets of builtin bounds are only relatable if they are
- // precisely the same (but see the coercion code).
- if a != b {
- Err(ty::terr_builtin_bounds(expected_found(self, a, b)))
- } else {
- Ok(a)
- }
- }
-
- fn trait_refs(&self,
- a: &ty::TraitRef<'tcx>,
- b: &ty::TraitRef<'tcx>)
- -> cres<'tcx, ty::TraitRef<'tcx>>
- {
- // Different traits cannot be related
- if a.def_id != b.def_id {
- Err(ty::terr_traits(expected_found(self, a.def_id, b.def_id)))
- } else {
- let substs = try!(self.substs(a.def_id, a.substs, b.substs));
- Ok(ty::TraitRef { def_id: a.def_id, substs: self.tcx().mk_substs(substs) })
- }
- }
-
- fn binders<T>(&self, a: &ty::Binder<T>, b: &ty::Binder<T>) -> cres<'tcx, ty::Binder<T>>
- where T : Combineable<'tcx>;
- // this must be overridden to do correctly, so as to account for higher-ranked
- // behavior
-}
-
-pub trait Combineable<'tcx> : Repr<'tcx> + TypeFoldable<'tcx> {
- fn combine<C:Combine<'tcx>>(combiner: &C, a: &Self, b: &Self) -> cres<'tcx, Self>;
-}
-
-impl<'tcx,T> Combineable<'tcx> for Rc<T>
- where T : Combineable<'tcx>
-{
- fn combine<C>(combiner: &C,
- a: &Rc<T>,
- b: &Rc<T>)
- -> cres<'tcx, Rc<T>>
- where C: Combine<'tcx> {
- Ok(Rc::new(try!(Combineable::combine(combiner, &**a, &**b))))
- }
-}
-
-impl<'tcx> Combineable<'tcx> for ty::TraitRef<'tcx> {
- fn combine<C>(combiner: &C,
- a: &ty::TraitRef<'tcx>,
- b: &ty::TraitRef<'tcx>)
- -> cres<'tcx, ty::TraitRef<'tcx>>
- where C: Combine<'tcx> {
- combiner.trait_refs(a, b)
- }
-}
-
-impl<'tcx> Combineable<'tcx> for Ty<'tcx> {
- fn combine<C>(combiner: &C,
- a: &Ty<'tcx>,
- b: &Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
- where C: Combine<'tcx> {
- combiner.tys(*a, *b)
- }
-}
-
-impl<'tcx> Combineable<'tcx> for ty::ProjectionPredicate<'tcx> {
- fn combine<C>(combiner: &C,
- a: &ty::ProjectionPredicate<'tcx>,
- b: &ty::ProjectionPredicate<'tcx>)
- -> cres<'tcx, ty::ProjectionPredicate<'tcx>>
- where C: Combine<'tcx> {
- combiner.projection_predicates(a, b)
- }
-}
-
-impl<'tcx> Combineable<'tcx> for ty::FnSig<'tcx> {
- fn combine<C>(combiner: &C,
- a: &ty::FnSig<'tcx>,
- b: &ty::FnSig<'tcx>)
- -> cres<'tcx, ty::FnSig<'tcx>>
- where C: Combine<'tcx> {
- combiner.fn_sigs(a, b)
- }
-}
-
#[derive(Clone)]
pub struct CombineFields<'a, 'tcx: 'a> {
pub infcx: &'a InferCtxt<'a, 'tcx>,
pub trace: TypeTrace<'tcx>,
}
-pub fn expected_found<'tcx, C, T>(this: &C,
- a: T,
- b: T)
- -> ty::expected_found<T>
- where C: Combine<'tcx> {
- if this.a_is_expected() {
- ty::expected_found {expected: a, found: b}
- } else {
- ty::expected_found {expected: b, found: a}
- }
-}
-
-pub fn super_tys<'tcx, C>(this: &C,
- a: Ty<'tcx>,
- b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
- where C: Combine<'tcx> {
- let tcx = this.infcx().tcx;
- let a_sty = &a.sty;
- let b_sty = &b.sty;
- debug!("super_tys: a_sty={:?} b_sty={:?}", a_sty, b_sty);
- return match (a_sty, b_sty) {
- // The "subtype" ought to be handling cases involving var:
- (&ty::ty_infer(TyVar(_)), _)
- | (_, &ty::ty_infer(TyVar(_))) =>
- tcx.sess.bug(
- &format!("{}: bot and var types should have been handled ({},{})",
- this.tag(),
- a.repr(this.infcx().tcx),
- b.repr(this.infcx().tcx))),
-
- (&ty::ty_err, _) | (_, &ty::ty_err) => Ok(tcx.types.err),
+pub fn super_combine_tys<'a,'tcx:'a,R>(infcx: &InferCtxt<'a, 'tcx>,
+ relation: &mut R,
+ a: Ty<'tcx>,
+ b: Ty<'tcx>)
+ -> RelateResult<'tcx, Ty<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+{
+ let a_is_expected = relation.a_is_expected();
+ match (&a.sty, &b.sty) {
// Relate integral variables to other types
- (&ty::ty_infer(IntVar(a_id)), &ty::ty_infer(IntVar(b_id))) => {
- try!(this.infcx().simple_vars(this.a_is_expected(),
- a_id, b_id));
+ (&ty::ty_infer(ty::IntVar(a_id)), &ty::ty_infer(ty::IntVar(b_id))) => {
+ try!(infcx.int_unification_table
+ .borrow_mut()
+ .unify_var_var(a_id, b_id)
+ .map_err(|e| int_unification_error(a_is_expected, e)));
Ok(a)
}
- (&ty::ty_infer(IntVar(v_id)), &ty::ty_int(v)) => {
- unify_integral_variable(this, this.a_is_expected(),
- v_id, IntType(v))
+ (&ty::ty_infer(ty::IntVar(v_id)), &ty::ty_int(v)) => {
+ unify_integral_variable(infcx, a_is_expected, v_id, IntType(v))
}
- (&ty::ty_int(v), &ty::ty_infer(IntVar(v_id))) => {
- unify_integral_variable(this, !this.a_is_expected(),
- v_id, IntType(v))
+ (&ty::ty_int(v), &ty::ty_infer(ty::IntVar(v_id))) => {
+ unify_integral_variable(infcx, !a_is_expected, v_id, IntType(v))
}
- (&ty::ty_infer(IntVar(v_id)), &ty::ty_uint(v)) => {
- unify_integral_variable(this, this.a_is_expected(),
- v_id, UintType(v))
+ (&ty::ty_infer(ty::IntVar(v_id)), &ty::ty_uint(v)) => {
+ unify_integral_variable(infcx, a_is_expected, v_id, UintType(v))
}
- (&ty::ty_uint(v), &ty::ty_infer(IntVar(v_id))) => {
- unify_integral_variable(this, !this.a_is_expected(),
- v_id, UintType(v))
+ (&ty::ty_uint(v), &ty::ty_infer(ty::IntVar(v_id))) => {
+ unify_integral_variable(infcx, !a_is_expected, v_id, UintType(v))
}
// Relate floating-point variables to other types
- (&ty::ty_infer(FloatVar(a_id)), &ty::ty_infer(FloatVar(b_id))) => {
- try!(this.infcx().simple_vars(this.a_is_expected(), a_id, b_id));
+ (&ty::ty_infer(ty::FloatVar(a_id)), &ty::ty_infer(ty::FloatVar(b_id))) => {
+ try!(infcx.float_unification_table
+ .borrow_mut()
+ .unify_var_var(a_id, b_id)
+ .map_err(|e| float_unification_error(relation.a_is_expected(), e)));
Ok(a)
}
- (&ty::ty_infer(FloatVar(v_id)), &ty::ty_float(v)) => {
- unify_float_variable(this, this.a_is_expected(), v_id, v)
- }
- (&ty::ty_float(v), &ty::ty_infer(FloatVar(v_id))) => {
- unify_float_variable(this, !this.a_is_expected(), v_id, v)
+ (&ty::ty_infer(ty::FloatVar(v_id)), &ty::ty_float(v)) => {
+ unify_float_variable(infcx, a_is_expected, v_id, v)
}
-
- (&ty::ty_char, _)
- | (&ty::ty_bool, _)
- | (&ty::ty_int(_), _)
- | (&ty::ty_uint(_), _)
- | (&ty::ty_float(_), _) => {
- if a == b {
- Ok(a)
- } else {
- Err(ty::terr_sorts(expected_found(this, a, b)))
- }
+ (&ty::ty_float(v), &ty::ty_infer(ty::FloatVar(v_id))) => {
+ unify_float_variable(infcx, !a_is_expected, v_id, v)
}
- (&ty::ty_param(ref a_p), &ty::ty_param(ref b_p)) if
- a_p.idx == b_p.idx && a_p.space == b_p.space => Ok(a),
-
- (&ty::ty_enum(a_id, a_substs), &ty::ty_enum(b_id, b_substs))
- if a_id == b_id => {
- let substs = try!(this.substs(a_id, a_substs, b_substs));
- Ok(ty::mk_enum(tcx, a_id, tcx.mk_substs(substs)))
+ // All other cases of inference are errors
+ (&ty::ty_infer(_), _) |
+ (_, &ty::ty_infer(_)) => {
+ Err(ty::terr_sorts(ty_relate::expected_found(relation, &a, &b)))
}
- (&ty::ty_trait(ref a_), &ty::ty_trait(ref b_)) => {
- debug!("Trying to match traits {:?} and {:?}", a, b);
- let principal = try!(this.binders(&a_.principal, &b_.principal));
- let bounds = try!(this.existential_bounds(&a_.bounds, &b_.bounds));
- Ok(ty::mk_trait(tcx, principal, bounds))
- }
-
- (&ty::ty_struct(a_id, a_substs), &ty::ty_struct(b_id, b_substs))
- if a_id == b_id => {
- let substs = try!(this.substs(a_id, a_substs, b_substs));
- Ok(ty::mk_struct(tcx, a_id, tcx.mk_substs(substs)))
- }
-
- (&ty::ty_closure(a_id, a_substs),
- &ty::ty_closure(b_id, b_substs))
- if a_id == b_id => {
- // All ty_closure types with the same id represent
- // the (anonymous) type of the same closure expression. So
- // all of their regions should be equated.
- let substs = try!(this.substs_variances(None, a_substs, b_substs));
- Ok(ty::mk_closure(tcx, a_id, tcx.mk_substs(substs)))
- }
- (&ty::ty_uniq(a_inner), &ty::ty_uniq(b_inner)) => {
- let typ = try!(this.tys(a_inner, b_inner));
- Ok(ty::mk_uniq(tcx, typ))
- }
-
- (&ty::ty_ptr(ref a_mt), &ty::ty_ptr(ref b_mt)) => {
- let mt = try!(this.mts(a_mt, b_mt));
- Ok(ty::mk_ptr(tcx, mt))
- }
-
- (&ty::ty_rptr(a_r, ref a_mt), &ty::ty_rptr(b_r, ref b_mt)) => {
- let r = try!(this.regions_with_variance(ty::Contravariant, *a_r, *b_r));
- let mt = try!(this.mts(a_mt, b_mt));
- Ok(ty::mk_rptr(tcx, tcx.mk_region(r), mt))
- }
-
- (&ty::ty_vec(a_t, Some(sz_a)), &ty::ty_vec(b_t, Some(sz_b))) => {
- this.tys(a_t, b_t).and_then(|t| {
- if sz_a == sz_b {
- Ok(ty::mk_vec(tcx, t, Some(sz_a)))
- } else {
- Err(ty::terr_fixed_array_size(expected_found(this, sz_a, sz_b)))
- }
- })
- }
-
- (&ty::ty_vec(a_t, sz_a), &ty::ty_vec(b_t, sz_b)) => {
- this.tys(a_t, b_t).and_then(|t| {
- if sz_a == sz_b {
- Ok(ty::mk_vec(tcx, t, sz_a))
- } else {
- Err(ty::terr_sorts(expected_found(this, a, b)))
- }
- })
- }
-
- (&ty::ty_str, &ty::ty_str) => Ok(ty::mk_str(tcx)),
-
- (&ty::ty_tup(ref as_), &ty::ty_tup(ref bs)) => {
- if as_.len() == bs.len() {
- as_.iter().zip(bs.iter())
- .map(|(a, b)| this.tys(*a, *b))
- .collect::<Result<_, _>>()
- .map(|ts| ty::mk_tup(tcx, ts))
- } else if as_.len() != 0 && bs.len() != 0 {
- Err(ty::terr_tuple_size(
- expected_found(this, as_.len(), bs.len())))
- } else {
- Err(ty::terr_sorts(expected_found(this, a, b)))
- }
- }
-
- (&ty::ty_bare_fn(a_opt_def_id, a_fty), &ty::ty_bare_fn(b_opt_def_id, b_fty))
- if a_opt_def_id == b_opt_def_id =>
- {
- let fty = try!(this.bare_fn_tys(a_fty, b_fty));
- Ok(ty::mk_bare_fn(tcx, a_opt_def_id, tcx.mk_bare_fn(fty)))
- }
-
- (&ty::ty_projection(ref a_data), &ty::ty_projection(ref b_data)) => {
- let projection_ty = try!(this.projection_tys(a_data, b_data));
- Ok(ty::mk_projection(tcx, projection_ty.trait_ref, projection_ty.item_name))
- }
-
- _ => Err(ty::terr_sorts(expected_found(this, a, b))),
- };
-
- fn unify_integral_variable<'tcx, C>(this: &C,
- vid_is_expected: bool,
- vid: ty::IntVid,
- val: ty::IntVarValue)
- -> cres<'tcx, Ty<'tcx>>
- where C: Combine<'tcx> {
- try!(this.infcx().simple_var_t(vid_is_expected, vid, val));
- match val {
- IntType(v) => Ok(ty::mk_mach_int(this.tcx(), v)),
- UintType(v) => Ok(ty::mk_mach_uint(this.tcx(), v)),
+ _ => {
+ ty_relate::super_relate_tys(relation, a, b)
}
}
+}
- fn unify_float_variable<'tcx, C>(this: &C,
- vid_is_expected: bool,
- vid: ty::FloatVid,
- val: ast::FloatTy)
- -> cres<'tcx, Ty<'tcx>>
- where C: Combine<'tcx> {
- try!(this.infcx().simple_var_t(vid_is_expected, vid, val));
- Ok(ty::mk_mach_float(this.tcx(), val))
+fn unify_integral_variable<'a,'tcx>(infcx: &InferCtxt<'a,'tcx>,
+ vid_is_expected: bool,
+ vid: ty::IntVid,
+ val: ty::IntVarValue)
+ -> RelateResult<'tcx, Ty<'tcx>>
+{
+ try!(infcx
+ .int_unification_table
+ .borrow_mut()
+ .unify_var_value(vid, val)
+ .map_err(|e| int_unification_error(vid_is_expected, e)));
+ match val {
+ IntType(v) => Ok(ty::mk_mach_int(infcx.tcx, v)),
+ UintType(v) => Ok(ty::mk_mach_uint(infcx.tcx, v)),
}
}
-impl<'f, 'tcx> CombineFields<'f, 'tcx> {
- pub fn switch_expected(&self) -> CombineFields<'f, 'tcx> {
+fn unify_float_variable<'a,'tcx>(infcx: &InferCtxt<'a,'tcx>,
+ vid_is_expected: bool,
+ vid: ty::FloatVid,
+ val: ast::FloatTy)
+ -> RelateResult<'tcx, Ty<'tcx>>
+{
+ try!(infcx
+ .float_unification_table
+ .borrow_mut()
+ .unify_var_value(vid, val)
+ .map_err(|e| float_unification_error(vid_is_expected, e)));
+ Ok(ty::mk_mach_float(infcx.tcx, val))
+}
+
+impl<'a, 'tcx> CombineFields<'a, 'tcx> {
+ pub fn tcx(&self) -> &'a ty::ctxt<'tcx> {
+ self.infcx.tcx
+ }
+
+ pub fn switch_expected(&self) -> CombineFields<'a, 'tcx> {
CombineFields {
a_is_expected: !self.a_is_expected,
..(*self).clone()
}
}
- fn equate(&self) -> Equate<'f, 'tcx> {
- Equate((*self).clone())
+ pub fn equate(&self) -> Equate<'a, 'tcx> {
+ Equate::new(self.clone())
+ }
+
+ pub fn bivariate(&self) -> Bivariate<'a, 'tcx> {
+ Bivariate::new(self.clone())
+ }
+
+ pub fn sub(&self) -> Sub<'a, 'tcx> {
+ Sub::new(self.clone())
}
- fn bivariate(&self) -> Bivariate<'f, 'tcx> {
- Bivariate((*self).clone())
+ pub fn lub(&self) -> Lub<'a, 'tcx> {
+ Lub::new(self.clone())
}
- fn sub(&self) -> Sub<'f, 'tcx> {
- Sub((*self).clone())
+ pub fn glb(&self) -> Glb<'a, 'tcx> {
+ Glb::new(self.clone())
}
pub fn instantiate(&self,
a_ty: Ty<'tcx>,
dir: RelationDir,
b_vid: ty::TyVid)
- -> cres<'tcx, ()>
+ -> RelateResult<'tcx, ()>
{
let tcx = self.infcx.tcx;
let mut stack = Vec::new();
// relations wind up attributed to the same spans. We need
// to associate causes/spans with each of the relations in
// the stack to get this right.
- match dir {
- BiTo => try!(self.bivariate().tys(a_ty, b_ty)),
-
- EqTo => try!(self.equate().tys(a_ty, b_ty)),
-
- SubtypeOf => try!(self.sub().tys(a_ty, b_ty)),
-
- SupertypeOf => try!(self.sub().tys_with_variance(ty::Contravariant, a_ty, b_ty)),
- };
+ try!(match dir {
+ BiTo => self.bivariate().relate(&a_ty, &b_ty),
+ EqTo => self.equate().relate(&a_ty, &b_ty),
+ SubtypeOf => self.sub().relate(&a_ty, &b_ty),
+ SupertypeOf => self.sub().relate_with_variance(ty::Contravariant, &a_ty, &b_ty),
+ });
}
Ok(())
ty: Ty<'tcx>,
for_vid: ty::TyVid,
make_region_vars: bool)
- -> cres<'tcx, Ty<'tcx>>
+ -> RelateResult<'tcx, Ty<'tcx>>
{
let mut generalize = Generalizer {
infcx: self.infcx,
self.infcx.next_region_var(MiscVariable(self.span))
}
}
+
+pub trait RelateResultCompare<'tcx, T> {
+ fn compare<F>(&self, t: T, f: F) -> RelateResult<'tcx, T> where
+ F: FnOnce() -> ty::type_err<'tcx>;
+}
+
+impl<'tcx, T:Clone + PartialEq> RelateResultCompare<'tcx, T> for RelateResult<'tcx, T> {
+ fn compare<F>(&self, t: T, f: F) -> RelateResult<'tcx, T> where
+ F: FnOnce() -> ty::type_err<'tcx>,
+ {
+ self.clone().and_then(|s| {
+ if s == t {
+ self.clone()
+ } else {
+ Err(f())
+ }
+ })
+ }
+}
+
+fn int_unification_error<'tcx>(a_is_expected: bool, v: (ty::IntVarValue, ty::IntVarValue))
+ -> ty::type_err<'tcx>
+{
+ let (a, b) = v;
+ ty::terr_int_mismatch(ty_relate::expected_found_bool(a_is_expected, &a, &b))
+}
+
+fn float_unification_error<'tcx>(a_is_expected: bool,
+ v: (ast::FloatTy, ast::FloatTy))
+ -> ty::type_err<'tcx>
+{
+ let (a, b) = v;
+ ty::terr_float_mismatch(ty_relate::expected_found_bool(a_is_expected, &a, &b))
+}
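The `unify_integral_variable`/`unify_float_variable` rewrites above thread unification failures through a `Result` alias, using `map_err` to translate the raw variable-value conflict into the caller's error type. A standalone toy sketch of that pattern (the types here are hypothetical simplifications, not the compiler's actual `InferCtxt`, `IntVid`, or `ty::type_err` machinery):

```rust
// Simplified stand-in for the compiler's RelateResult<'tcx, T>.
type RelateResult<T> = Result<T, String>;

// Stand-in for unify_var_value: succeed unless the variable already
// has a conflicting value, in which case report the (found, wanted) pair.
fn unify_var_value(current: Option<i32>, val: i32) -> Result<i32, (i32, i32)> {
    match current {
        Some(v) if v != val => Err((v, val)),
        _ => Ok(val),
    }
}

// The pattern from the diff: unify, then map the low-level error
// into the caller's error type before returning the unified value.
fn unify_integral_variable(current: Option<i32>, val: i32) -> RelateResult<i32> {
    unify_var_value(current, val)
        .map_err(|(a, b)| format!("int mismatch: expected `{}`, found `{}`", a, b))
}
```

The diff uses `try!(...)` for the same early return that `?` performs in later Rust; the error-translation step via `map_err` is identical in both spellings.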
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+use super::combine::{self, CombineFields};
+use super::higher_ranked::HigherRankedRelations;
+use super::{Subtype};
+use super::type_variable::{EqTo};
+
use middle::ty::{self, Ty};
use middle::ty::TyVar;
-use middle::infer::combine::*;
-use middle::infer::cres;
-use middle::infer::Subtype;
-use middle::infer::type_variable::EqTo;
-use util::ppaux::Repr;
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
+use util::ppaux::{Repr};
-pub struct Equate<'f, 'tcx: 'f> {
- fields: CombineFields<'f, 'tcx>
+pub struct Equate<'a, 'tcx: 'a> {
+ fields: CombineFields<'a, 'tcx>
}
-#[allow(non_snake_case)]
-pub fn Equate<'f, 'tcx>(cf: CombineFields<'f, 'tcx>) -> Equate<'f, 'tcx> {
- Equate { fields: cf }
+impl<'a, 'tcx> Equate<'a, 'tcx> {
+ pub fn new(fields: CombineFields<'a, 'tcx>) -> Equate<'a, 'tcx> {
+ Equate { fields: fields }
+ }
}
-impl<'f, 'tcx> Combine<'tcx> for Equate<'f, 'tcx> {
- fn tag(&self) -> String { "Equate".to_string() }
- fn fields<'a>(&'a self) -> &'a CombineFields<'a, 'tcx> { &self.fields }
+impl<'a, 'tcx> TypeRelation<'a,'tcx> for Equate<'a, 'tcx> {
+ fn tag(&self) -> &'static str { "Equate" }
- fn tys_with_variance(&self, _: ty::Variance, a: Ty<'tcx>, b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
- {
- // Once we're equating, it doesn't matter what the variance is.
- self.tys(a, b)
- }
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.fields.tcx() }
- fn regions_with_variance(&self, _: ty::Variance, a: ty::Region, b: ty::Region)
- -> cres<'tcx, ty::Region>
- {
- // Once we're equating, it doesn't matter what the variance is.
- self.regions(a, b)
- }
+ fn a_is_expected(&self) -> bool { self.fields.a_is_expected }
- fn regions(&self, a: ty::Region, b: ty::Region) -> cres<'tcx, ty::Region> {
- debug!("{}.regions({}, {})",
- self.tag(),
- a.repr(self.fields.infcx.tcx),
- b.repr(self.fields.infcx.tcx));
- self.infcx().region_vars.make_eqregion(Subtype(self.trace()), a, b);
- Ok(a)
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ _: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>
+ {
+ self.relate(a, b)
}
- fn tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>> {
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
debug!("{}.tys({}, {})", self.tag(),
a.repr(self.fields.infcx.tcx), b.repr(self.fields.infcx.tcx));
if a == b { return Ok(a); }
}
_ => {
- super_tys(self, a, b)
+ combine::super_combine_tys(self.fields.infcx, self, a, b)
}
}
}
- fn binders<T>(&self, a: &ty::Binder<T>, b: &ty::Binder<T>) -> cres<'tcx, ty::Binder<T>>
- where T : Combineable<'tcx>
+ fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
+ debug!("{}.regions({}, {})",
+ self.tag(),
+ a.repr(self.fields.infcx.tcx),
+ b.repr(self.fields.infcx.tcx));
+ let origin = Subtype(self.fields.trace.clone());
+ self.fields.infcx.region_vars.make_eqregion(origin, a, b);
+ Ok(a)
+ }
+
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a, 'tcx>
{
- try!(self.sub().binders(a, b));
- self.sub().binders(b, a)
+ try!(self.fields.higher_ranked_sub(a, b));
+ self.fields.higher_ranked_sub(b, a)
}
}
use std::collections::hash_map::{self, Entry};
use super::InferCtxt;
-use super::unify::InferCtxtMethodsForSimplyUnifiableTypes;
+use super::unify::ToType;
pub struct TypeFreshener<'a, 'tcx:'a> {
infcx: &'a InferCtxt<'a, 'tcx>,
}
fn fold_ty(&mut self, t: Ty<'tcx>) -> Ty<'tcx> {
+ let tcx = self.infcx.tcx;
+
match t.sty {
ty::ty_infer(ty::TyVar(v)) => {
- self.freshen(self.infcx.type_variables.borrow().probe(v),
- ty::TyVar(v),
- ty::FreshTy)
+ self.freshen(
+ self.infcx.type_variables.borrow().probe(v),
+ ty::TyVar(v),
+ ty::FreshTy)
}
ty::ty_infer(ty::IntVar(v)) => {
- self.freshen(self.infcx.probe_var(v),
- ty::IntVar(v),
- ty::FreshIntTy)
+ self.freshen(
+ self.infcx.int_unification_table.borrow_mut()
+ .probe(v)
+ .map(|v| v.to_type(tcx)),
+ ty::IntVar(v),
+ ty::FreshIntTy)
}
ty::ty_infer(ty::FloatVar(v)) => {
- self.freshen(self.infcx.probe_var(v),
- ty::FloatVar(v),
- ty::FreshIntTy)
+ self.freshen(
+ self.infcx.float_unification_table.borrow_mut()
+ .probe(v)
+ .map(|v| v.to_type(tcx)),
+ ty::FloatVar(v),
+ ty::FreshIntTy)
}
ty::ty_infer(ty::FreshTy(c)) |
ty::ty_infer(ty::FreshIntTy(c)) => {
if c >= self.freshen_count {
- self.tcx().sess.bug(
+ tcx.sess.bug(
                 &format!("Encountered a freshened type with id {} \
but our counter is only at {}",
c,
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use super::combine::*;
-use super::lattice::*;
+use super::combine::CombineFields;
use super::higher_ranked::HigherRankedRelations;
-use super::cres;
+use super::InferCtxt;
+use super::lattice::{self, LatticeDir};
use super::Subtype;
use middle::ty::{self, Ty};
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
use util::ppaux::Repr;
/// "Greatest lower bound" (common subtype)
-pub struct Glb<'f, 'tcx: 'f> {
- fields: CombineFields<'f, 'tcx>
+pub struct Glb<'a, 'tcx: 'a> {
+ fields: CombineFields<'a, 'tcx>
}
-#[allow(non_snake_case)]
-pub fn Glb<'f, 'tcx>(cf: CombineFields<'f, 'tcx>) -> Glb<'f, 'tcx> {
- Glb { fields: cf }
+impl<'a, 'tcx> Glb<'a, 'tcx> {
+ pub fn new(fields: CombineFields<'a, 'tcx>) -> Glb<'a, 'tcx> {
+ Glb { fields: fields }
+ }
}
-impl<'f, 'tcx> Combine<'tcx> for Glb<'f, 'tcx> {
- fn tag(&self) -> String { "Glb".to_string() }
- fn fields<'a>(&'a self) -> &'a CombineFields<'a, 'tcx> { &self.fields }
+impl<'a, 'tcx> TypeRelation<'a, 'tcx> for Glb<'a, 'tcx> {
+ fn tag(&self) -> &'static str { "Glb" }
+
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.fields.tcx() }
+
+ fn a_is_expected(&self) -> bool { self.fields.a_is_expected }
- fn tys_with_variance(&self, v: ty::Variance, a: Ty<'tcx>, b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ variance: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>
{
- match v {
- ty::Invariant => self.equate().tys(a, b),
- ty::Covariant => self.tys(a, b),
- ty::Bivariant => self.bivariate().tys(a, b),
- ty::Contravariant => self.lub().tys(a, b),
+ match variance {
+ ty::Invariant => self.fields.equate().relate(a, b),
+ ty::Covariant => self.relate(a, b),
+ ty::Bivariant => self.fields.bivariate().relate(a, b),
+ ty::Contravariant => self.fields.lub().relate(a, b),
}
}
- fn regions_with_variance(&self, v: ty::Variance, a: ty::Region, b: ty::Region)
- -> cres<'tcx, ty::Region>
- {
- match v {
- ty::Invariant => self.equate().regions(a, b),
- ty::Covariant => self.regions(a, b),
- ty::Bivariant => self.bivariate().regions(a, b),
- ty::Contravariant => self.lub().regions(a, b),
- }
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
+ lattice::super_lattice_tys(self, a, b)
}
- fn regions(&self, a: ty::Region, b: ty::Region) -> cres<'tcx, ty::Region> {
+ fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
debug!("{}.regions({}, {})",
self.tag(),
a.repr(self.fields.infcx.tcx),
b.repr(self.fields.infcx.tcx));
- Ok(self.fields.infcx.region_vars.glb_regions(Subtype(self.trace()), a, b))
+ let origin = Subtype(self.fields.trace.clone());
+ Ok(self.fields.infcx.region_vars.glb_regions(origin, a, b))
}
- fn tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>> {
- super_lattice_tys(self, a, b)
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a, 'tcx>
+ {
+ self.fields.higher_ranked_glb(a, b)
}
+}
- fn binders<T>(&self, a: &ty::Binder<T>, b: &ty::Binder<T>) -> cres<'tcx, ty::Binder<T>>
- where T : Combineable<'tcx>
- {
- self.higher_ranked_glb(a, b)
+impl<'a, 'tcx> LatticeDir<'a,'tcx> for Glb<'a, 'tcx> {
+ fn infcx(&self) -> &'a InferCtxt<'a,'tcx> {
+ self.fields.infcx
+ }
+
+ fn relate_bound(&self, v: Ty<'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, ()> {
+ let mut sub = self.fields.sub();
+ try!(sub.relate(&v, &a));
+ try!(sub.relate(&v, &b));
+ Ok(())
}
}
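The `LatticeDir::relate_bound` impls above encode the lattice contract: for a GLB the fresh bound `v` must be a subtype of both `a` and `b`, while for a LUB (see the `Lub` impl later in this patch) both `a` and `b` must be subtypes of `v`. A toy sketch of that contract, modeling "subtype" as `<=` on integers rather than the compiler's real `Sub` relation:

```rust
// Hypothetical stand-in for the sub-relation: a <: b iff a <= b.
fn sub(a: i32, b: i32) -> Result<(), String> {
    if a <= b { Ok(()) } else { Err(format!("{} is not a subtype of {}", a, b)) }
}

// GLB direction: the bound must sit below both inputs (v <: a, v <: b).
fn glb_relate_bound(v: i32, a: i32, b: i32) -> Result<(), String> {
    sub(v, a)?;
    sub(v, b)?;
    Ok(())
}

// LUB direction: both inputs must sit below the bound (a <: v, b <: v).
fn lub_relate_bound(v: i32, a: i32, b: i32) -> Result<(), String> {
    sub(a, v)?;
    sub(b, v)?;
    Ok(())
}
```

Under this toy ordering the GLB of 2 and 3 is their minimum and the LUB their maximum, which is exactly what the two directions of `relate_bound` admit.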
//! Helper routines for higher-ranked things. See the `doc` module at
//! the end of the file for details.
-use super::{CombinedSnapshot, cres, InferCtxt, HigherRankedType, SkolemizationMap};
-use super::combine::{Combine, Combineable};
+use super::{CombinedSnapshot, InferCtxt, HigherRankedType, SkolemizationMap};
+use super::combine::CombineFields;
use middle::subst;
use middle::ty::{self, Binder};
use middle::ty_fold::{self, TypeFoldable};
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
use syntax::codemap::Span;
use util::nodemap::{FnvHashMap, FnvHashSet};
use util::ppaux::Repr;
-pub trait HigherRankedRelations<'tcx> {
- fn higher_ranked_sub<T>(&self, a: &Binder<T>, b: &Binder<T>) -> cres<'tcx, Binder<T>>
- where T : Combineable<'tcx>;
+pub trait HigherRankedRelations<'a,'tcx> {
+ fn higher_ranked_sub<T>(&self, a: &Binder<T>, b: &Binder<T>) -> RelateResult<'tcx, Binder<T>>
+ where T: Relate<'a,'tcx>;
- fn higher_ranked_lub<T>(&self, a: &Binder<T>, b: &Binder<T>) -> cres<'tcx, Binder<T>>
- where T : Combineable<'tcx>;
+ fn higher_ranked_lub<T>(&self, a: &Binder<T>, b: &Binder<T>) -> RelateResult<'tcx, Binder<T>>
+ where T: Relate<'a,'tcx>;
- fn higher_ranked_glb<T>(&self, a: &Binder<T>, b: &Binder<T>) -> cres<'tcx, Binder<T>>
- where T : Combineable<'tcx>;
+ fn higher_ranked_glb<T>(&self, a: &Binder<T>, b: &Binder<T>) -> RelateResult<'tcx, Binder<T>>
+ where T: Relate<'a,'tcx>;
}
trait InferCtxtExt {
-> Vec<ty::RegionVid>;
}
-impl<'tcx,C> HigherRankedRelations<'tcx> for C
- where C : Combine<'tcx>
-{
+impl<'a,'tcx> HigherRankedRelations<'a,'tcx> for CombineFields<'a,'tcx> {
fn higher_ranked_sub<T>(&self, a: &Binder<T>, b: &Binder<T>)
- -> cres<'tcx, Binder<T>>
- where T : Combineable<'tcx>
+ -> RelateResult<'tcx, Binder<T>>
+ where T: Relate<'a,'tcx>
{
+ let tcx = self.infcx.tcx;
+
debug!("higher_ranked_sub(a={}, b={})",
- a.repr(self.tcx()), b.repr(self.tcx()));
+ a.repr(tcx), b.repr(tcx));
// Rather than checking the subtype relationship between `a` and `b`
// as-is, we need to do some extra work here in order to make sure
// Start a snapshot so we can examine "all bindings that were
// created as part of this type comparison".
- return self.infcx().try(|snapshot| {
+ return self.infcx.commit_if_ok(|snapshot| {
// First, we instantiate each bound region in the subtype with a fresh
// region variable.
let (a_prime, _) =
- self.infcx().replace_late_bound_regions_with_fresh_var(
- self.trace().origin.span(),
+ self.infcx.replace_late_bound_regions_with_fresh_var(
+ self.trace.origin.span(),
HigherRankedType,
a);
// Second, we instantiate each bound region in the supertype with a
// fresh concrete region.
let (b_prime, skol_map) =
- self.infcx().skolemize_late_bound_regions(b, snapshot);
+ self.infcx.skolemize_late_bound_regions(b, snapshot);
- debug!("a_prime={}", a_prime.repr(self.tcx()));
- debug!("b_prime={}", b_prime.repr(self.tcx()));
+ debug!("a_prime={}", a_prime.repr(tcx));
+ debug!("b_prime={}", b_prime.repr(tcx));
// Compare types now that bound regions have been replaced.
- let result = try!(Combineable::combine(self, &a_prime, &b_prime));
+ let result = try!(self.sub().relate(&a_prime, &b_prime));
// Presuming type comparison succeeds, we need to check
// that the skolemized regions do not "leak".
- match leak_check(self.infcx(), &skol_map, snapshot) {
+ match leak_check(self.infcx, &skol_map, snapshot) {
Ok(()) => { }
Err((skol_br, tainted_region)) => {
- if self.a_is_expected() {
+ if self.a_is_expected {
debug!("Not as polymorphic!");
return Err(ty::terr_regions_insufficiently_polymorphic(skol_br,
tainted_region));
}
debug!("higher_ranked_sub: OK result={}",
- result.repr(self.tcx()));
+ result.repr(tcx));
Ok(ty::Binder(result))
});
}
- fn higher_ranked_lub<T>(&self, a: &Binder<T>, b: &Binder<T>) -> cres<'tcx, Binder<T>>
- where T : Combineable<'tcx>
+ fn higher_ranked_lub<T>(&self, a: &Binder<T>, b: &Binder<T>) -> RelateResult<'tcx, Binder<T>>
+ where T: Relate<'a,'tcx>
{
// Start a snapshot so we can examine "all bindings that were
// created as part of this type comparison".
- return self.infcx().try(|snapshot| {
+ return self.infcx.commit_if_ok(|snapshot| {
// Instantiate each bound region with a fresh region variable.
- let span = self.trace().origin.span();
+ let span = self.trace.origin.span();
let (a_with_fresh, a_map) =
- self.infcx().replace_late_bound_regions_with_fresh_var(
+ self.infcx.replace_late_bound_regions_with_fresh_var(
span, HigherRankedType, a);
let (b_with_fresh, _) =
- self.infcx().replace_late_bound_regions_with_fresh_var(
+ self.infcx.replace_late_bound_regions_with_fresh_var(
span, HigherRankedType, b);
// Collect constraints.
let result0 =
- try!(Combineable::combine(self, &a_with_fresh, &b_with_fresh));
+ try!(self.lub().relate(&a_with_fresh, &b_with_fresh));
let result0 =
- self.infcx().resolve_type_vars_if_possible(&result0);
+ self.infcx.resolve_type_vars_if_possible(&result0);
debug!("lub result0 = {}", result0.repr(self.tcx()));
// Generalize the regions appearing in result0 if possible
- let new_vars = self.infcx().region_vars_confined_to_snapshot(snapshot);
- let span = self.trace().origin.span();
+ let new_vars = self.infcx.region_vars_confined_to_snapshot(snapshot);
+ let span = self.trace.origin.span();
let result1 =
fold_regions_in(
self.tcx(),
&result0,
- |r, debruijn| generalize_region(self.infcx(), span, snapshot, debruijn,
+ |r, debruijn| generalize_region(self.infcx, span, snapshot, debruijn,
&new_vars, &a_map, r));
debug!("lub({},{}) = {}",
}
}
- fn higher_ranked_glb<T>(&self, a: &Binder<T>, b: &Binder<T>) -> cres<'tcx, Binder<T>>
- where T : Combineable<'tcx>
+ fn higher_ranked_glb<T>(&self, a: &Binder<T>, b: &Binder<T>) -> RelateResult<'tcx, Binder<T>>
+ where T: Relate<'a,'tcx>
{
- debug!("{}.higher_ranked_glb({}, {})",
- self.tag(), a.repr(self.tcx()), b.repr(self.tcx()));
+ debug!("higher_ranked_glb({}, {})",
+ a.repr(self.tcx()), b.repr(self.tcx()));
// Make a snapshot so we can examine "all bindings that were
// created as part of this type comparison".
- return self.infcx().try(|snapshot| {
+ return self.infcx.commit_if_ok(|snapshot| {
// Instantiate each bound region with a fresh region variable.
let (a_with_fresh, a_map) =
- self.infcx().replace_late_bound_regions_with_fresh_var(
- self.trace().origin.span(), HigherRankedType, a);
+ self.infcx.replace_late_bound_regions_with_fresh_var(
+ self.trace.origin.span(), HigherRankedType, a);
let (b_with_fresh, b_map) =
- self.infcx().replace_late_bound_regions_with_fresh_var(
- self.trace().origin.span(), HigherRankedType, b);
+ self.infcx.replace_late_bound_regions_with_fresh_var(
+ self.trace.origin.span(), HigherRankedType, b);
let a_vars = var_ids(self, &a_map);
let b_vars = var_ids(self, &b_map);
// Collect constraints.
let result0 =
- try!(Combineable::combine(self, &a_with_fresh, &b_with_fresh));
+ try!(self.glb().relate(&a_with_fresh, &b_with_fresh));
let result0 =
- self.infcx().resolve_type_vars_if_possible(&result0);
+ self.infcx.resolve_type_vars_if_possible(&result0);
debug!("glb result0 = {}", result0.repr(self.tcx()));
// Generalize the regions appearing in result0 if possible
- let new_vars = self.infcx().region_vars_confined_to_snapshot(snapshot);
- let span = self.trace().origin.span();
+ let new_vars = self.infcx.region_vars_confined_to_snapshot(snapshot);
+ let span = self.trace.origin.span();
let result1 =
fold_regions_in(
self.tcx(),
&result0,
- |r, debruijn| generalize_region(self.infcx(), span, snapshot, debruijn,
+ |r, debruijn| generalize_region(self.infcx, span, snapshot, debruijn,
&new_vars,
&a_map, &a_vars, &b_vars,
r));
}
}
-fn var_ids<'tcx, T: Combine<'tcx>>(combiner: &T,
- map: &FnvHashMap<ty::BoundRegion, ty::Region>)
- -> Vec<ty::RegionVid> {
- map.iter().map(|(_, r)| match *r {
- ty::ReInfer(ty::ReVar(r)) => { r }
- r => {
- combiner.infcx().tcx.sess.span_bug(
- combiner.trace().origin.span(),
- &format!("found non-region-vid: {:?}", r));
- }
- }).collect()
+fn var_ids<'a, 'tcx>(fields: &CombineFields<'a, 'tcx>,
+ map: &FnvHashMap<ty::BoundRegion, ty::Region>)
+ -> Vec<ty::RegionVid> {
+ map.iter()
+ .map(|(_, r)| match *r {
+ ty::ReInfer(ty::ReVar(r)) => { r }
+ r => {
+ fields.tcx().sess.span_bug(
+ fields.trace.origin.span(),
+ &format!("found non-region-vid: {:?}", r));
+ }
+ })
+ .collect()
}
fn is_var_in_set(new_vars: &[ty::RegionVid], r: ty::Region) -> bool {
unbound_value: &T,
mut fldr: F)
-> T
- where T : Combineable<'tcx>,
- F : FnMut(ty::Region, ty::DebruijnIndex) -> ty::Region,
+ where T: TypeFoldable<'tcx>,
+ F: FnMut(ty::Region, ty::DebruijnIndex) -> ty::Region,
{
unbound_value.fold_with(&mut ty_fold::RegionFolder::new(tcx, &mut |region, current_depth| {
// we should only be encountering "escaping" late-bound regions here,
//! over a `LatticeValue`, which is a value defined with respect to
//! a lattice.
-use super::*;
-use super::combine::*;
-use super::glb::Glb;
-use super::lub::Lub;
+use super::combine;
+use super::InferCtxt;
use middle::ty::TyVar;
use middle::ty::{self, Ty};
+use middle::ty_relate::{RelateResult, TypeRelation};
use util::ppaux::Repr;
-pub trait LatticeDir<'tcx> {
+pub trait LatticeDir<'f,'tcx> : TypeRelation<'f,'tcx> {
+ fn infcx(&self) -> &'f InferCtxt<'f, 'tcx>;
+
// Relates the type `v` to `a` and `b` such that `v` represents
// the LUB/GLB of `a` and `b` as appropriate.
- fn relate_bound(&self, v: Ty<'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, ()>;
-}
-
-impl<'a, 'tcx> LatticeDir<'tcx> for Lub<'a, 'tcx> {
- fn relate_bound(&self, v: Ty<'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, ()> {
- let sub = self.sub();
- try!(sub.tys(a, v));
- try!(sub.tys(b, v));
- Ok(())
- }
-}
-
-impl<'a, 'tcx> LatticeDir<'tcx> for Glb<'a, 'tcx> {
- fn relate_bound(&self, v: Ty<'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, ()> {
- let sub = self.sub();
- try!(sub.tys(v, a));
- try!(sub.tys(v, b));
- Ok(())
- }
+ fn relate_bound(&self, v: Ty<'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, ()>;
}
-pub fn super_lattice_tys<'tcx, L:LatticeDir<'tcx>+Combine<'tcx>>(this: &L,
- a: Ty<'tcx>,
- b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
+pub fn super_lattice_tys<'a,'tcx,L:LatticeDir<'a,'tcx>>(this: &mut L,
+ a: Ty<'tcx>,
+ b: Ty<'tcx>)
+ -> RelateResult<'tcx, Ty<'tcx>>
+ where 'tcx: 'a
{
debug!("{}.lattice_tys({}, {})",
this.tag(),
- a.repr(this.infcx().tcx),
- b.repr(this.infcx().tcx));
+ a.repr(this.tcx()),
+ b.repr(this.tcx()));
if a == b {
return Ok(a);
}
_ => {
- super_tys(this, a, b)
+ combine::super_combine_tys(this.infcx(), this, a, b)
}
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use super::combine::*;
+use super::combine::CombineFields;
use super::higher_ranked::HigherRankedRelations;
-use super::lattice::*;
-use super::cres;
+use super::InferCtxt;
+use super::lattice::{self, LatticeDir};
use super::Subtype;
use middle::ty::{self, Ty};
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
use util::ppaux::Repr;
/// "Least upper bound" (common supertype)
-pub struct Lub<'f, 'tcx: 'f> {
- fields: CombineFields<'f, 'tcx>
+pub struct Lub<'a, 'tcx: 'a> {
+ fields: CombineFields<'a, 'tcx>
}
-#[allow(non_snake_case)]
-pub fn Lub<'f, 'tcx>(cf: CombineFields<'f, 'tcx>) -> Lub<'f, 'tcx> {
- Lub { fields: cf }
+impl<'a, 'tcx> Lub<'a, 'tcx> {
+ pub fn new(fields: CombineFields<'a, 'tcx>) -> Lub<'a, 'tcx> {
+ Lub { fields: fields }
+ }
}
-impl<'f, 'tcx> Combine<'tcx> for Lub<'f, 'tcx> {
- fn tag(&self) -> String { "Lub".to_string() }
- fn fields<'a>(&'a self) -> &'a CombineFields<'a, 'tcx> { &self.fields }
+impl<'a, 'tcx> TypeRelation<'a, 'tcx> for Lub<'a, 'tcx> {
+ fn tag(&self) -> &'static str { "Lub" }
+
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.fields.tcx() }
+
+ fn a_is_expected(&self) -> bool { self.fields.a_is_expected }
- fn tys_with_variance(&self, v: ty::Variance, a: Ty<'tcx>, b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ variance: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>
{
- match v {
- ty::Invariant => self.equate().tys(a, b),
- ty::Covariant => self.tys(a, b),
- ty::Bivariant => self.bivariate().tys(a, b),
- ty::Contravariant => self.glb().tys(a, b),
+ match variance {
+ ty::Invariant => self.fields.equate().relate(a, b),
+ ty::Covariant => self.relate(a, b),
+ ty::Bivariant => self.fields.bivariate().relate(a, b),
+ ty::Contravariant => self.fields.glb().relate(a, b),
}
}
- fn regions_with_variance(&self, v: ty::Variance, a: ty::Region, b: ty::Region)
- -> cres<'tcx, ty::Region>
- {
- match v {
- ty::Invariant => self.equate().regions(a, b),
- ty::Covariant => self.regions(a, b),
- ty::Bivariant => self.bivariate().regions(a, b),
- ty::Contravariant => self.glb().regions(a, b),
- }
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
+ lattice::super_lattice_tys(self, a, b)
}
- fn regions(&self, a: ty::Region, b: ty::Region) -> cres<'tcx, ty::Region> {
+ fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
debug!("{}.regions({}, {})",
self.tag(),
a.repr(self.tcx()),
b.repr(self.tcx()));
- Ok(self.infcx().region_vars.lub_regions(Subtype(self.trace()), a, b))
+ let origin = Subtype(self.fields.trace.clone());
+ Ok(self.fields.infcx.region_vars.lub_regions(origin, a, b))
}
- fn tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>> {
- super_lattice_tys(self, a, b)
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a, 'tcx>
+ {
+ self.fields.higher_ranked_lub(a, b)
}
+}
- fn binders<T>(&self, a: &ty::Binder<T>, b: &ty::Binder<T>) -> cres<'tcx, ty::Binder<T>>
- where T : Combineable<'tcx>
- {
- self.higher_ranked_lub(a, b)
+impl<'a, 'tcx> LatticeDir<'a,'tcx> for Lub<'a, 'tcx> {
+ fn infcx(&self) -> &'a InferCtxt<'a,'tcx> {
+ self.fields.infcx
+ }
+
+ fn relate_bound(&self, v: Ty<'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, ()> {
+ let mut sub = self.fields.sub();
+ try!(sub.relate(&a, &v));
+ try!(sub.relate(&b, &v));
+ Ok(())
}
}
+
use middle::ty::replace_late_bound_regions;
use middle::ty::{self, Ty};
use middle::ty_fold::{TypeFolder, TypeFoldable};
-use std::cell::RefCell;
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
+use std::cell::{RefCell};
use std::fmt;
use std::rc::Rc;
use syntax::ast;
use util::ppaux::ty_to_string;
use util::ppaux::{Repr, UserString};
-use self::combine::{Combine, Combineable, CombineFields};
+use self::combine::CombineFields;
use self::region_inference::{RegionVarBindings, RegionSnapshot};
-use self::equate::Equate;
-use self::sub::Sub;
-use self::lub::Lub;
-use self::unify::{UnificationTable, InferCtxtMethodsForSimplyUnifiableTypes};
+use self::unify::{ToType, UnificationTable};
use self::error_reporting::ErrorReporting;
pub mod bivariate;
pub mod unify;
pub type Bound<T> = Option<T>;
-
-pub type cres<'tcx, T> = Result<T,ty::type_err<'tcx>>; // "combine result"
-pub type ures<'tcx> = cres<'tcx, ()>; // "unify result"
+pub type UnitResult<'tcx> = RelateResult<'tcx, ()>; // "unify result"
pub type fres<T> = Result<T, fixup_err>; // "fixup result"
pub struct InferCtxt<'a, 'tcx: 'a> {
///
/// See `error_reporting.rs` for more details
#[derive(Clone, Debug)]
-pub enum RegionVariableOrigin<'tcx> {
+pub enum RegionVariableOrigin {
// Region variables created for ill-categorized reasons,
// mostly indicates places in need of refactoring
MiscVariable(Span),
Autoref(Span),
// Regions created as part of an automatic coercion
- Coercion(TypeTrace<'tcx>),
+ Coercion(Span),
// Region variables created as the values for early-bound regions
EarlyBoundRegion(Span, ast::Name),
values: Types(expected_found(a_is_expected, a, b))
};
- let result =
- cx.commit_if_ok(|| cx.lub(a_is_expected, trace.clone()).tys(a, b));
+ let result = cx.commit_if_ok(|_| cx.lub(a_is_expected, trace.clone()).relate(&a, &b));
match result {
Ok(t) => t,
Err(ref err) => {
origin: TypeOrigin,
a: Ty<'tcx>,
b: Ty<'tcx>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
debug!("mk_subty({} <: {})", a.repr(cx.tcx), b.repr(cx.tcx));
- cx.commit_if_ok(|| {
- cx.sub_types(a_is_expected, origin, a, b)
- })
+ cx.sub_types(a_is_expected, origin, a, b)
}
pub fn can_mk_subty<'a, 'tcx>(cx: &InferCtxt<'a, 'tcx>,
a: Ty<'tcx>,
b: Ty<'tcx>)
- -> ures<'tcx> {
+ -> UnitResult<'tcx> {
debug!("can_mk_subty({} <: {})", a.repr(cx.tcx), b.repr(cx.tcx));
cx.probe(|_| {
let trace = TypeTrace {
origin: Misc(codemap::DUMMY_SP),
values: Types(expected_found(true, a, b))
};
- cx.sub(true, trace).tys(a, b).to_ures()
+ cx.sub(true, trace).relate(&a, &b).map(|_| ())
})
}
-pub fn can_mk_eqty<'a, 'tcx>(cx: &InferCtxt<'a, 'tcx>, a: Ty<'tcx>, b: Ty<'tcx>) -> ures<'tcx>
+pub fn can_mk_eqty<'a, 'tcx>(cx: &InferCtxt<'a, 'tcx>, a: Ty<'tcx>, b: Ty<'tcx>)
+ -> UnitResult<'tcx>
{
cx.can_equate(&a, &b)
}
origin: TypeOrigin,
a: Ty<'tcx>,
b: Ty<'tcx>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
debug!("mk_eqty({} <: {})", a.repr(cx.tcx), b.repr(cx.tcx));
- cx.commit_if_ok(
- || cx.eq_types(a_is_expected, origin, a, b))
+ cx.commit_if_ok(|_| cx.eq_types(a_is_expected, origin, a, b))
}
pub fn mk_sub_poly_trait_refs<'a, 'tcx>(cx: &InferCtxt<'a, 'tcx>,
origin: TypeOrigin,
a: ty::PolyTraitRef<'tcx>,
b: ty::PolyTraitRef<'tcx>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
debug!("mk_sub_trait_refs({} <: {})",
a.repr(cx.tcx), b.repr(cx.tcx));
- cx.commit_if_ok(
- || cx.sub_poly_trait_refs(a_is_expected, origin, a.clone(), b.clone()))
+ cx.commit_if_ok(|_| cx.sub_poly_trait_refs(a_is_expected, origin, a.clone(), b.clone()))
}
fn expected_found<T>(a_is_expected: bool,
}
}
-trait then<'tcx> {
- fn then<T, F>(&self, f: F) -> Result<T, ty::type_err<'tcx>> where
- T: Clone,
- F: FnOnce() -> Result<T, ty::type_err<'tcx>>;
-}
-
-impl<'tcx> then<'tcx> for ures<'tcx> {
- fn then<T, F>(&self, f: F) -> Result<T, ty::type_err<'tcx>> where
- T: Clone,
- F: FnOnce() -> Result<T, ty::type_err<'tcx>>,
- {
- self.and_then(move |_| f())
- }
-}
-
-trait ToUres<'tcx> {
- fn to_ures(&self) -> ures<'tcx>;
-}
-
-impl<'tcx, T> ToUres<'tcx> for cres<'tcx, T> {
- fn to_ures(&self) -> ures<'tcx> {
- match *self {
- Ok(ref _v) => Ok(()),
- Err(ref e) => Err((*e))
- }
- }
-}
-
-trait CresCompare<'tcx, T> {
- fn compare<F>(&self, t: T, f: F) -> cres<'tcx, T> where
- F: FnOnce() -> ty::type_err<'tcx>;
-}
-
-impl<'tcx, T:Clone + PartialEq> CresCompare<'tcx, T> for cres<'tcx, T> {
- fn compare<F>(&self, t: T, f: F) -> cres<'tcx, T> where
- F: FnOnce() -> ty::type_err<'tcx>,
- {
- (*self).clone().and_then(move |s| {
- if s == t {
- (*self).clone()
- } else {
- Err(f())
- }
- })
- }
-}
-
-pub fn uok<'tcx>() -> ures<'tcx> {
- Ok(())
-}
-
#[must_use = "once you start a snapshot, you should always consume it"]
pub struct CombinedSnapshot {
type_snapshot: type_variable::Snapshot,
use middle::ty::UnconstrainedNumeric::{Neither, UnconstrainedInt, UnconstrainedFloat};
match ty.sty {
ty::ty_infer(ty::IntVar(vid)) => {
- match self.int_unification_table.borrow_mut().get(self.tcx, vid).value {
- None => UnconstrainedInt,
- _ => Neither,
+ if self.int_unification_table.borrow_mut().has_value(vid) {
+ Neither
+ } else {
+ UnconstrainedInt
}
},
ty::ty_infer(ty::FloatVar(vid)) => {
- match self.float_unification_table.borrow_mut().get(self.tcx, vid).value {
- None => return UnconstrainedFloat,
- _ => Neither,
+ if self.float_unification_table.borrow_mut().has_value(vid) {
+ Neither
+ } else {
+ UnconstrainedFloat
}
},
_ => Neither,
}
}
- pub fn combine_fields<'b>(&'b self, a_is_expected: bool, trace: TypeTrace<'tcx>)
- -> CombineFields<'b, 'tcx> {
+ fn combine_fields(&'a self, a_is_expected: bool, trace: TypeTrace<'tcx>)
+ -> CombineFields<'a, 'tcx> {
CombineFields {infcx: self,
a_is_expected: a_is_expected,
trace: trace}
}
- pub fn equate<'b>(&'b self, a_is_expected: bool, trace: TypeTrace<'tcx>)
- -> Equate<'b, 'tcx> {
- Equate(self.combine_fields(a_is_expected, trace))
+ // public so that it can be used from the rustc_driver unit tests
+ pub fn equate(&'a self, a_is_expected: bool, trace: TypeTrace<'tcx>)
+ -> equate::Equate<'a, 'tcx>
+ {
+ self.combine_fields(a_is_expected, trace).equate()
}
- pub fn sub<'b>(&'b self, a_is_expected: bool, trace: TypeTrace<'tcx>)
- -> Sub<'b, 'tcx> {
- Sub(self.combine_fields(a_is_expected, trace))
+ // public so that it can be used from the rustc_driver unit tests
+ pub fn sub(&'a self, a_is_expected: bool, trace: TypeTrace<'tcx>)
+ -> sub::Sub<'a, 'tcx>
+ {
+ self.combine_fields(a_is_expected, trace).sub()
}
- pub fn lub<'b>(&'b self, a_is_expected: bool, trace: TypeTrace<'tcx>)
- -> Lub<'b, 'tcx> {
- Lub(self.combine_fields(a_is_expected, trace))
+ // public so that it can be used from the rustc_driver unit tests
+ pub fn lub(&'a self, a_is_expected: bool, trace: TypeTrace<'tcx>)
+ -> lub::Lub<'a, 'tcx>
+ {
+ self.combine_fields(a_is_expected, trace).lub()
+ }
+
+ // public so that it can be used from the rustc_driver unit tests
+ pub fn glb(&'a self, a_is_expected: bool, trace: TypeTrace<'tcx>)
+ -> glb::Glb<'a, 'tcx>
+ {
+ self.combine_fields(a_is_expected, trace).glb()
}
fn start_snapshot(&self) -> CombinedSnapshot {
r
}
- /// Execute `f` and commit the bindings if successful
+ /// Execute `f` and commit the bindings if closure `f` returns `Ok(_)`
pub fn commit_if_ok<T, E, F>(&self, f: F) -> Result<T, E> where
- F: FnOnce() -> Result<T, E>
+ F: FnOnce(&CombinedSnapshot) -> Result<T, E>
{
- self.commit_unconditionally(move || self.try(move |_| f()))
+ debug!("commit_if_ok()");
+ let snapshot = self.start_snapshot();
+ let r = f(&snapshot);
+ debug!("commit_if_ok() -- r.is_ok() = {}", r.is_ok());
+ match r {
+ Ok(_) => { self.commit_from(snapshot); }
+ Err(_) => { self.rollback_to(snapshot); }
+ }
+ r
}
/// Execute `f` and commit only the region bindings if successful.
float_snapshot,
region_vars_snapshot } = self.start_snapshot();
- let r = self.try(move |_| f());
+ let r = self.commit_if_ok(|_| f());
// Roll back any non-region bindings - they should be resolved
// inside `f`, with, e.g. `resolve_type_vars_if_possible`.
r
}
- /// Execute `f`, unroll bindings on panic
- pub fn try<T, E, F>(&self, f: F) -> Result<T, E> where
- F: FnOnce(&CombinedSnapshot) -> Result<T, E>
- {
- debug!("try()");
- let snapshot = self.start_snapshot();
- let r = f(&snapshot);
- debug!("try() -- r.is_ok() = {}", r.is_ok());
- match r {
- Ok(_) => {
- self.commit_from(snapshot);
- }
- Err(_) => {
- self.rollback_to(snapshot);
- }
- }
- r
- }
-
/// Execute `f` then unroll any bindings it creates
pub fn probe<R, F>(&self, f: F) -> R where
F: FnOnce(&CombinedSnapshot) -> R,
origin: TypeOrigin,
a: Ty<'tcx>,
b: Ty<'tcx>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
debug!("sub_types({} <: {})", a.repr(self.tcx), b.repr(self.tcx));
- self.commit_if_ok(|| {
+ self.commit_if_ok(|_| {
let trace = TypeTrace::types(origin, a_is_expected, a, b);
- self.sub(a_is_expected, trace).tys(a, b).to_ures()
+ self.sub(a_is_expected, trace).relate(&a, &b).map(|_| ())
})
}
origin: TypeOrigin,
a: Ty<'tcx>,
b: Ty<'tcx>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
- self.commit_if_ok(|| {
+ self.commit_if_ok(|_| {
let trace = TypeTrace::types(origin, a_is_expected, a, b);
- self.equate(a_is_expected, trace).tys(a, b).to_ures()
+ self.equate(a_is_expected, trace).relate(&a, &b).map(|_| ())
})
}
origin: TypeOrigin,
a: Rc<ty::TraitRef<'tcx>>,
b: Rc<ty::TraitRef<'tcx>>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
debug!("sub_trait_refs({} <: {})",
a.repr(self.tcx),
b.repr(self.tcx));
- self.commit_if_ok(|| {
+ self.commit_if_ok(|_| {
let trace = TypeTrace {
origin: origin,
values: TraitRefs(expected_found(a_is_expected, a.clone(), b.clone()))
};
- self.sub(a_is_expected, trace).trait_refs(&*a, &*b).to_ures()
+ self.sub(a_is_expected, trace).relate(&*a, &*b).map(|_| ())
})
}
origin: TypeOrigin,
a: ty::PolyTraitRef<'tcx>,
b: ty::PolyTraitRef<'tcx>)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
debug!("sub_poly_trait_refs({} <: {})",
a.repr(self.tcx),
b.repr(self.tcx));
- self.commit_if_ok(|| {
+ self.commit_if_ok(|_| {
let trace = TypeTrace {
origin: origin,
values: PolyTraitRefs(expected_found(a_is_expected, a.clone(), b.clone()))
};
- self.sub(a_is_expected, trace).binders(&a, &b).to_ures()
+ self.sub(a_is_expected, trace).relate(&a, &b).map(|_| ())
})
}
pub fn leak_check(&self,
skol_map: &SkolemizationMap,
snapshot: &CombinedSnapshot)
- -> ures<'tcx>
+ -> UnitResult<'tcx>
{
/*! See `higher_ranked::leak_check` */
pub fn equality_predicate(&self,
span: Span,
predicate: &ty::PolyEquatePredicate<'tcx>)
- -> ures<'tcx> {
- self.try(|snapshot| {
+ -> UnitResult<'tcx> {
+ self.commit_if_ok(|snapshot| {
let (ty::EquatePredicate(a, b), skol_map) =
self.skolemize_late_bound_regions(predicate, snapshot);
let origin = EquatePredicate(span);
pub fn region_outlives_predicate(&self,
span: Span,
predicate: &ty::PolyRegionOutlivesPredicate)
- -> ures<'tcx> {
- self.try(|snapshot| {
+ -> UnitResult<'tcx> {
+ self.commit_if_ok(|snapshot| {
let (ty::OutlivesPredicate(r_a, r_b), skol_map) =
self.skolemize_late_bound_regions(predicate, snapshot);
let origin = RelateRegionParamBound(span);
.new_key(None)
}
- pub fn next_region_var(&self, origin: RegionVariableOrigin<'tcx>) -> ty::Region {
+ pub fn next_region_var(&self, origin: RegionVariableOrigin) -> ty::Region {
ty::ReInfer(ty::ReVar(self.region_vars.new_region_var(origin)))
}
}
ty::ty_infer(ty::IntVar(v)) => {
- self.probe_var(v)
+ self.int_unification_table
+ .borrow_mut()
+ .probe(v)
+ .map(|v| v.to_type(self.tcx))
.unwrap_or(typ)
}
ty::ty_infer(ty::FloatVar(v)) => {
- self.probe_var(v)
+ self.float_unification_table
+ .borrow_mut()
+ .probe(v)
+ .map(|v| v.to_type(self.tcx))
.unwrap_or(typ)
}
self.region_vars.verify_generic_bound(origin, kind, a, bs);
}
- pub fn can_equate<T>(&self, a: &T, b: &T) -> ures<'tcx>
- where T : Combineable<'tcx> + Repr<'tcx>
+ pub fn can_equate<'b,T>(&'b self, a: &T, b: &T) -> UnitResult<'tcx>
+ where T: Relate<'b,'tcx> + Repr<'tcx>
{
debug!("can_equate({}, {})", a.repr(self.tcx), b.repr(self.tcx));
self.probe(|_| {
let e = self.tcx.types.err;
let trace = TypeTrace { origin: Misc(codemap::DUMMY_SP),
values: Types(expected_found(true, e, e)) };
- let eq = self.equate(true, trace);
- Combineable::combine(&eq, a, b)
- }).to_ures()
+ self.equate(true, trace).relate(a, b)
+ }).map(|_| ())
}
}
}
}
-impl<'tcx> RegionVariableOrigin<'tcx> {
+impl RegionVariableOrigin {
pub fn span(&self) -> Span {
match *self {
MiscVariable(a) => a,
PatternRegion(a) => a,
AddrOfRegion(a) => a,
Autoref(a) => a,
- Coercion(ref a) => a.span(),
+ Coercion(a) => a,
EarlyBoundRegion(a, _) => a,
LateBoundRegion(a, _, _) => a,
BoundRegionInCoherence(_) => codemap::DUMMY_SP,
}
}
-impl<'tcx> Repr<'tcx> for RegionVariableOrigin<'tcx> {
+impl<'tcx> Repr<'tcx> for RegionVariableOrigin {
fn repr(&self, tcx: &ty::ctxt<'tcx>) -> String {
match *self {
MiscVariable(a) => {
format!("AddrOfRegion({})", a.repr(tcx))
}
Autoref(a) => format!("Autoref({})", a.repr(tcx)),
- Coercion(ref a) => format!("Coercion({})", a.repr(tcx)),
+ Coercion(a) => format!("Coercion({})", a.repr(tcx)),
EarlyBoundRegion(a, b) => {
format!("EarlyBoundRegion({},{})", a.repr(tcx), b.repr(tcx))
}
the borrow expression, we must issue sufficient restrictions to ensure
that the pointee remains valid.
-## Adding closures
-
-The other significant complication to the region hierarchy is
-closures. I will describe here how closures should work, though some
-of the work to implement this model is ongoing at the time of this
-writing.
-
-The body of closures are type-checked along with the function that
-creates them. However, unlike other expressions that appear within the
-function body, it is not entirely obvious when a closure body executes
-with respect to the other expressions. This is because the closure
-body will execute whenever the closure is called; however, we can
-never know precisely when the closure will be called, especially
-without some sort of alias analysis.
-
-However, we can place some sort of limits on when the closure
-executes. In particular, the type of every closure `fn:'r K` includes
-a region bound `'r`. This bound indicates the maximum lifetime of that
-closure; once we exit that region, the closure cannot be called
-anymore. Therefore, we say that the lifetime of the closure body is a
-sublifetime of the closure bound, but the closure body itself is unordered
-with respect to other parts of the code.
-
-For example, consider the following fragment of code:
-
- 'a: {
- let closure: fn:'a() = || 'b: {
- 'c: ...
- };
- 'd: ...
- }
-
-Here we have four lifetimes, `'a`, `'b`, `'c`, and `'d`. The closure
-`closure` is bounded by the lifetime `'a`. The lifetime `'b` is the
-lifetime of the closure body, and `'c` is some statement within the
-closure body. Finally, `'d` is a statement within the outer block that
-created the closure.
-
-We can say that the closure body `'b` is a sublifetime of `'a` due to
-the closure bound. By the usual lexical scoping conventions, the
-statement `'c` is clearly a sublifetime of `'b`, and `'d` is a
-sublifetime of `'d`. However, there is no ordering between `'c` and
-`'d` per se (this kind of ordering between statements is actually only
-an issue for dataflow; passes like the borrow checker must assume that
-closures could execute at any time from the moment they are created
-until they go out of scope).
-
-### Complications due to closure bound inference
-
-There is only one problem with the above model: in general, we do not
-actually *know* the closure bounds during region inference! In fact,
-closure bounds are almost always region variables! This is very tricky
-because the inference system implicitly assumes that we can do things
-like compute the LUB of two scoped lifetimes without needing to know
-the values of any variables.
-
-Here is an example to illustrate the problem:
-
- fn identify<T>(x: T) -> T { x }
-
- fn foo() { // 'foo is the function body
- 'a: {
- let closure = identity(|| 'b: {
- 'c: ...
- });
- 'd: closure();
- }
- 'e: ...;
- }
-
-In this example, the closure bound is not explicit. At compile time,
-we will create a region variable (let's call it `V0`) to represent the
-closure bound.
-
-The primary difficulty arises during the constraint propagation phase.
-Imagine there is some variable with incoming edges from `'c` and `'d`.
-This means that the value of the variable must be `LUB('c,
-'d)`. However, without knowing what the closure bound `V0` is, we
-can't compute the LUB of `'c` and `'d`! Any we don't know the closure
-bound until inference is done.
-
-The solution is to rely on the fixed point nature of inference.
-Basically, when we must compute `LUB('c, 'd)`, we just use the current
-value for `V0` as the closure's bound. If `V0`'s binding should
-change, then we will do another round of inference, and the result of
-`LUB('c, 'd)` will change.
-
-One minor implication of this is that the graph does not in fact track
-the full set of dependencies between edges. We cannot easily know
-whether the result of a LUB computation will change, since there may
-be indirect dependencies on other variables that are not reflected on
-the graph. Therefore, we must *always* iterate over all edges when
-doing the fixed point calculation, not just those adjacent to nodes
-whose values have changed.
-
-Were it not for this requirement, we could in fact avoid fixed-point
-iteration altogether. In that universe, we could instead first
-identify and remove strongly connected components (SCC) in the graph.
-Note that such components must consist solely of region variables; all
-of these variables can effectively be unified into a single variable.
-Once SCCs are removed, we are left with a DAG. At this point, we
-could walk the DAG in topological order once to compute the expanding
-nodes, and again in reverse topological order to compute the
-contracting nodes. However, as I said, this does not work given the
-current treatment of closure bounds, but perhaps in the future we can
-address this problem somehow and make region inference somewhat more
-efficient. Note that this is solely a matter of performance, not
-expressiveness.
+## Modeling closures
+
+Integrating closures properly into the model is a bit of a
+work in progress. In an ideal world, we would model closures as
+closely as possible after their desugared equivalents. That is, a
+closure type would be modeled as a struct, and the region hierarchy of
+different closure bodies would be completely distinct from all other
+fns. We are generally moving in that direction but there are
+complications in terms of the implementation.
+
+In practice what we currently do is somewhat different. The basis for
+the current approach is the observation that the only time that
+regions from distinct fn bodies interact with one another is through
+an upvar or the type of a fn parameter (since closures live in the fn
+body namespace, they can in fact have fn parameters whose types
+include regions from the surrounding fn body). For these cases, there
+are separate mechanisms which ensure that the regions that appear in
+upvars/parameters outlive the dynamic extent of each call to the
+closure:
+
+1. Types must outlive the region of any expression where they are used.
+ For a closure type `C` to outlive a region `'r`, that implies that the
+ types of all its upvars must outlive `'r`.
+2. Parameters must outlive the region of any fn that they are passed to.
+
+Therefore, we can -- sort of -- assume that any region from an
+enclosing fn is larger than any region from one of its enclosed
+fns. And that is precisely what we do: when building the region
+hierarchy, each region lives in its own distinct subtree, but if we
+are asked to compute the `LUB(r1, r2)` of two regions, and those
+regions are in disjoint subtrees, we compare the lexical nesting of
+the two regions.
+
+*Ideas for improving the situation:* (FIXME #3696) The correctness
+argument here is subtle and a bit hand-wavy. The ideal, as stated
+earlier, would be to model things in such a way that it corresponds
+more closely to the desugared code. The best approach for doing this
+is a bit unclear: it may in fact be possible to *actually* desugar
+before we start, but I don't think so. The main option that I've been
+thinking through is imposing a "view shift" as we enter the fn body,
+so that regions appearing in the types of fn parameters and upvars are
+translated from being regions in the outer fn into free region
+parameters, just as they would be if we applied the desugaring. The
+challenge here is that type inference may not have fully run, so the
+types may not be fully known: we could probably do this translation
+lazily, as type variables are instantiated. We would also have to
+apply a kind of inverse translation to the return value. This would be
+a good idea anyway, as right now it is possible for free regions
+instantiated within the closure to leak into the parent: this
+currently leads to type errors, since those regions cannot outlive any
+expressions within the parent hierarchy. Much like the current
+handling of closures, there are no known cases where this leads to
+the type checker accepting incorrect code (though it sometimes rejects
+what might be considered correct code; see rust-lang/rust#22557), but
+it still doesn't feel like the right approach.
### Skolemization
pub use self::VarValue::*;
use self::Classification::*;
-use super::cres;
use super::{RegionVariableOrigin, SubregionOrigin, TypeTrace, MiscVariable};
use middle::region;
use middle::ty::{BoundRegion, FreeRegion, Region, RegionVid};
use middle::ty::{ReEmpty, ReStatic, ReInfer, ReFree, ReEarlyBound};
use middle::ty::{ReLateBound, ReScope, ReVar, ReSkolemized, BrFresh};
+use middle::ty_relate::RelateResult;
use middle::graph;
use middle::graph::{Direction, NodeIndex};
use util::common::indenter;
/// Could not infer a value for `v` because `sub_r <= v` (due to
/// `sub_origin`) but `v <= sup_r` (due to `sup_origin`) and
/// `sub_r <= sup_r` does not hold.
- SubSupConflict(RegionVariableOrigin<'tcx>,
+ SubSupConflict(RegionVariableOrigin,
SubregionOrigin<'tcx>, Region,
SubregionOrigin<'tcx>, Region),
/// Could not infer a value for `v` because `v <= r1` (due to
/// `origin1`) and `v <= r2` (due to `origin2`) and
/// `r1` and `r2` have no intersection.
- SupSupConflict(RegionVariableOrigin<'tcx>,
+ SupSupConflict(RegionVariableOrigin,
SubregionOrigin<'tcx>, Region,
SubregionOrigin<'tcx>, Region),
    /// more specific error message by suggesting to the user where they
/// should put a lifetime. In those cases we process and put those errors
/// into `ProcessedErrors` before we do any reporting.
- ProcessedErrors(Vec<RegionVariableOrigin<'tcx>>,
+ ProcessedErrors(Vec<RegionVariableOrigin>,
Vec<(TypeTrace<'tcx>, ty::type_err<'tcx>)>,
Vec<SameRegions>),
}
pub struct RegionVarBindings<'a, 'tcx: 'a> {
tcx: &'a ty::ctxt<'tcx>,
- var_origins: RefCell<Vec<RegionVariableOrigin<'tcx>>>,
+ var_origins: RefCell<Vec<RegionVariableOrigin>>,
// Constraints of the form `A <= B` introduced by the region
// checker. Here at least one of `A` and `B` must be a region
len as u32
}
- pub fn new_region_var(&self, origin: RegionVariableOrigin<'tcx>) -> RegionVid {
+ pub fn new_region_var(&self, origin: RegionVariableOrigin) -> RegionVid {
let id = self.num_vars();
self.var_origins.borrow_mut().push(origin.clone());
let vid = RegionVid { index: id };
// at least as big as the block fr.scope_id". So, we can
// reasonably compare free regions and scopes:
let fr_scope = fr.scope.to_code_extent();
- match self.tcx.region_maps.nearest_common_ancestor(fr_scope, s_id) {
+ let r_id = self.tcx.region_maps.nearest_common_ancestor(fr_scope, s_id);
+
+ if r_id == fr_scope {
// if the free region's scope `fr.scope_id` is bigger than
// the scope region `s_id`, then the LUB is the free
// region itself:
- Some(r_id) if r_id == fr_scope => f,
-
+ f
+ } else {
// otherwise, we don't know what the free region is,
// so we must conservatively say the LUB is static:
- _ => ReStatic
+ ReStatic
}
}
// The region corresponding to an outer block is a
// subtype of the region corresponding to an inner
// block.
- match self.tcx.region_maps.nearest_common_ancestor(a_id, b_id) {
- Some(r_id) => ReScope(r_id),
- _ => ReStatic
- }
+ ReScope(self.tcx.region_maps.nearest_common_ancestor(a_id, b_id))
}
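The `nearest_common_ancestor` call above computes the LUB of two nested scopes as their innermost common enclosing scope. A toy version over a parent-pointer tree (illustrative names and representation, not the compiler's `RegionMaps` API) looks like this:

```rust
use std::collections::HashSet;

// Toy nearest-common-ancestor over a parent-pointer tree. Node `i`'s
// parent is `parent[i]`; the root is its own parent.
fn nearest_common_ancestor(parent: &[usize], a: usize, b: usize) -> usize {
    // Collect all ancestors of `a` (a node counts as its own ancestor).
    let mut ancestors = HashSet::new();
    let mut n = a;
    loop {
        ancestors.insert(n);
        if parent[n] == n { break; }
        n = parent[n];
    }
    // Walk up from `b` until we hit one of them. The root is always in
    // the set, so this terminates.
    let mut n = b;
    loop {
        if ancestors.contains(&n) { return n; }
        n = parent[n];
    }
}

fn main() {
    // 0 is the root; 1 and 2 are children of 0; 3 is a child of 1.
    let parent = [0, 0, 0, 1];

    // An inner scope and its enclosing scope: the LUB is the outer one.
    assert_eq!(nearest_common_ancestor(&parent, 3, 1), 1);

    // Two sibling subtrees: the LUB is their common enclosing scope.
    assert_eq!(nearest_common_ancestor(&parent, 3, 2), 0);
}
```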
(ReFree(ref a_fr), ReFree(ref b_fr)) => {
/// regions are given as argument, in any order, a consistent result is returned.
fn lub_free_regions(&self,
a: &FreeRegion,
- b: &FreeRegion) -> ty::Region
+ b: &FreeRegion)
+ -> ty::Region
{
return match a.cmp(b) {
Less => helper(self, a, b),
fn glb_concrete_regions(&self,
a: Region,
b: Region)
- -> cres<'tcx, Region> {
+ -> RelateResult<'tcx, Region>
+ {
debug!("glb_concrete_regions({:?}, {:?})", a, b);
match (a, b) {
(ReLateBound(..), _) |
                // is the scope `s_id`. Otherwise, as we do not know how
// big the free region is precisely, the GLB is undefined.
let fr_scope = fr.scope.to_code_extent();
- match self.tcx.region_maps.nearest_common_ancestor(fr_scope, s_id) {
- Some(r_id) if r_id == fr_scope => Ok(s),
- _ => Err(ty::terr_regions_no_overlap(b, a))
+ if self.tcx.region_maps.nearest_common_ancestor(fr_scope, s_id) == fr_scope {
+ Ok(s)
+ } else {
+ Err(ty::terr_regions_no_overlap(b, a))
}
}
/// returned.
fn glb_free_regions(&self,
a: &FreeRegion,
- b: &FreeRegion) -> cres<'tcx, ty::Region>
+ b: &FreeRegion)
+ -> RelateResult<'tcx, ty::Region>
{
return match a.cmp(b) {
Less => helper(self, a, b),
fn helper<'a, 'tcx>(this: &RegionVarBindings<'a, 'tcx>,
a: &FreeRegion,
- b: &FreeRegion) -> cres<'tcx, ty::Region>
+ b: &FreeRegion) -> RelateResult<'tcx, ty::Region>
{
if this.tcx.region_maps.sub_free_region(*a, *b) {
Ok(ty::ReFree(*a))
region_a: ty::Region,
region_b: ty::Region,
scope_a: region::CodeExtent,
- scope_b: region::CodeExtent) -> cres<'tcx, Region>
+ scope_b: region::CodeExtent)
+ -> RelateResult<'tcx, Region>
{
// We want to generate the intersection of two
// scopes or two free regions. So, if one of
// it. Otherwise fail.
debug!("intersect_scopes(scope_a={:?}, scope_b={:?}, region_a={:?}, region_b={:?})",
scope_a, scope_b, region_a, region_b);
- match self.tcx.region_maps.nearest_common_ancestor(scope_a, scope_b) {
- Some(r_id) if scope_a == r_id => Ok(ReScope(scope_b)),
- Some(r_id) if scope_b == r_id => Ok(ReScope(scope_a)),
- _ => Err(ty::terr_regions_no_overlap(region_a, region_b))
+ let r_id = self.tcx.region_maps.nearest_common_ancestor(scope_a, scope_b);
+ if r_id == scope_a {
+ Ok(ReScope(scope_b))
+ } else if r_id == scope_b {
+ Ok(ReScope(scope_a))
+ } else {
+ Err(ty::terr_regions_no_overlap(region_a, region_b))
}
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use super::combine::*;
-use super::cres;
+use super::combine::{self, CombineFields};
use super::higher_ranked::HigherRankedRelations;
use super::Subtype;
use super::type_variable::{SubtypeOf, SupertypeOf};
use middle::ty::{self, Ty};
use middle::ty::TyVar;
-use util::ppaux::Repr;
+use middle::ty_relate::{Relate, RelateResult, TypeRelation};
+use util::ppaux::{Repr};
/// "Greatest lower bound" (common subtype)
-pub struct Sub<'f, 'tcx: 'f> {
- fields: CombineFields<'f, 'tcx>
+pub struct Sub<'a, 'tcx: 'a> {
+ fields: CombineFields<'a, 'tcx>
}
-#[allow(non_snake_case)]
-pub fn Sub<'f, 'tcx>(cf: CombineFields<'f, 'tcx>) -> Sub<'f, 'tcx> {
- Sub { fields: cf }
+impl<'a, 'tcx> Sub<'a, 'tcx> {
+ pub fn new(f: CombineFields<'a, 'tcx>) -> Sub<'a, 'tcx> {
+ Sub { fields: f }
+ }
}
-impl<'f, 'tcx> Combine<'tcx> for Sub<'f, 'tcx> {
- fn tag(&self) -> String { "Sub".to_string() }
- fn fields<'a>(&'a self) -> &'a CombineFields<'a, 'tcx> { &self.fields }
-
- fn tys_with_variance(&self, v: ty::Variance, a: Ty<'tcx>, b: Ty<'tcx>)
- -> cres<'tcx, Ty<'tcx>>
- {
- match v {
- ty::Invariant => self.equate().tys(a, b),
- ty::Covariant => self.tys(a, b),
- ty::Bivariant => self.bivariate().tys(a, b),
- ty::Contravariant => Sub(self.fields.switch_expected()).tys(b, a),
- }
- }
+impl<'a, 'tcx> TypeRelation<'a, 'tcx> for Sub<'a, 'tcx> {
+ fn tag(&self) -> &'static str { "Sub" }
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.fields.infcx.tcx }
+ fn a_is_expected(&self) -> bool { self.fields.a_is_expected }
- fn regions_with_variance(&self, v: ty::Variance, a: ty::Region, b: ty::Region)
- -> cres<'tcx, ty::Region>
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ variance: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>
{
- match v {
- ty::Invariant => self.equate().regions(a, b),
- ty::Covariant => self.regions(a, b),
- ty::Bivariant => self.bivariate().regions(a, b),
- ty::Contravariant => Sub(self.fields.switch_expected()).regions(b, a),
+ match variance {
+ ty::Invariant => self.fields.equate().relate(a, b),
+ ty::Covariant => self.relate(a, b),
+ ty::Bivariant => self.fields.bivariate().relate(a, b),
+ ty::Contravariant => self.fields.switch_expected().sub().relate(b, a),
}
}
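The variance dispatch above can be illustrated with a toy subtype relation (hypothetical, not the compiler's types): covariance keeps the direction of the check, contravariance flips the arguments, and invariance demands exact equality.

```rust
#[derive(Clone, Copy)]
enum Variance { Covariant, Contravariant, Invariant, Bivariant }

// Toy subtype order on integers: smaller value = subtype.
fn sub(a: i32, b: i32) -> bool {
    a <= b
}

// "Is `a` acceptable where `b` is expected?", directed by variance,
// echoing the match in `relate_with_variance`.
fn relate_with_variance(v: Variance, a: i32, b: i32) -> bool {
    match v {
        Variance::Covariant => sub(a, b),     // keep the direction
        Variance::Contravariant => sub(b, a), // flip the direction
        Variance::Invariant => a == b,        // must match exactly
        Variance::Bivariant => true,          // unconstrained
    }
}

fn main() {
    assert!(relate_with_variance(Variance::Covariant, 1, 2));
    assert!(!relate_with_variance(Variance::Covariant, 2, 1));
    assert!(relate_with_variance(Variance::Contravariant, 2, 1));
    assert!(!relate_with_variance(Variance::Invariant, 1, 2));
    assert!(relate_with_variance(Variance::Bivariant, 5, -5));
}
```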
- fn regions(&self, a: ty::Region, b: ty::Region) -> cres<'tcx, ty::Region> {
- debug!("{}.regions({}, {})",
- self.tag(),
- a.repr(self.tcx()),
- b.repr(self.tcx()));
- self.infcx().region_vars.make_subregion(Subtype(self.trace()), a, b);
- Ok(a)
- }
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
+ debug!("{}.tys({}, {})", self.tag(), a.repr(self.tcx()), b.repr(self.tcx()));
- fn tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> cres<'tcx, Ty<'tcx>> {
- debug!("{}.tys({}, {})", self.tag(),
- a.repr(self.tcx()), b.repr(self.tcx()));
if a == b { return Ok(a); }
let infcx = self.fields.infcx;
}
(&ty::ty_infer(TyVar(a_id)), _) => {
try!(self.fields
- .switch_expected()
- .instantiate(b, SupertypeOf, a_id));
+ .switch_expected()
+ .instantiate(b, SupertypeOf, a_id));
Ok(a)
}
(_, &ty::ty_infer(TyVar(b_id))) => {
}
_ => {
- super_tys(self, a, b)
+ combine::super_combine_tys(self.fields.infcx, self, a, b)
}
}
}
- fn binders<T>(&self, a: &ty::Binder<T>, b: &ty::Binder<T>) -> cres<'tcx, ty::Binder<T>>
- where T : Combineable<'tcx>
+ fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
+ debug!("{}.regions({}, {})",
+ self.tag(),
+ a.repr(self.tcx()),
+ b.repr(self.tcx()));
+ let origin = Subtype(self.fields.trace.clone());
+ self.fields.infcx.region_vars.make_subregion(origin, a, b);
+ Ok(a)
+ }
+
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a,'tcx>
{
- self.higher_ranked_sub(a, b)
+ self.fields.higher_ranked_sub(a, b)
}
}
use std::marker;
-use middle::ty::{expected_found, IntVarValue};
+use middle::ty::{IntVarValue};
use middle::ty::{self, Ty};
-use middle::infer::{uok, ures};
-use middle::infer::InferCtxt;
-use std::cell::RefCell;
use std::fmt::Debug;
use std::marker::PhantomData;
use syntax::ast;
pub trait UnifyKey : Clone + Debug + PartialEq {
type Value : UnifyValue;
- fn index(&self) -> usize;
+ fn index(&self) -> u32;
- fn from_index(u: usize) -> Self;
-
- // Given an inference context, returns the unification table
- // appropriate to this key type.
- fn unification_table<'v>(infcx: &'v InferCtxt)
- -> &'v RefCell<UnificationTable<Self>>;
+ fn from_index(u: u32) -> Self;
fn tag(k: Option<Self>) -> &'static str;
}
pub fn new_key(&mut self, value: K::Value) -> K {
let index = self.values.push(Root(value, 0));
- let k = UnifyKey::from_index(index);
+ let k = UnifyKey::from_index(index as u32);
debug!("{}: created new key: {:?}",
UnifyKey::tag(None::<K>),
k);
k
}
- /// Find the root node for `vid`. This uses the standard union-find algorithm with path
- /// compression: http://en.wikipedia.org/wiki/Disjoint-set_data_structure
- pub fn get(&mut self, tcx: &ty::ctxt, vid: K) -> Node<K> {
- let index = vid.index();
+ /// Find the root node for `vid`. This uses the standard
+ /// union-find algorithm with path compression:
+ /// <http://en.wikipedia.org/wiki/Disjoint-set_data_structure>.
+ ///
+ /// NB. This is a building-block operation and you would probably
+ /// prefer to call `probe` below.
+ fn get(&mut self, vid: K) -> Node<K> {
+ let index = vid.index() as usize;
let value = (*self.values.get(index)).clone();
match value {
Redirect(redirect) => {
- let node: Node<K> = self.get(tcx, redirect.clone());
+ let node: Node<K> = self.get(redirect.clone());
if node.key != redirect {
// Path compression
self.values.set(index, Redirect(node.key.clone()));
}
fn is_root(&self, key: &K) -> bool {
- match *self.values.get(key.index()) {
+ let index = key.index() as usize;
+ match *self.values.get(index) {
Redirect(..) => false,
Root(..) => true,
}
}
- /// Sets the value for `vid` to `new_value`. `vid` MUST be a root node! Also, we must be in the
- /// middle of a snapshot.
- pub fn set<'tcx>(&mut self,
- _tcx: &ty::ctxt<'tcx>,
- key: K,
- new_value: VarValue<K>)
- {
+ /// Sets the value for `vid` to `new_value`. `vid` MUST be a root
+ /// node! This is an internal operation used to impl other things.
+ fn set(&mut self, key: K, new_value: VarValue<K>) {
assert!(self.is_root(&key));
debug!("Updating variable {:?} to {:?}",
key, new_value);
- self.values.set(key.index(), new_value);
+ let index = key.index() as usize;
+ self.values.set(index, new_value);
}
- /// Either redirects node_a to node_b or vice versa, depending on the relative rank. Returns
- /// the new root and rank. You should then update the value of the new root to something
- /// suitable.
- pub fn unify<'tcx>(&mut self,
- tcx: &ty::ctxt<'tcx>,
- node_a: &Node<K>,
- node_b: &Node<K>)
- -> (K, usize)
- {
+ /// Either redirects `node_a` to `node_b` or vice versa, depending
+ /// on the relative rank. The value associated with the new root
+ /// will be `new_value`.
+ ///
+ /// NB: This is the "union" operation of "union-find". It is
+ /// really more of a building block. If the values associated with
+ /// your key are non-trivial, you would probably prefer to call
+ /// `unify_var_var` below.
+ fn unify(&mut self, node_a: &Node<K>, node_b: &Node<K>, new_value: K::Value) {
debug!("unify(node_a(id={:?}, rank={:?}), node_b(id={:?}, rank={:?}))",
node_a.key,
node_a.rank,
node_b.key,
node_b.rank);
- if node_a.rank > node_b.rank {
+ let (new_root, new_rank) = if node_a.rank > node_b.rank {
// a has greater rank, so a should become b's parent,
// i.e., b should redirect to a.
- self.set(tcx, node_b.key.clone(), Redirect(node_a.key.clone()));
+ self.set(node_b.key.clone(), Redirect(node_a.key.clone()));
(node_a.key.clone(), node_a.rank)
} else if node_a.rank < node_b.rank {
// b has greater rank, so a should redirect to b.
- self.set(tcx, node_a.key.clone(), Redirect(node_b.key.clone()));
+ self.set(node_a.key.clone(), Redirect(node_b.key.clone()));
(node_b.key.clone(), node_b.rank)
} else {
// If equal, redirect one to the other and increment the
// other's rank.
assert_eq!(node_a.rank, node_b.rank);
- self.set(tcx, node_b.key.clone(), Redirect(node_a.key.clone()));
+ self.set(node_b.key.clone(), Redirect(node_a.key.clone()));
(node_a.key.clone(), node_a.rank + 1)
- }
+ };
+
+ self.set(new_root, Root(new_value, new_rank));
}
}
}
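The doc comments above describe classic union-find with path compression and union by rank. A standalone sketch, simplified to plain `usize` keys with no associated values rather than the `UnifyKey`/`UnifyValue` machinery, is:

```rust
// Minimal union-find with path compression and union by rank.
struct UnionFind {
    parent: Vec<usize>, // parent[i] == i means i is a root
    rank: Vec<u32>,
}

impl UnionFind {
    fn new(n: usize) -> UnionFind {
        UnionFind { parent: (0..n).collect(), rank: vec![0; n] }
    }

    // Find the root of `i`, compressing the path as we go.
    fn find(&mut self, i: usize) -> usize {
        if self.parent[i] != i {
            let root = self.find(self.parent[i]);
            self.parent[i] = root; // path compression
        }
        self.parent[i]
    }

    // Union by rank: attach the shallower tree under the deeper one, so
    // trees stay balanced and finds stay cheap.
    fn union(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra == rb { return; }
        if self.rank[ra] < self.rank[rb] {
            self.parent[ra] = rb;
        } else if self.rank[ra] > self.rank[rb] {
            self.parent[rb] = ra;
        } else {
            self.parent[rb] = ra;
            self.rank[ra] += 1; // equal ranks: pick one root, bump its rank
        }
    }
}

fn main() {
    let mut uf = UnionFind::new(4);
    uf.union(0, 1);
    uf.union(2, 3);
    assert!(uf.find(0) == uf.find(1));
    assert!(uf.find(0) != uf.find(2));
    uf.union(1, 3);
    assert!(uf.find(0) == uf.find(3));
}
```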
///////////////////////////////////////////////////////////////////////////
-// Code to handle simple keys like ints, floats---anything that
-// doesn't have a subtyping relationship we need to worry about.
-
-/// Indicates a type that does not have any kind of subtyping
-/// relationship.
-pub trait SimplyUnifiable<'tcx> : Clone + PartialEq + Debug {
- fn to_type(&self, tcx: &ty::ctxt<'tcx>) -> Ty<'tcx>;
- fn to_type_err(expected_found<Self>) -> ty::type_err<'tcx>;
-}
-
-pub fn err<'tcx, V:SimplyUnifiable<'tcx>>(a_is_expected: bool,
- a_t: V,
- b_t: V)
- -> ures<'tcx> {
- if a_is_expected {
- Err(SimplyUnifiable::to_type_err(
- ty::expected_found {expected: a_t, found: b_t}))
- } else {
- Err(SimplyUnifiable::to_type_err(
- ty::expected_found {expected: b_t, found: a_t}))
- }
-}
-
-pub trait InferCtxtMethodsForSimplyUnifiableTypes<'tcx,K,V>
- where K : UnifyKey<Value=Option<V>>,
- V : SimplyUnifiable<'tcx>,
- Option<V> : UnifyValue,
-{
- fn simple_vars(&self,
- a_is_expected: bool,
- a_id: K,
- b_id: K)
- -> ures<'tcx>;
- fn simple_var_t(&self,
- a_is_expected: bool,
- a_id: K,
- b: V)
- -> ures<'tcx>;
- fn probe_var(&self, a_id: K) -> Option<Ty<'tcx>>;
-}
-
-impl<'a,'tcx,V,K> InferCtxtMethodsForSimplyUnifiableTypes<'tcx,K,V> for InferCtxt<'a,'tcx>
- where K : UnifyKey<Value=Option<V>>,
- V : SimplyUnifiable<'tcx>,
- Option<V> : UnifyValue,
+// Code to handle keys which carry a value, like ints,
+// floats---anything that doesn't have a subtyping relationship we
+// need to worry about.
+
+impl<'tcx,K,V> UnificationTable<K>
+ where K: UnifyKey<Value=Option<V>>,
+ V: Clone+PartialEq,
+ Option<V>: UnifyValue,
{
- /// Unifies two simple keys. Because simple keys do not have any subtyping relationships, if
- /// both keys have already been associated with a value, then those two values must be the
- /// same.
- fn simple_vars(&self,
- a_is_expected: bool,
- a_id: K,
- b_id: K)
- -> ures<'tcx>
+ pub fn unify_var_var(&mut self,
+ a_id: K,
+ b_id: K)
+ -> Result<(),(V,V)>
{
- let tcx = self.tcx;
- let table = UnifyKey::unification_table(self);
- let node_a: Node<K> = table.borrow_mut().get(tcx, a_id);
- let node_b: Node<K> = table.borrow_mut().get(tcx, b_id);
+ let node_a = self.get(a_id);
+ let node_b = self.get(b_id);
let a_id = node_a.key.clone();
let b_id = node_b.key.clone();
- if a_id == b_id { return uok(); }
+ if a_id == b_id { return Ok(()); }
let combined = {
match (&node_a.value, &node_b.value) {
None
}
(&Some(ref v), &None) | (&None, &Some(ref v)) => {
- Some((*v).clone())
+ Some(v.clone())
}
(&Some(ref v1), &Some(ref v2)) => {
if *v1 != *v2 {
- return err(a_is_expected, (*v1).clone(), (*v2).clone())
+ return Err((v1.clone(), v2.clone()));
}
- Some((*v1).clone())
+ Some(v1.clone())
}
}
};
- let (new_root, new_rank) = table.borrow_mut().unify(tcx,
- &node_a,
- &node_b);
- table.borrow_mut().set(tcx, new_root, Root(combined, new_rank));
- return Ok(())
+ Ok(self.unify(&node_a, &node_b, combined))
}
/// Sets the value of the key `a_id` to `b`. Because simple keys do not have any subtyping
/// relationships, if `a_id` already has a value, it must be the same as `b`.
- fn simple_var_t(&self,
- a_is_expected: bool,
- a_id: K,
- b: V)
- -> ures<'tcx>
+ pub fn unify_var_value(&mut self,
+ a_id: K,
+ b: V)
+ -> Result<(),(V,V)>
{
- let tcx = self.tcx;
- let table = UnifyKey::unification_table(self);
- let node_a = table.borrow_mut().get(tcx, a_id);
+ let node_a = self.get(a_id);
let a_id = node_a.key.clone();
match node_a.value {
None => {
- table.borrow_mut().set(tcx, a_id, Root(Some(b), node_a.rank));
- return Ok(());
+ self.set(a_id, Root(Some(b), node_a.rank));
+ Ok(())
}
Some(ref a_t) => {
if *a_t == b {
- return Ok(());
+ Ok(())
} else {
- return err(a_is_expected, (*a_t).clone(), b);
+ Err((a_t.clone(), b))
}
}
}
}
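The merging rule shared by `unify_var_var` and `unify_var_value` is that unification fails only when both sides already carry different values. It can be sketched as a free function (a hypothetical helper, not part of the table's actual API):

```rust
// Merge two optional values: unset sides are filled in from the other,
// and two set sides must agree or the merge fails with the pair.
fn merge<V: PartialEq + Clone>(a: &Option<V>, b: &Option<V>)
    -> Result<Option<V>, (V, V)>
{
    match (a, b) {
        (&None, &None) => Ok(None),
        (&Some(ref v), &None) | (&None, &Some(ref v)) => Ok(Some(v.clone())),
        (&Some(ref v1), &Some(ref v2)) => {
            if v1 == v2 {
                Ok(Some(v1.clone()))
            } else {
                Err((v1.clone(), v2.clone()))
            }
        }
    }
}

fn main() {
    assert_eq!(merge::<i32>(&None, &Some(3)), Ok(Some(3)));
    assert_eq!(merge(&Some(3), &Some(3)), Ok(Some(3)));
    assert_eq!(merge(&Some(3), &Some(4)), Err((3, 4)));
}
```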
- fn probe_var(&self, a_id: K) -> Option<Ty<'tcx>> {
- let tcx = self.tcx;
- let table = UnifyKey::unification_table(self);
- let node_a = table.borrow_mut().get(tcx, a_id);
- match node_a.value {
- None => None,
- Some(ref a_t) => Some(a_t.to_type(tcx))
- }
+ pub fn has_value(&mut self, id: K) -> bool {
+ self.get(id).value.is_some()
+ }
+
+ pub fn probe(&mut self, a_id: K) -> Option<V> {
+ self.get(a_id).value.clone()
}
}
// Integral type keys
+pub trait ToType<'tcx> {
+ fn to_type(&self, tcx: &ty::ctxt<'tcx>) -> Ty<'tcx>;
+}
+
impl UnifyKey for ty::IntVid {
type Value = Option<IntVarValue>;
-
- fn index(&self) -> usize { self.index as usize }
-
- fn from_index(i: usize) -> ty::IntVid { ty::IntVid { index: i as u32 } }
-
- fn unification_table<'v>(infcx: &'v InferCtxt) -> &'v RefCell<UnificationTable<ty::IntVid>> {
- return &infcx.int_unification_table;
- }
-
- fn tag(_: Option<ty::IntVid>) -> &'static str {
- "IntVid"
- }
+ fn index(&self) -> u32 { self.index }
+ fn from_index(i: u32) -> ty::IntVid { ty::IntVid { index: i } }
+ fn tag(_: Option<ty::IntVid>) -> &'static str { "IntVid" }
}
-impl<'tcx> SimplyUnifiable<'tcx> for IntVarValue {
+impl<'tcx> ToType<'tcx> for IntVarValue {
fn to_type(&self, tcx: &ty::ctxt<'tcx>) -> Ty<'tcx> {
match *self {
ty::IntType(i) => ty::mk_mach_int(tcx, i),
ty::UintType(i) => ty::mk_mach_uint(tcx, i),
}
}
-
- fn to_type_err(err: expected_found<IntVarValue>) -> ty::type_err<'tcx> {
- return ty::terr_int_mismatch(err);
- }
}
impl UnifyValue for Option<IntVarValue> { }
impl UnifyKey for ty::FloatVid {
type Value = Option<ast::FloatTy>;
-
- fn index(&self) -> usize { self.index as usize }
-
- fn from_index(i: usize) -> ty::FloatVid { ty::FloatVid { index: i as u32 } }
-
- fn unification_table<'v>(infcx: &'v InferCtxt) -> &'v RefCell<UnificationTable<ty::FloatVid>> {
- return &infcx.float_unification_table;
- }
-
- fn tag(_: Option<ty::FloatVid>) -> &'static str {
- "FloatVid"
- }
+ fn index(&self) -> u32 { self.index }
+ fn from_index(i: u32) -> ty::FloatVid { ty::FloatVid { index: i } }
+ fn tag(_: Option<ty::FloatVid>) -> &'static str { "FloatVid" }
}
impl UnifyValue for Option<ast::FloatTy> {
}
-impl<'tcx> SimplyUnifiable<'tcx> for ast::FloatTy {
+impl<'tcx> ToType<'tcx> for ast::FloatTy {
fn to_type(&self, tcx: &ty::ctxt<'tcx>) -> Ty<'tcx> {
ty::mk_mach_float(tcx, *self)
}
-
- fn to_type_err(err: expected_found<ast::FloatTy>) -> ty::type_err<'tcx> {
- ty::terr_float_mismatch(err)
- }
}
/// reexporting a public struct doesn't inline the doc).
pub type PublicItems = NodeSet;
-#[derive(Copy, Debug)]
+#[derive(Copy, Clone, Debug)]
pub enum LastPrivate {
LastMod(PrivateDep),
// `use` directives (imports) can refer to two separate definitions in the
type_used: ImportUse},
}
-#[derive(Copy, Debug)]
+#[derive(Copy, Clone, Debug)]
pub enum PrivateDep {
AllPublic,
DependsOn(ast::DefId),
}
// How an import is used.
-#[derive(Copy, PartialEq, Debug)]
+#[derive(Copy, Clone, PartialEq, Debug)]
pub enum ImportUse {
Unused, // The import is not used.
Used, // The import is used.
}
/// The region maps encode information about region relationships.
-///
-/// - `scope_map` maps from a scope id to the enclosing scope id; this is
-/// usually corresponding to the lexical nesting, though in the case of
-/// closures the parent scope is the innermost conditional expression or repeating
-/// block. (Note that the enclosing scope id for the block
-/// associated with a closure is the closure itself.)
-///
-/// - `var_map` maps from a variable or binding id to the block in which
-/// that variable is declared.
-///
-/// - `free_region_map` maps from a free region `a` to a list of free
-/// regions `bs` such that `a <= b for all b in bs`
-/// - the free region map is populated during type check as we check
-/// each function. See the function `relate_free_regions` for
-/// more information.
-///
-/// - `rvalue_scopes` includes entries for those expressions whose cleanup
-/// scope is larger than the default. The map goes from the expression
-/// id to the cleanup scope id. For rvalues not present in this table,
-/// the appropriate cleanup scope is the innermost enclosing statement,
-/// conditional expression, or repeating block (see `terminating_scopes`).
-///
-/// - `terminating_scopes` is a set containing the ids of each statement,
-/// or conditional/repeating expression. These scopes are calling "terminating
-/// scopes" because, when attempting to find the scope of a temporary, by
-/// default we search up the enclosing scopes until we encounter the
-/// terminating scope. A conditional/repeating
-/// expression is one which is not guaranteed to execute exactly once
-/// upon entering the parent scope. This could be because the expression
-/// only executes conditionally, such as the expression `b` in `a && b`,
-/// or because the expression may execute many times, such as a loop
-/// body. The reason that we distinguish such expressions is that, upon
-/// exiting the parent scope, we cannot statically know how many times
-/// the expression executed, and thus if the expression creates
-/// temporaries we cannot know statically how many such temporaries we
-/// would have to cleanup. Therefore we ensure that the temporaries never
-/// outlast the conditional/repeating expression, preventing the need
-/// for dynamic checks and/or arbitrary amounts of stack space.
pub struct RegionMaps {
+ /// `scope_map` maps from a scope id to the enclosing scope id;
+ /// this usually corresponds to the lexical nesting, though
+ /// in the case of closures the parent scope is the innermost
+ /// conditional expression or repeating block. (Note that the
+ /// enclosing scope id for the block associated with a closure is
+ /// the closure itself.)
scope_map: RefCell<FnvHashMap<CodeExtent, CodeExtent>>,
+
+ /// `var_map` maps from a variable or binding id to the block in
+ /// which that variable is declared.
var_map: RefCell<NodeMap<CodeExtent>>,
+
+ /// `free_region_map` maps from a free region `a` to a list of
+ /// free regions `bs` such that `a <= b for all b in bs`
+ ///
+ /// NB. the free region map is populated during type check as we
+ /// check each function. See the function `relate_free_regions`
+ /// for more information.
free_region_map: RefCell<FnvHashMap<FreeRegion, Vec<FreeRegion>>>,
+
+ /// `rvalue_scopes` includes entries for those expressions whose cleanup scope is
+ /// larger than the default. The map goes from the expression id
+ /// to the cleanup scope id. For rvalues not present in this
+ /// table, the appropriate cleanup scope is the innermost
+ /// enclosing statement, conditional expression, or repeating
+ /// block (see `terminating_scopes`).
rvalue_scopes: RefCell<NodeMap<CodeExtent>>,
+
+ /// `terminating_scopes` is a set containing the ids of each
+ /// statement, or conditional/repeating expression. These scopes
+ /// are called "terminating scopes" because, when attempting to
+ /// find the scope of a temporary, by default we search up the
+ /// enclosing scopes until we encounter the terminating scope. A
+ /// conditional/repeating expression is one which is not
+ /// guaranteed to execute exactly once upon entering the parent
+ /// scope. This could be because the expression only executes
+ /// conditionally, such as the expression `b` in `a && b`, or
+ /// because the expression may execute many times, such as a loop
+ /// body. The reason that we distinguish such expressions is that,
+ /// upon exiting the parent scope, we cannot statically know how
+ /// many times the expression executed, and thus if the expression
+ /// creates temporaries we cannot know statically how many such
+ /// temporaries we would have to cleanup. Therefore we ensure that
+ /// the temporaries never outlast the conditional/repeating
+ /// expression, preventing the need for dynamic checks and/or
+ /// arbitrary amounts of stack space.
terminating_scopes: RefCell<FnvHashSet<CodeExtent>>,
+
+ /// Encodes the hierarchy of fn bodies. Every fn body (including
+ /// closures) forms its own distinct region hierarchy, rooted in
+ /// the block that is the fn body. This map points from the id of
+ /// that root block to the id of the root block for the enclosing
+ /// fn, if any. Thus the map structures the fn bodies into a
+ /// hierarchy based on their lexical mapping. This is used to
+ /// handle the relationships between regions in a fn and in a
+ /// closure defined by that fn. See the "Modeling closures"
+ /// section of the README in middle::infer::region_inference for
+ /// more details.
+ fn_tree: RefCell<NodeMap<ast::NodeId>>,
}
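The temporary-lifetime rule described for `terminating_scopes` is observable from ordinary Rust: a temporary created inside a loop body (a terminating scope) is cleaned up at the end of each iteration rather than accumulating until the loop exits. A small sketch using `Drop` to log the order of events:

```rust
use std::sync::Mutex;

// Global log of construction/destruction events.
static LOG: Mutex<Vec<&'static str>> = Mutex::new(Vec::new());

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        LOG.lock().unwrap().push(self.0);
    }
}

fn main() {
    for _ in 0..2 {
        // The loop body is a terminating scope: this value is
        // dropped at the end of each iteration, not at loop exit.
        let _guard = Noisy("iter");
        LOG.lock().unwrap().push("body");
    }
    LOG.lock().unwrap().push("after-loop");
    let log = LOG.lock().unwrap();
    assert_eq!(*log, ["body", "iter", "body", "iter", "after-loop"]);
}
```

Each `"iter"` drop interleaves with the iterations instead of piling up before `"after-loop"`, which is exactly the guarantee the map exists to enforce for rvalue temporaries.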
/// Carries the node id for the innermost block or match expression,
#[derive(Debug, Copy)]
pub struct Context {
+ /// the root of the current region tree. This is typically the id
+ /// of the innermost fn body. Each fn forms its own disjoint tree
+ /// in the region hierarchy. These fn bodies are themselves
+ /// arranged into a tree. See the "Modeling closures" section of
+ /// the README in middle::infer::region_inference for more
+ /// details.
+ root_id: Option<ast::NodeId>,
+
/// the scope that contains any new variables declared
var_parent: InnermostDeclaringBlock,
self.free_region_map.borrow_mut().insert(sub, vec!(sup));
}
+ /// Records that `sub_fn` is defined within `sup_fn`. These ids
+ /// should be the id of the block that is the fn body, which is
+ /// also the root of the region hierarchy for that fn.
+ fn record_fn_parent(&self, sub_fn: ast::NodeId, sup_fn: ast::NodeId) {
+ debug!("record_fn_parent(sub_fn={:?}, sup_fn={:?})", sub_fn, sup_fn);
+ assert!(sub_fn != sup_fn);
+ let previous = self.fn_tree.borrow_mut().insert(sub_fn, sup_fn);
+ assert!(previous.is_none());
+ }
+
+ fn fn_is_enclosed_by(&self, mut sub_fn: ast::NodeId, sup_fn: ast::NodeId) -> bool {
+ let fn_tree = self.fn_tree.borrow();
+ loop {
+ if sub_fn == sup_fn { return true; }
+ match fn_tree.get(&sub_fn) {
+ Some(&s) => { sub_fn = s; }
+ None => { return false; }
+ }
+ }
+ }
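The `fn_tree` lookup in `fn_is_enclosed_by` is just a walk up a child-to-parent map until the candidate ancestor is found or the chain runs out. A standalone sketch, using `u32` ids in place of `ast::NodeId`:

```rust
use std::collections::HashMap;

// Child fn body id -> enclosing fn body id, as in the fn_tree map.
fn fn_is_enclosed_by(fn_tree: &HashMap<u32, u32>, mut sub_fn: u32, sup_fn: u32) -> bool {
    loop {
        if sub_fn == sup_fn { return true; }
        match fn_tree.get(&sub_fn) {
            Some(&parent) => sub_fn = parent,
            None => return false,
        }
    }
}

fn main() {
    // fn 1 encloses fn 2, which encloses fn 3 (e.g. nested closures).
    let mut tree = HashMap::new();
    tree.insert(2u32, 1u32);
    tree.insert(3, 2);
    assert!(fn_is_enclosed_by(&tree, 3, 1)); // enclosure is transitive
    assert!(fn_is_enclosed_by(&tree, 2, 2)); // a fn encloses itself
    assert!(!fn_is_enclosed_by(&tree, 1, 3)); // but it is not symmetric
}
```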
+
pub fn record_encl_scope(&self, sub: CodeExtent, sup: CodeExtent) {
debug!("record_encl_scope(sub={:?}, sup={:?})", sub, sup);
assert!(sub != sup);
self.scope_map.borrow_mut().insert(sub, sup);
}
- pub fn record_var_scope(&self, var: ast::NodeId, lifetime: CodeExtent) {
+ fn record_var_scope(&self, var: ast::NodeId, lifetime: CodeExtent) {
debug!("record_var_scope(sub={:?}, sup={:?})", var, lifetime);
assert!(var != lifetime.node_id());
self.var_map.borrow_mut().insert(var, lifetime);
}
- pub fn record_rvalue_scope(&self, var: ast::NodeId, lifetime: CodeExtent) {
+ fn record_rvalue_scope(&self, var: ast::NodeId, lifetime: CodeExtent) {
debug!("record_rvalue_scope(sub={:?}, sup={:?})", var, lifetime);
assert!(var != lifetime.node_id());
self.rvalue_scopes.borrow_mut().insert(var, lifetime);
/// Records that a scope is a TERMINATING SCOPE. Whenever we create automatic temporaries --
/// e.g. by an expression like `a().f` -- they will be freed within the innermost terminating
/// scope.
- pub fn mark_as_terminating_scope(&self, scope_id: CodeExtent) {
+ fn mark_as_terminating_scope(&self, scope_id: CodeExtent) {
debug!("record_terminating_scope(scope_id={:?})", scope_id);
self.terminating_scopes.borrow_mut().insert(scope_id);
}
pub fn nearest_common_ancestor(&self,
scope_a: CodeExtent,
scope_b: CodeExtent)
- -> Option<CodeExtent> {
- if scope_a == scope_b { return Some(scope_a); }
+ -> CodeExtent {
+ if scope_a == scope_b { return scope_a; }
let a_ancestors = ancestors_of(self, scope_a);
let b_ancestors = ancestors_of(self, scope_b);
let mut a_index = a_ancestors.len() - 1;
let mut b_index = b_ancestors.len() - 1;
- // Here, ~[ab]_ancestors is a vector going from narrow to broad.
+ // Here, [ab]_ancestors is a vector going from narrow to broad.
// The end of each vector will be the item where the scope is
// defined; if there are any common ancestors, then the tails of
// the vector will be the same. So basically we want to walk
// then the corresponding scope is a superscope of the other.
if a_ancestors[a_index] != b_ancestors[b_index] {
- return None;
+ // In this case, the two regions belong to completely
+ // different functions. Compare those fn for lexical
+ // nesting. The reasoning behind this is subtle. See the
+ // "Modeling closures" section of the README in
+ // middle::infer::region_inference for more details.
+ let a_root_scope = a_ancestors[a_index];
+ let b_root_scope = b_ancestors[b_index];
+ return match (a_root_scope, b_root_scope) {
+ (CodeExtent::DestructionScope(a_root_id),
+ CodeExtent::DestructionScope(b_root_id)) => {
+ if self.fn_is_enclosed_by(a_root_id, b_root_id) {
+ // `a` is enclosed by `b`, hence `b` is the ancestor of everything in `a`
+ scope_b
+ } else if self.fn_is_enclosed_by(b_root_id, a_root_id) {
+ // `b` is enclosed by `a`, hence `a` is the ancestor of everything in `b`
+ scope_a
+ } else {
+ // neither fn encloses the other
+ unreachable!()
+ }
+ }
+ _ => {
+ // root ids are always Misc right now
+ unreachable!()
+ }
+ };
}
loop {
// Loop invariant: a_ancestors[a_index] == b_ancestors[b_index]
// for all indices between a_index and the end of the array
- if a_index == 0 { return Some(scope_a); }
- if b_index == 0 { return Some(scope_b); }
+ if a_index == 0 { return scope_a; }
+ if b_index == 0 { return scope_b; }
a_index -= 1;
b_index -= 1;
if a_ancestors[a_index] != b_ancestors[b_index] {
- return Some(a_ancestors[a_index + 1]);
+ return a_ancestors[a_index + 1];
}
}
- fn ancestors_of(this: &RegionMaps, scope: CodeExtent)
- -> Vec<CodeExtent> {
+ fn ancestors_of(this: &RegionMaps, scope: CodeExtent) -> Vec<CodeExtent> {
// debug!("ancestors_of(scope={:?})", scope);
let mut result = vec!(scope);
let mut scope = scope;
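The common-ancestor walk above relies on the ancestor vectors being ordered narrow-to-broad, so shared ancestors form a common suffix. A self-contained sketch of the tail-matching part (returning `None` where the real code now falls back to the fn tree), with `u32` ids standing in for `CodeExtent`:

```rust
// Ancestors are listed narrow-to-broad: the first element is the
// query scope itself, the last is the root of its region tree.
fn nearest_common_ancestor(a_ancestors: &[u32], b_ancestors: &[u32]) -> Option<u32> {
    let mut a_index = a_ancestors.len() - 1;
    let mut b_index = b_ancestors.len() - 1;
    if a_ancestors[a_index] != b_ancestors[b_index] {
        return None; // different roots: unrelated region trees
    }
    loop {
        // Invariant: a_ancestors[a_index] == b_ancestors[b_index] for
        // all positions from the current indices to the end.
        if a_index == 0 { return Some(a_ancestors[0]); }
        if b_index == 0 { return Some(b_ancestors[0]); }
        a_index -= 1;
        b_index -= 1;
        if a_ancestors[a_index] != b_ancestors[b_index] {
            // Walked past the divergence point; the previous match
            // is the nearest common ancestor.
            return Some(a_ancestors[a_index + 1]);
        }
    }
}

fn main() {
    // Scope 4 sits inside 2 inside 1; scope 5 sits inside 3 inside 1.
    assert_eq!(nearest_common_ancestor(&[4, 2, 1], &[5, 3, 1]), Some(1));
    // When one scope encloses the other, the broader scope wins.
    assert_eq!(nearest_common_ancestor(&[2, 1], &[3, 2, 1]), Some(2));
    // Different roots: the scopes live in unrelated trees.
    assert_eq!(nearest_common_ancestor(&[2, 1], &[4, 3]), None);
}
```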
let prev_cx = visitor.cx;
let blk_scope = CodeExtent::Misc(blk.id);
+
// If block was previously marked as a terminating scope during
// the recursive visit of its parent node in the AST, then we need
// to account for the destruction scope representing the extent of
// itself has returned.
visitor.cx = Context {
+ root_id: prev_cx.root_id,
var_parent: InnermostDeclaringBlock::Block(blk.id),
parent: InnermostEnclosingExpr::Some(blk.id),
};
record_superlifetime(
visitor, declaring.to_code_extent(), statement.span);
visitor.cx = Context {
+ root_id: prev_cx.root_id,
var_parent: InnermostDeclaringBlock::Statement(declaring),
parent: InnermostEnclosingExpr::Statement(declaring),
};
// Items create a new outer block scope as far as we're concerned.
let prev_cx = visitor.cx;
visitor.cx = Context {
+ root_id: None,
var_parent: InnermostDeclaringBlock::None,
parent: InnermostEnclosingExpr::None
};
}
fn resolve_fn(visitor: &mut RegionResolutionVisitor,
- fk: FnKind,
+ _: FnKind,
decl: &ast::FnDecl,
body: &ast::Block,
sp: Span,
let body_scope = CodeExtent::from_node_id(body.id);
visitor.region_maps.mark_as_terminating_scope(body_scope);
+
let dtor_scope = CodeExtent::DestructionScope(body.id);
visitor.region_maps.record_encl_scope(body_scope, dtor_scope);
+
record_superlifetime(visitor, dtor_scope, body.span);
+ if let Some(root_id) = visitor.cx.root_id {
+ visitor.region_maps.record_fn_parent(body.id, root_id);
+ }
+
let outer_cx = visitor.cx;
// The arguments and `self` are parented to the body of the fn.
visitor.cx = Context {
+ root_id: Some(body.id),
parent: InnermostEnclosingExpr::Some(body.id),
var_parent: InnermostDeclaringBlock::Block(body.id)
};
visit::walk_fn_decl(visitor, decl);
- // The body of the fn itself is either a root scope (top-level fn)
- // or it continues with the inherited scope (closures).
- match fk {
- visit::FkItemFn(..) | visit::FkMethod(..) => {
- visitor.cx = Context {
- parent: InnermostEnclosingExpr::None,
- var_parent: InnermostDeclaringBlock::None
- };
- visitor.visit_block(body);
- visitor.cx = outer_cx;
- }
- visit::FkFnBlock(..) => {
- // FIXME(#3696) -- at present we are place the closure body
- // within the region hierarchy exactly where it appears lexically.
- // This is wrong because the closure may live longer
- // than the enclosing expression. We should probably fix this,
- // but the correct fix is a bit subtle, and I am also not sure
- // that the present approach is unsound -- it may not permit
- // any illegal programs. See issue for more details.
- visitor.cx = outer_cx;
- visitor.visit_block(body);
- }
- }
+ // The body of every fn is a root scope.
+ visitor.cx = Context {
+ root_id: Some(body.id),
+ parent: InnermostEnclosingExpr::None,
+ var_parent: InnermostDeclaringBlock::None
+ };
+ visitor.visit_block(body);
+
+ // Restore context we had at the start.
+ visitor.cx = outer_cx;
}
impl<'a, 'v> Visitor<'v> for RegionResolutionVisitor<'a> {
free_region_map: RefCell::new(FnvHashMap()),
rvalue_scopes: RefCell::new(NodeMap()),
terminating_scopes: RefCell::new(FnvHashSet()),
+ fn_tree: RefCell::new(NodeMap()),
};
{
let mut visitor = RegionResolutionVisitor {
sess: sess,
region_maps: &maps,
cx: Context {
+ root_id: None,
parent: InnermostEnclosingExpr::None,
var_parent: InnermostDeclaringBlock::None,
}
sess: sess,
region_maps: region_maps,
cx: Context {
+ root_id: None,
parent: InnermostEnclosingExpr::None,
var_parent: InnermostDeclaringBlock::None
}
use super::project;
use super::util;
-use middle::subst::{Subst, TypeSpace};
+use middle::subst::{Subst, Substs, TypeSpace};
use middle::ty::{self, ToPolyTraitRef, Ty};
use middle::infer::{self, InferCtxt};
-use std::collections::HashSet;
use std::rc::Rc;
use syntax::ast;
-use syntax::codemap::DUMMY_SP;
+use syntax::codemap::{DUMMY_SP, Span};
use util::ppaux::Repr;
+#[derive(Copy)]
+struct ParamIsLocal(bool);
+
/// True if there exist types that satisfy both of the two given impls.
pub fn overlapping_impls(infcx: &InferCtxt,
impl1_def_id: ast::DefId,
a_def_id.repr(selcx.tcx()),
b_def_id.repr(selcx.tcx()));
- let (a_trait_ref, a_obligations) = impl_trait_ref_and_oblig(selcx, a_def_id);
- let (b_trait_ref, b_obligations) = impl_trait_ref_and_oblig(selcx, b_def_id);
+ let (a_trait_ref, a_obligations) = impl_trait_ref_and_oblig(selcx,
+ a_def_id,
+ util::free_substs_for_impl);
+
+ let (b_trait_ref, b_obligations) = impl_trait_ref_and_oblig(selcx,
+ b_def_id,
+ util::fresh_type_vars_for_impl);
debug!("overlap: a_trait_ref={}", a_trait_ref.repr(selcx.tcx()));
+
debug!("overlap: b_trait_ref={}", b_trait_ref.repr(selcx.tcx()));
// Does `a <: b` hold? If not, no overlap.
debug!("overlap: subtraitref check succeeded");
// Are any of the obligations unsatisfiable? If so, no overlap.
+ let tcx = selcx.tcx();
+ let infcx = selcx.infcx();
let opt_failing_obligation =
a_obligations.iter()
.chain(b_obligations.iter())
+ .map(|o| infcx.resolve_type_vars_if_possible(o))
.find(|o| !selcx.evaluate_obligation(o));
if let Some(failing_obligation) = opt_failing_obligation {
- debug!("overlap: obligation unsatisfiable {}", failing_obligation.repr(selcx.tcx()));
- return false;
+ debug!("overlap: obligation unsatisfiable {}", failing_obligation.repr(tcx));
+ return false
}
true
}
+pub fn trait_ref_is_knowable<'tcx>(tcx: &ty::ctxt<'tcx>, trait_ref: &ty::TraitRef<'tcx>) -> bool
+{
+ debug!("trait_ref_is_knowable(trait_ref={})", trait_ref.repr(tcx));
+
+ // if the orphan rules pass, that means that no ancestor crate can
+ // impl this, so it's up to us.
+ if orphan_check_trait_ref(tcx, trait_ref, ParamIsLocal(false)).is_ok() {
+ debug!("trait_ref_is_knowable: orphan check passed");
+ return true;
+ }
+
+ // if the trait is not marked fundamental, then it's always possible that
+ // an ancestor crate will impl this in the future, if they haven't
+ // already
+ if
+ trait_ref.def_id.krate != ast::LOCAL_CRATE &&
+ !ty::has_attr(tcx, trait_ref.def_id, "fundamental")
+ {
+ debug!("trait_ref_is_knowable: trait is neither local nor fundamental");
+ return false;
+ }
+
+ // check whether some downstream (or cousin) crate could impl this
+ // trait-ref, presuming that all the parameters were instantiated
+ // with downstream types. If not, then it could only be
+ // implemented by an upstream crate, which means that the impl
+ // must be visible to us, and -- since the trait is fundamental
+ // -- we can test.
+ orphan_check_trait_ref(tcx, trait_ref, ParamIsLocal(true)).is_err()
+}
+
+type SubstsFn = for<'a,'tcx> fn(infcx: &InferCtxt<'a, 'tcx>,
+ span: Span,
+ impl_def_id: ast::DefId)
+ -> Substs<'tcx>;
+
/// Instantiate fresh variables for all bound parameters of the impl
/// and return the impl trait ref with those variables substituted.
fn impl_trait_ref_and_oblig<'a,'tcx>(selcx: &mut SelectionContext<'a,'tcx>,
- impl_def_id: ast::DefId)
+ impl_def_id: ast::DefId,
+ substs_fn: SubstsFn)
-> (Rc<ty::TraitRef<'tcx>>,
Vec<PredicateObligation<'tcx>>)
{
let impl_substs =
- &util::fresh_substs_for_impl(selcx.infcx(), DUMMY_SP, impl_def_id);
+ &substs_fn(selcx.infcx(), DUMMY_SP, impl_def_id);
let impl_trait_ref =
ty::impl_trait_ref(selcx.tcx(), impl_def_id).unwrap();
let impl_trait_ref =
impl_def_id: ast::DefId)
-> Result<(), OrphanCheckErr<'tcx>>
{
- debug!("impl_is_local({})", impl_def_id.repr(tcx));
+ debug!("orphan_check({})", impl_def_id.repr(tcx));
// We only expect this routine to be invoked on implementations
// of a trait, not inherent implementations.
let trait_ref = ty::impl_trait_ref(tcx, impl_def_id).unwrap();
- debug!("trait_ref={}", trait_ref.repr(tcx));
+ debug!("orphan_check: trait_ref={}", trait_ref.repr(tcx));
// If the *trait* is local to the crate, ok.
if trait_ref.def_id.krate == ast::LOCAL_CRATE {
return Ok(());
}
+ orphan_check_trait_ref(tcx, &trait_ref, ParamIsLocal(false))
+}
+
+fn orphan_check_trait_ref<'tcx>(tcx: &ty::ctxt<'tcx>,
+ trait_ref: &ty::TraitRef<'tcx>,
+ param_is_local: ParamIsLocal)
+ -> Result<(), OrphanCheckErr<'tcx>>
+{
+ debug!("orphan_check_trait_ref(trait_ref={}, param_is_local={})",
+ trait_ref.repr(tcx), param_is_local.0);
+
// First, create an ordered iterator over all the type parameters to the trait, with the self
// type appearing first.
let input_tys = Some(trait_ref.self_ty());
let input_tys = input_tys.iter().chain(trait_ref.substs.types.get_slice(TypeSpace).iter());
- let mut input_tys = input_tys;
// Find the first input type that either references a type parameter OR
// some local type.
- match input_tys.find(|&&input_ty| references_local_or_type_parameter(tcx, input_ty)) {
- Some(&input_ty) => {
- // Within this first type, check that all type parameters are covered by a local
- // type constructor. Note that if there is no local type constructor, then any
- // type parameter at all will be an error.
- let covered_params = type_parameters_covered_by_ty(tcx, input_ty);
- let all_params = type_parameters_reachable_from_ty(input_ty);
- for &param in all_params.difference(&covered_params) {
- return Err(OrphanCheckErr::UncoveredTy(param));
+ for input_ty in input_tys {
+ if ty_is_local(tcx, input_ty, param_is_local) {
+ debug!("orphan_check_trait_ref: ty_is_local `{}`", input_ty.repr(tcx));
+
+ // First local input type. Check that there are no
+ // uncovered type parameters.
+ let uncovered_tys = uncovered_tys(tcx, input_ty, param_is_local);
+ for uncovered_ty in uncovered_tys {
+ if let Some(param) = uncovered_ty.walk().find(|t| is_type_parameter(t)) {
+ debug!("orphan_check_trait_ref: uncovered type `{}`", param.repr(tcx));
+ return Err(OrphanCheckErr::UncoveredTy(param));
+ }
}
+
+ // OK, found local type, all prior types upheld invariant.
+ return Ok(());
}
- None => {
- return Err(OrphanCheckErr::NoLocalInputType);
+
+ // Otherwise, enforce invariant that there are no type
+ // parameters reachable.
+ if !param_is_local.0 {
+ if let Some(param) = input_ty.walk().find(|t| is_type_parameter(t)) {
+ debug!("orphan_check_trait_ref: uncovered type `{}`", param.repr(tcx));
+ return Err(OrphanCheckErr::UncoveredTy(param));
+ }
}
}
- return Ok(());
+ // If we exit the above loop, we never found a local type.
+ debug!("orphan_check_trait_ref: no local type");
+ return Err(OrphanCheckErr::NoLocalInputType);
+}
+
+fn uncovered_tys<'tcx>(tcx: &ty::ctxt<'tcx>,
+ ty: Ty<'tcx>,
+ param_is_local: ParamIsLocal)
+ -> Vec<Ty<'tcx>>
+{
+ if ty_is_local_constructor(tcx, ty, param_is_local) {
+ vec![]
+ } else if fundamental_ty(tcx, ty) {
+ ty.walk_shallow()
+ .flat_map(|t| uncovered_tys(tcx, t, param_is_local).into_iter())
+ .collect()
+ } else {
+ vec![ty]
+ }
}
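The `uncovered_tys` recursion can be modeled on a toy type grammar (the `ToyTy` enum below is invented for illustration, not the compiler's representation): a local constructor covers everything beneath it, a fundamental constructor such as `&T` or `Box<T>` is looked through, and anything else is reported uncovered as a whole:

```rust
#[derive(Clone, Debug, PartialEq)]
enum ToyTy {
    Local(Vec<ToyTy>),       // struct/enum defined in this crate
    Fundamental(Box<ToyTy>), // e.g. &T or Box<T>
    Param(&'static str),     // a type parameter like T
    Foreign,                 // a type from another crate
}

// Sketch of uncovered_tys: local constructors cover their arguments,
// fundamental constructors are transparent, everything else leaks out.
fn uncovered_tys(ty: &ToyTy) -> Vec<ToyTy> {
    match ty {
        ToyTy::Local(_) => vec![],
        ToyTy::Fundamental(inner) => uncovered_tys(inner),
        other => vec![other.clone()],
    }
}

fn main() {
    // Local<T>: the local constructor covers T, so nothing is uncovered.
    let covered = ToyTy::Local(vec![ToyTy::Param("T")]);
    assert_eq!(uncovered_tys(&covered), Vec::<ToyTy>::new());

    // Box<T> (fundamental) is looked through, exposing the bare T,
    // which the orphan check would then flag as an uncovered parameter.
    let uncovered = ToyTy::Fundamental(Box::new(ToyTy::Param("T")));
    assert_eq!(uncovered_tys(&uncovered), vec![ToyTy::Param("T")]);
}
```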
-fn ty_is_local_constructor<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>) -> bool {
+fn is_type_parameter<'tcx>(ty: Ty<'tcx>) -> bool {
+ match ty.sty {
+ // FIXME(#20590) straighten story about projection types
+ ty::ty_projection(..) | ty::ty_param(..) => true,
+ _ => false,
+ }
+}
+
+fn ty_is_local<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>, param_is_local: ParamIsLocal) -> bool
+{
+ ty_is_local_constructor(tcx, ty, param_is_local) ||
+ fundamental_ty(tcx, ty) && ty.walk_shallow().any(|t| ty_is_local(tcx, t, param_is_local))
+}
+
+fn fundamental_ty<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>) -> bool
+{
+ match ty.sty {
+ ty::ty_uniq(..) | ty::ty_rptr(..) =>
+ true,
+ ty::ty_enum(def_id, _) | ty::ty_struct(def_id, _) =>
+ ty::has_attr(tcx, def_id, "fundamental"),
+ ty::ty_trait(ref data) =>
+ ty::has_attr(tcx, data.principal_def_id(), "fundamental"),
+ _ =>
+ false
+ }
+}
+
+fn ty_is_local_constructor<'tcx>(tcx: &ty::ctxt<'tcx>,
+ ty: Ty<'tcx>,
+ param_is_local: ParamIsLocal)
+ -> bool
+{
debug!("ty_is_local_constructor({})", ty.repr(tcx));
match ty.sty {
ty::ty_ptr(..) |
ty::ty_rptr(..) |
ty::ty_tup(..) |
- ty::ty_param(..) |
+ ty::ty_infer(..) |
ty::ty_projection(..) => {
false
}
+ ty::ty_param(..) => {
+ param_is_local.0
+ }
+
ty::ty_enum(def_id, _) |
ty::ty_struct(def_id, _) => {
def_id.krate == ast::LOCAL_CRATE
}
ty::ty_closure(..) |
- ty::ty_infer(..) |
ty::ty_err => {
tcx.sess.bug(
&format!("ty_is_local invoked on unexpected type: {}",
}
}
-fn type_parameters_covered_by_ty<'tcx>(tcx: &ty::ctxt<'tcx>,
- ty: Ty<'tcx>)
- -> HashSet<Ty<'tcx>>
-{
- if ty_is_local_constructor(tcx, ty) {
- type_parameters_reachable_from_ty(ty)
- } else {
- ty.walk_children().flat_map(|t| type_parameters_covered_by_ty(tcx, t).into_iter()).collect()
- }
-}
-
-/// All type parameters reachable from `ty`
-fn type_parameters_reachable_from_ty<'tcx>(ty: Ty<'tcx>) -> HashSet<Ty<'tcx>> {
- ty.walk().filter(|&t| is_type_parameter(t)).collect()
-}
-
-fn references_local_or_type_parameter<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>) -> bool {
- ty.walk().any(|ty| is_type_parameter(ty) || ty_is_local_constructor(tcx, ty))
-}
-fn is_type_parameter<'tcx>(ty: Ty<'tcx>) -> bool {
- match ty.sty {
- // FIXME(#20590) straighten story about projection types
- ty::ty_projection(..) | ty::ty_param(..) => true,
- _ => false,
- }
-}
obligation.repr(selcx.tcx()));
let infcx = selcx.infcx();
- infcx.try(|snapshot| {
+ infcx.commit_if_ok(|snapshot| {
let (skol_predicate, skol_map) =
infcx.skolemize_late_bound_regions(&obligation.predicate, snapshot);
}
}
+#[derive(Clone)]
pub struct Normalized<'tcx,T> {
pub value: T,
pub obligations: Vec<PredicateObligation<'tcx>>,
use self::BuiltinBoundConditions::*;
use self::EvaluationResult::*;
+use super::coherence;
use super::DerivedObligationCause;
use super::project;
use super::project::{normalize_with_depth, Normalized};
use middle::infer;
use middle::infer::{InferCtxt, TypeFreshener};
use middle::ty_fold::TypeFoldable;
+use middle::ty_match;
+use middle::ty_relate::TypeRelation;
use std::cell::RefCell;
-use std::collections::hash_map::HashMap;
use std::rc::Rc;
use syntax::{abi, ast};
use util::common::ErrorReported;
+use util::nodemap::FnvHashMap;
use util::ppaux::Repr;
pub struct SelectionContext<'cx, 'tcx:'cx> {
/// selection-context's freshener. Used to check for recursion.
fresh_trait_ref: ty::PolyTraitRef<'tcx>,
- previous: Option<&'prev TraitObligationStack<'prev, 'tcx>>
+ previous: TraitObligationStackList<'prev, 'tcx>,
}
#[derive(Clone)]
pub struct SelectionCache<'tcx> {
- hashmap: RefCell<HashMap<Rc<ty::TraitRef<'tcx>>,
- SelectionResult<'tcx, SelectionCandidate<'tcx>>>>,
+ hashmap: RefCell<FnvHashMap<Rc<ty::TraitRef<'tcx>>,
+ SelectionResult<'tcx, SelectionCandidate<'tcx>>>>,
}
pub enum MethodMatchResult {
debug!("select({})", obligation.repr(self.tcx()));
assert!(!obligation.predicate.has_escaping_regions());
- let stack = self.push_stack(None, obligation);
+ let stack = self.push_stack(TraitObligationStackList::empty(), obligation);
match try!(self.candidate_from_obligation(&stack)) {
None => {
self.consider_unification_despite_ambiguity(obligation);
debug!("evaluate_obligation({})",
obligation.repr(self.tcx()));
- self.evaluate_predicate_recursively(None, obligation).may_apply()
+ self.evaluate_predicate_recursively(TraitObligationStackList::empty(), obligation)
+ .may_apply()
}
fn evaluate_builtin_bound_recursively<'o>(&mut self,
match obligation {
Ok(obligation) => {
- self.evaluate_predicate_recursively(Some(previous_stack), &obligation)
+ self.evaluate_predicate_recursively(previous_stack.list(), &obligation)
}
Err(ErrorReported) => {
EvaluatedToOk
}
fn evaluate_predicates_recursively<'a,'o,I>(&mut self,
- stack: Option<&TraitObligationStack<'o, 'tcx>>,
+ stack: TraitObligationStackList<'o, 'tcx>,
predicates: I)
-> EvaluationResult<'tcx>
where I : Iterator<Item=&'a PredicateObligation<'tcx>>, 'tcx:'a
}
fn evaluate_predicate_recursively<'o>(&mut self,
- previous_stack: Option<&TraitObligationStack<'o, 'tcx>>,
+ previous_stack: TraitObligationStackList<'o, 'tcx>,
obligation: &PredicateObligation<'tcx>)
-> EvaluationResult<'tcx>
{
}
fn evaluate_obligation_recursively<'o>(&mut self,
- previous_stack: Option<&TraitObligationStack<'o, 'tcx>>,
+ previous_stack: TraitObligationStackList<'o, 'tcx>,
obligation: &TraitObligation<'tcx>)
-> EvaluationResult<'tcx>
{
debug!("evaluate_obligation_recursively({})",
obligation.repr(self.tcx()));
- let stack = self.push_stack(previous_stack.map(|x| x), obligation);
+ let stack = self.push_stack(previous_stack, obligation);
let result = self.evaluate_stack(&stack);
unbound_input_types &&
(self.intercrate ||
stack.iter().skip(1).any(
- |prev| stack.fresh_trait_ref.def_id() == prev.fresh_trait_ref.def_id()))
+ |prev| self.match_fresh_trait_refs(&stack.fresh_trait_ref,
+ &prev.fresh_trait_ref)))
{
debug!("evaluate_stack({}) --> unbound argument, recursion --> ambiguous",
stack.fresh_trait_ref.repr(self.tcx()));
obligation.recursion_depth + 1,
skol_map,
snapshot);
- self.winnow_selection(None, VtableImpl(vtable_impl)).may_apply()
+ self.winnow_selection(TraitObligationStackList::empty(),
+ VtableImpl(vtable_impl)).may_apply()
}
Err(()) => {
false
return Ok(Some(ErrorCandidate));
}
+ if !self.is_knowable(stack) {
+ debug!("intercrate not knowable");
+ return Ok(None);
+ }
+
let candidate_set = try!(self.assemble_candidates(stack));
if candidate_set.ambiguous {
Ok(Some(candidate))
}
+ fn is_knowable<'o>(&mut self,
+ stack: &TraitObligationStack<'o, 'tcx>)
+ -> bool
+ {
+ debug!("is_knowable(intercrate={})", self.intercrate);
+
+ if !self.intercrate {
+ return true;
+ }
+
+ let obligation = &stack.obligation;
+ let predicate = self.infcx().resolve_type_vars_if_possible(&obligation.predicate);
+
+ // ok to skip binder because of the nature of the
+ // trait-ref-is-knowable check, which does not care about
+ // bound regions
+ let trait_ref = &predicate.skip_binder().trait_ref;
+
+ coherence::trait_ref_is_knowable(self.tcx(), trait_ref)
+ }
+
fn pick_candidate_cache(&self) -> &SelectionCache<'tcx> {
// If there are any where-clauses in scope, then we always use
// a cache local to this particular scope. Otherwise, we
self.infcx().probe(move |_| {
match self.match_where_clause_trait_ref(stack.obligation, where_clause_trait_ref) {
Ok(obligations) => {
- self.evaluate_predicates_recursively(Some(stack), obligations.iter())
+ self.evaluate_predicates_recursively(stack.list(), obligations.iter())
}
Err(()) => {
EvaluatedToErr(Unimplemented)
return;
}
- self.infcx.try(|snapshot| {
+ self.infcx.commit_if_ok(|snapshot| {
let bound_self_ty =
self.infcx.resolve_type_vars_if_possible(&obligation.self_ty());
let (self_ty, _) =
let result = self.infcx.probe(|_| {
let candidate = (*candidate).clone();
match self.confirm_candidate(stack.obligation, candidate) {
- Ok(selection) => self.winnow_selection(Some(stack), selection),
+ Ok(selection) => self.winnow_selection(stack.list(),
+ selection),
Err(error) => EvaluatedToErr(error),
}
});
}
fn winnow_selection<'o>(&mut self,
- stack: Option<&TraitObligationStack<'o, 'tcx>>,
+ stack: TraitObligationStackList<'o,'tcx>,
selection: Selection<'tcx>)
-> EvaluationResult<'tcx>
{
// For each type, produce a vector of resulting obligations
let obligations: Result<Vec<Vec<_>>, _> = bound_types.iter().map(|nested_ty| {
- self.infcx.try(|snapshot| {
+ self.infcx.commit_if_ok(|snapshot| {
let (skol_ty, skol_map) =
self.infcx().skolemize_late_bound_regions(nested_ty, snapshot);
let Normalized { value: normalized_ty, mut obligations } =
obligation: &TraitObligation<'tcx>)
{
let _: Result<(),()> =
- self.infcx.try(|snapshot| {
+ self.infcx.commit_if_ok(|snapshot| {
let result =
self.match_projection_obligation_against_bounds_from_trait(obligation,
snapshot);
trait_def_id,
nested);
- let trait_obligations: Result<VecPerParamSpace<_>,()> = self.infcx.try(|snapshot| {
+ let trait_obligations: Result<VecPerParamSpace<_>,()> = self.infcx.commit_if_ok(|snapshot| {
let poly_trait_ref = obligation.predicate.to_poly_trait_ref();
let (trait_ref, skol_map) =
self.infcx().skolemize_late_bound_regions(&poly_trait_ref, snapshot);
// First, create the substitutions by matching the impl again,
// this time not in a probe.
- self.infcx.try(|snapshot| {
+ self.infcx.commit_if_ok(|snapshot| {
let (skol_obligation_trait_ref, skol_map) =
self.infcx().skolemize_late_bound_regions(&obligation.predicate, snapshot);
let substs =
return Err(());
}
- let impl_substs = util::fresh_substs_for_impl(self.infcx,
- obligation.cause.span,
- impl_def_id);
+ let impl_substs = util::fresh_type_vars_for_impl(self.infcx,
+ obligation.cause.span,
+ impl_def_id);
let impl_trait_ref = impl_trait_ref.subst(self.tcx(),
&impl_substs);
{
// Create fresh type variables for each type parameter declared
// on the impl etc.
- let impl_substs = util::fresh_substs_for_impl(self.infcx,
- obligation_cause.span,
- impl_def_id);
+ let impl_substs = util::fresh_type_vars_for_impl(self.infcx,
+ obligation_cause.span,
+ impl_def_id);
// Find the self type for the impl.
let impl_self_ty = ty::lookup_item_type(self.tcx(), impl_def_id).ty;
///////////////////////////////////////////////////////////////////////////
// Miscellany
+ fn match_fresh_trait_refs(&self,
+ previous: &ty::PolyTraitRef<'tcx>,
+ current: &ty::PolyTraitRef<'tcx>)
+ -> bool
+ {
+ let mut matcher = ty_match::Match::new(self.tcx());
+ matcher.relate(previous, current).is_ok()
+ }
+
fn push_stack<'o,'s:'o>(&mut self,
- previous_stack: Option<&'s TraitObligationStack<'s, 'tcx>>,
+ previous_stack: TraitObligationStackList<'s, 'tcx>,
obligation: &'o TraitObligation<'tcx>)
-> TraitObligationStack<'o, 'tcx>
{
TraitObligationStack {
obligation: obligation,
fresh_trait_ref: fresh_trait_ref,
- previous: previous_stack.map(|p| p), // FIXME variance
+ previous: previous_stack,
}
}
impl<'tcx> SelectionCache<'tcx> {
pub fn new() -> SelectionCache<'tcx> {
SelectionCache {
- hashmap: RefCell::new(HashMap::new())
+ hashmap: RefCell::new(FnvHashMap())
}
}
}
-impl<'o, 'tcx> TraitObligationStack<'o, 'tcx> {
- fn iter(&self) -> Option<&TraitObligationStack<'o, 'tcx>> {
- Some(self)
+impl<'o,'tcx> TraitObligationStack<'o,'tcx> {
+ fn list(&'o self) -> TraitObligationStackList<'o,'tcx> {
+ TraitObligationStackList::with(self)
}
+
+ fn iter(&'o self) -> TraitObligationStackList<'o,'tcx> {
+ self.list()
+ }
+}
+
+#[derive(Copy, Clone)]
+struct TraitObligationStackList<'o,'tcx:'o> {
+ head: Option<&'o TraitObligationStack<'o,'tcx>>
}
-impl<'o, 'tcx> Iterator for Option<&'o TraitObligationStack<'o, 'tcx>> {
+impl<'o,'tcx> TraitObligationStackList<'o,'tcx> {
+ fn empty() -> TraitObligationStackList<'o,'tcx> {
+ TraitObligationStackList { head: None }
+ }
+
+ fn with(r: &'o TraitObligationStack<'o,'tcx>) -> TraitObligationStackList<'o,'tcx> {
+ TraitObligationStackList { head: Some(r) }
+ }
+}
+
+impl<'o,'tcx> Iterator for TraitObligationStackList<'o,'tcx>{
type Item = &'o TraitObligationStack<'o,'tcx>;
- fn next(&mut self) -> Option<&'o TraitObligationStack<'o, 'tcx>> {
- match *self {
+ fn next(&mut self) -> Option<&'o TraitObligationStack<'o,'tcx>> {
+ match self.head {
Some(o) => {
*self = o.previous;
Some(o)
}
}
-impl<'o, 'tcx> Repr<'tcx> for TraitObligationStack<'o, 'tcx> {
+impl<'o,'tcx> Repr<'tcx> for TraitObligationStack<'o,'tcx> {
fn repr(&self, tcx: &ty::ctxt<'tcx>) -> String {
format!("TraitObligationStack({})",
self.obligation.repr(tcx))
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+use middle::region;
use middle::subst::{Substs, VecPerParamSpace};
use middle::infer::InferCtxt;
use middle::ty::{self, Ty, AsPredicate, ToPolyTraitRef};
}
}
-
///////////////////////////////////////////////////////////////////////////
// Other
///////////////////////////////////////////////////////////////////////////
// declared on the impl declaration e.g., `impl<A,B> for Box<[(A,B)]>`
// would return ($0, $1) where $0 and $1 are freshly instantiated type
// variables.
-pub fn fresh_substs_for_impl<'a, 'tcx>(infcx: &InferCtxt<'a, 'tcx>,
- span: Span,
- impl_def_id: ast::DefId)
- -> Substs<'tcx>
+pub fn fresh_type_vars_for_impl<'a, 'tcx>(infcx: &InferCtxt<'a, 'tcx>,
+ span: Span,
+ impl_def_id: ast::DefId)
+ -> Substs<'tcx>
{
let tcx = infcx.tcx;
let impl_generics = ty::lookup_item_type(tcx, impl_def_id).generics;
infcx.fresh_substs_for_generics(span, &impl_generics)
}
+// Determine the `self` type, using free-form parameter types and free
+// regions (rather than fresh inference variables) for all variables
+// declared on the impl; e.g., `impl<A,B> for Box<[(A,B)]>` would return
+// (A, B) as free parameters.
+pub fn free_substs_for_impl<'a, 'tcx>(infcx: &InferCtxt<'a, 'tcx>,
+ _span: Span,
+ impl_def_id: ast::DefId)
+ -> Substs<'tcx>
+{
+ let tcx = infcx.tcx;
+ let impl_generics = ty::lookup_item_type(tcx, impl_def_id).generics;
+
+ let some_types = impl_generics.types.map(|def| {
+ ty::mk_param_from_def(tcx, def)
+ });
+
+ let some_regions = impl_generics.regions.map(|def| {
+ // FIXME. This destruction scope information is pretty darn
+ // bogus; after all, the impl might not even be in this crate!
+ // But given what we do in coherence, it is harmless enough
+ // for now I think. -nmatsakis
+ let extent = region::DestructionScopeData::new(ast::DUMMY_NODE_ID);
+ ty::free_region_from_def(extent, def)
+ });
+
+ Substs::new(some_types, some_regions)
+}
+
impl<'tcx, N> fmt::Debug for VtableImplData<'tcx, N> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "VtableImpl({:?})", self.impl_def_id)
use middle::traits;
use middle::ty;
use middle::ty_fold::{self, TypeFoldable, TypeFolder};
-use middle::ty_walk::TypeWalker;
+use middle::ty_walk::{self, TypeWalker};
use util::ppaux::{note_and_explain_region, bound_region_ptr_to_string};
use util::ppaux::ty_to_string;
use util::ppaux::{Repr, UserString};
use syntax::parse::token::{self, InternedString, special_idents};
use syntax::print::pprust;
use syntax::ptr::P;
-use syntax::{ast, ast_map};
+use syntax::ast;
+use syntax::ast_map::{self, LinkedPath};
pub type Disr = u64;
TypeWalker::new(self)
}
- /// Iterator that walks types reachable from `self`, in
- /// depth-first order. Note that this is a shallow walk. For
- /// example:
- ///
- /// ```notrust
- /// isize => { }
- /// Foo<Bar<isize>> => { Bar<isize>, isize }
- /// [isize] => { isize }
- /// ```
- pub fn walk_children(&'tcx self) -> TypeWalker<'tcx> {
- // Walks type reachable from `self` but not `self
- let mut walker = self.walk();
- let r = walker.next();
- assert_eq!(r, Some(self));
- walker
+ /// Iterator that walks the immediate children of `self`. Hence
+ /// `Foo<Bar<i32>, u32>` yields the sequence `[Bar<i32>, u32]`
+ /// (unlike `walk`, which would also yield `i32`).
+ pub fn walk_shallow(&'tcx self) -> IntoIter<Ty<'tcx>> {
+ ty_walk::walk_shallow(self)
}
pub fn as_opt_param_ty(&self) -> Option<ty::ParamTy> {
if id.krate == ast::LOCAL_CRATE {
cx.map.with_path(id.node, f)
} else {
- f(csearch::get_item_path(cx, id).iter().cloned().chain(None))
+ f(csearch::get_item_path(cx, id).iter().cloned().chain(LinkedPath::empty()))
}
}
use middle::ty::{self, Ty};
use middle::traits;
use std::rc::Rc;
+use syntax::abi;
+use syntax::ast;
use syntax::owned_slice::OwnedSlice;
use util::ppaux::Repr;
/// The TypeFoldable trait is implemented for every type that can be folded.
/// Basically, every type that has a corresponding method in TypeFolder.
-pub trait TypeFoldable<'tcx> {
+pub trait TypeFoldable<'tcx>: Repr<'tcx> + Clone {
fn fold_with<F: TypeFolder<'tcx>>(&self, folder: &mut F) -> Self;
}
// can easily refactor the folding into the TypeFolder trait as
// needed.
-impl<'tcx> TypeFoldable<'tcx> for () {
- fn fold_with<F:TypeFolder<'tcx>>(&self, _: &mut F) -> () {
- ()
+macro_rules! CopyImpls {
+ ($($ty:ty),+) => {
+ $(
+ impl<'tcx> TypeFoldable<'tcx> for $ty {
+ fn fold_with<F:TypeFolder<'tcx>>(&self, _: &mut F) -> $ty {
+ *self
+ }
+ }
+ )+
}
}
+CopyImpls! { (), ast::Unsafety, abi::Abi }
+
impl<'tcx, T:TypeFoldable<'tcx>, U:TypeFoldable<'tcx>> TypeFoldable<'tcx> for (T, U) {
fn fold_with<F:TypeFolder<'tcx>>(&self, folder: &mut F) -> (T, U) {
(self.0.fold_with(folder), self.1.fold_with(folder))
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use middle::ty::{self, Ty};
+use middle::ty_relate::{self, Relate, TypeRelation, RelateResult};
+use util::ppaux::Repr;
+
+/// A type "A" *matches* "B" if the fresh types in B could be
+/// substituted with values so as to make it equal to A. Matching is
+/// intended to be used only on freshened types, and it basically
+/// indicates if the non-freshened versions of A and B could have been
+/// unified.
+///
+/// It is only an approximation. If it yields false, unification would
+/// definitely fail, but a true result doesn't mean unification would
+/// succeed. This is because we don't track the "side-constraints" on
+/// type variables, nor do we track if the same freshened type appears
+/// more than once. To some extent these approximations could be
+/// fixed, given effort.
+///
+/// Like subtyping, matching is really a binary relation, so the only
+/// important thing about the result is Ok/Err. Also, matching never
+/// affects any type variables or unification state.
+pub struct Match<'a, 'tcx: 'a> {
+ tcx: &'a ty::ctxt<'tcx>
+}
+
+impl<'a, 'tcx> Match<'a, 'tcx> {
+ pub fn new(tcx: &'a ty::ctxt<'tcx>) -> Match<'a, 'tcx> {
+ Match { tcx: tcx }
+ }
+}
+
+impl<'a, 'tcx> TypeRelation<'a, 'tcx> for Match<'a, 'tcx> {
+ fn tag(&self) -> &'static str { "Match" }
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.tcx }
+ fn a_is_expected(&self) -> bool { true } // irrelevant
+
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ _: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>
+ {
+ self.relate(a, b)
+ }
+
+ fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
+ debug!("{}.regions({}, {})",
+ self.tag(),
+ a.repr(self.tcx()),
+ b.repr(self.tcx()));
+ Ok(a)
+ }
+
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
+ debug!("{}.tys({}, {})", self.tag(),
+ a.repr(self.tcx()), b.repr(self.tcx()));
+ if a == b { return Ok(a); }
+
+ match (&a.sty, &b.sty) {
+ (_, &ty::ty_infer(ty::FreshTy(_))) |
+ (_, &ty::ty_infer(ty::FreshIntTy(_))) => {
+ Ok(a)
+ }
+
+ (&ty::ty_infer(_), _) |
+ (_, &ty::ty_infer(_)) => {
+ Err(ty::terr_sorts(ty_relate::expected_found(self, &a, &b)))
+ }
+
+ (&ty::ty_err, _) | (_, &ty::ty_err) => {
+ Ok(self.tcx().types.err)
+ }
+
+ _ => {
+ ty_relate::super_relate_tys(self, a, b)
+ }
+ }
+ }
+
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a,'tcx>
+ {
+ Ok(ty::Binder(try!(self.relate(a.skip_binder(), b.skip_binder()))))
+ }
+}
--- /dev/null
+// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! Generalized type relating mechanism. A type relation R relates a
+//! pair of values (A, B). A and B are usually types or regions but
+//! can be other things. Examples of type relations are subtyping,
+//! type equality, etc.
+
+use middle::subst::{ErasedRegions, NonerasedRegions, ParamSpace, Substs};
+use middle::ty::{self, Ty};
+use middle::ty_fold::TypeFoldable;
+use std::rc::Rc;
+use syntax::abi;
+use syntax::ast;
+use util::ppaux::Repr;
+
+pub type RelateResult<'tcx, T> = Result<T, ty::type_err<'tcx>>;
+
+pub trait TypeRelation<'a,'tcx> : Sized {
+ fn tcx(&self) -> &'a ty::ctxt<'tcx>;
+
+ /// Returns a static string we can use for printouts.
+ fn tag(&self) -> &'static str;
+
+ /// Returns true if the value `a` is the "expected" type in the
+ /// relation. Just affects error messages.
+ fn a_is_expected(&self) -> bool;
+
+ /// Generic relation routine suitable for most anything.
+ fn relate<T:Relate<'a,'tcx>>(&mut self, a: &T, b: &T) -> RelateResult<'tcx, T> {
+ Relate::relate(self, a, b)
+ }
+
+ /// Switch variance for the purpose of relating `a` and `b`.
+ fn relate_with_variance<T:Relate<'a,'tcx>>(&mut self,
+ variance: ty::Variance,
+ a: &T,
+ b: &T)
+ -> RelateResult<'tcx, T>;
+
+ // Overrideable relations. You shouldn't typically call these
+ // directly, instead call `relate()`, which in turn calls
+ // these. This is both more uniform but also allows us to add
+ // additional hooks for other types in the future if needed
+ // without making older code, which called `relate`, obsolete.
+
+ fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>)
+ -> RelateResult<'tcx, Ty<'tcx>>;
+
+ fn regions(&mut self, a: ty::Region, b: ty::Region)
+ -> RelateResult<'tcx, ty::Region>;
+
+ fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where T: Relate<'a,'tcx>;
+}
+
+pub trait Relate<'a,'tcx>: TypeFoldable<'tcx> {
+ fn relate<R:TypeRelation<'a,'tcx>>(relation: &mut R,
+ a: &Self,
+ b: &Self)
+ -> RelateResult<'tcx, Self>;
+}
+
+///////////////////////////////////////////////////////////////////////////
+// Relate impls
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::mt<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::mt<'tcx>,
+ b: &ty::mt<'tcx>)
+ -> RelateResult<'tcx, ty::mt<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ debug!("{}.mts({}, {})",
+ relation.tag(),
+ a.repr(relation.tcx()),
+ b.repr(relation.tcx()));
+ if a.mutbl != b.mutbl {
+ Err(ty::terr_mutability)
+ } else {
+ let mutbl = a.mutbl;
+ let variance = match mutbl {
+ ast::MutImmutable => ty::Covariant,
+ ast::MutMutable => ty::Invariant,
+ };
+ let ty = try!(relation.relate_with_variance(variance, &a.ty, &b.ty));
+ Ok(ty::mt {ty: ty, mutbl: mutbl})
+ }
+ }
+}
+
+// Substitutions are not themselves relatable without more context,
+// but relating them is an important subroutine for things that ARE
+// relatable, like traits etc.
+fn relate_item_substs<'a,'tcx:'a,R>(relation: &mut R,
+ item_def_id: ast::DefId,
+ a_subst: &Substs<'tcx>,
+ b_subst: &Substs<'tcx>)
+ -> RelateResult<'tcx, Substs<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+{
+ debug!("substs: item_def_id={} a_subst={} b_subst={}",
+ item_def_id.repr(relation.tcx()),
+ a_subst.repr(relation.tcx()),
+ b_subst.repr(relation.tcx()));
+
+ let variances;
+ let opt_variances = if relation.tcx().variance_computed.get() {
+ variances = ty::item_variances(relation.tcx(), item_def_id);
+ Some(&*variances)
+ } else {
+ None
+ };
+ relate_substs(relation, opt_variances, a_subst, b_subst)
+}
+
+fn relate_substs<'a,'tcx,R>(relation: &mut R,
+ variances: Option<&ty::ItemVariances>,
+ a_subst: &Substs<'tcx>,
+ b_subst: &Substs<'tcx>)
+ -> RelateResult<'tcx, Substs<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+{
+ let mut substs = Substs::empty();
+
+ for &space in &ParamSpace::all() {
+ let a_tps = a_subst.types.get_slice(space);
+ let b_tps = b_subst.types.get_slice(space);
+ let t_variances = variances.map(|v| v.types.get_slice(space));
+ let tps = try!(relate_type_params(relation, t_variances, a_tps, b_tps));
+ substs.types.replace(space, tps);
+ }
+
+ match (&a_subst.regions, &b_subst.regions) {
+ (&ErasedRegions, _) | (_, &ErasedRegions) => {
+ substs.regions = ErasedRegions;
+ }
+
+ (&NonerasedRegions(ref a), &NonerasedRegions(ref b)) => {
+ for &space in &ParamSpace::all() {
+ let a_regions = a.get_slice(space);
+ let b_regions = b.get_slice(space);
+ let r_variances = variances.map(|v| v.regions.get_slice(space));
+ let regions = try!(relate_region_params(relation,
+ r_variances,
+ a_regions,
+ b_regions));
+ substs.mut_regions().replace(space, regions);
+ }
+ }
+ }
+
+ Ok(substs)
+}
+
+fn relate_type_params<'a,'tcx,R>(relation: &mut R,
+ variances: Option<&[ty::Variance]>,
+ a_tys: &[Ty<'tcx>],
+ b_tys: &[Ty<'tcx>])
+ -> RelateResult<'tcx, Vec<Ty<'tcx>>>
+ where R: TypeRelation<'a,'tcx>
+{
+ if a_tys.len() != b_tys.len() {
+ return Err(ty::terr_ty_param_size(expected_found(relation,
+ &a_tys.len(),
+ &b_tys.len())));
+ }
+
+ (0 .. a_tys.len())
+ .map(|i| {
+ let a_ty = a_tys[i];
+ let b_ty = b_tys[i];
+ let v = variances.map_or(ty::Invariant, |v| v[i]);
+ relation.relate_with_variance(v, &a_ty, &b_ty)
+ })
+ .collect()
+}
+
+fn relate_region_params<'a,'tcx:'a,R>(relation: &mut R,
+ variances: Option<&[ty::Variance]>,
+ a_rs: &[ty::Region],
+ b_rs: &[ty::Region])
+ -> RelateResult<'tcx, Vec<ty::Region>>
+ where R: TypeRelation<'a,'tcx>
+{
+ let tcx = relation.tcx();
+ let num_region_params = a_rs.len();
+
+ debug!("relate_region_params(a_rs={}, \
+ b_rs={}, variances={})",
+ a_rs.repr(tcx),
+ b_rs.repr(tcx),
+ variances.repr(tcx));
+
+ assert_eq!(num_region_params,
+ variances.map_or(num_region_params,
+ |v| v.len()));
+
+ assert_eq!(num_region_params, b_rs.len());
+
+ (0..a_rs.len())
+ .map(|i| {
+ let a_r = a_rs[i];
+ let b_r = b_rs[i];
+ let variance = variances.map_or(ty::Invariant, |v| v[i]);
+ relation.relate_with_variance(variance, &a_r, &b_r)
+ })
+ .collect()
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::BareFnTy<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::BareFnTy<'tcx>,
+ b: &ty::BareFnTy<'tcx>)
+ -> RelateResult<'tcx, ty::BareFnTy<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ let unsafety = try!(relation.relate(&a.unsafety, &b.unsafety));
+ let abi = try!(relation.relate(&a.abi, &b.abi));
+ let sig = try!(relation.relate(&a.sig, &b.sig));
+ Ok(ty::BareFnTy {unsafety: unsafety,
+ abi: abi,
+ sig: sig})
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::FnSig<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::FnSig<'tcx>,
+ b: &ty::FnSig<'tcx>)
+ -> RelateResult<'tcx, ty::FnSig<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ if a.variadic != b.variadic {
+ return Err(ty::terr_variadic_mismatch(
+ expected_found(relation, &a.variadic, &b.variadic)));
+ }
+
+ let inputs = try!(relate_arg_vecs(relation,
+ &a.inputs,
+ &b.inputs));
+
+ let output = try!(match (a.output, b.output) {
+ (ty::FnConverging(a_ty), ty::FnConverging(b_ty)) =>
+ Ok(ty::FnConverging(try!(relation.relate(&a_ty, &b_ty)))),
+ (ty::FnDiverging, ty::FnDiverging) =>
+ Ok(ty::FnDiverging),
+ (a, b) =>
+ Err(ty::terr_convergence_mismatch(
+ expected_found(relation, &(a != ty::FnDiverging), &(b != ty::FnDiverging)))),
+ });
+
+ return Ok(ty::FnSig {inputs: inputs,
+ output: output,
+ variadic: a.variadic});
+ }
+}
+
+fn relate_arg_vecs<'a,'tcx,R>(relation: &mut R,
+ a_args: &[Ty<'tcx>],
+ b_args: &[Ty<'tcx>])
+ -> RelateResult<'tcx, Vec<Ty<'tcx>>>
+ where R: TypeRelation<'a,'tcx>
+{
+ if a_args.len() != b_args.len() {
+ return Err(ty::terr_arg_count);
+ }
+
+ a_args.iter()
+ .zip(b_args.iter())
+ .map(|(a, b)| relation.relate_with_variance(ty::Contravariant, a, b))
+ .collect()
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ast::Unsafety {
+ fn relate<R>(relation: &mut R,
+ a: &ast::Unsafety,
+ b: &ast::Unsafety)
+ -> RelateResult<'tcx, ast::Unsafety>
+ where R: TypeRelation<'a,'tcx>
+ {
+ if a != b {
+ Err(ty::terr_unsafety_mismatch(expected_found(relation, a, b)))
+ } else {
+ Ok(*a)
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for abi::Abi {
+ fn relate<R>(relation: &mut R,
+ a: &abi::Abi,
+ b: &abi::Abi)
+ -> RelateResult<'tcx, abi::Abi>
+ where R: TypeRelation<'a,'tcx>
+ {
+ if a == b {
+ Ok(*a)
+ } else {
+ Err(ty::terr_abi_mismatch(expected_found(relation, a, b)))
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::ProjectionTy<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::ProjectionTy<'tcx>,
+ b: &ty::ProjectionTy<'tcx>)
+ -> RelateResult<'tcx, ty::ProjectionTy<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ if a.item_name != b.item_name {
+ Err(ty::terr_projection_name_mismatched(
+ expected_found(relation, &a.item_name, &b.item_name)))
+ } else {
+ let trait_ref = try!(relation.relate(&*a.trait_ref, &*b.trait_ref));
+ Ok(ty::ProjectionTy { trait_ref: Rc::new(trait_ref), item_name: a.item_name })
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::ProjectionPredicate<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::ProjectionPredicate<'tcx>,
+ b: &ty::ProjectionPredicate<'tcx>)
+ -> RelateResult<'tcx, ty::ProjectionPredicate<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ let projection_ty = try!(relation.relate(&a.projection_ty, &b.projection_ty));
+ let ty = try!(relation.relate(&a.ty, &b.ty));
+ Ok(ty::ProjectionPredicate { projection_ty: projection_ty, ty: ty })
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for Vec<ty::PolyProjectionPredicate<'tcx>> {
+ fn relate<R>(relation: &mut R,
+ a: &Vec<ty::PolyProjectionPredicate<'tcx>>,
+ b: &Vec<ty::PolyProjectionPredicate<'tcx>>)
+ -> RelateResult<'tcx, Vec<ty::PolyProjectionPredicate<'tcx>>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ // To be compatible, `a` and `b` must be for precisely the
+ // same set of traits and item names. We always require that
+ // projection bounds lists are sorted by trait-def-id and item-name,
+ // so we can just iterate through the lists pairwise, so long as they are the
+ // same length.
+ if a.len() != b.len() {
+ Err(ty::terr_projection_bounds_length(expected_found(relation, &a.len(), &b.len())))
+ } else {
+ a.iter()
+ .zip(b.iter())
+ .map(|(a, b)| relation.relate(a, b))
+ .collect()
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::ExistentialBounds<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::ExistentialBounds<'tcx>,
+ b: &ty::ExistentialBounds<'tcx>)
+ -> RelateResult<'tcx, ty::ExistentialBounds<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ let r = try!(relation.relate_with_variance(ty::Contravariant,
+ &a.region_bound,
+ &b.region_bound));
+ let nb = try!(relation.relate(&a.builtin_bounds, &b.builtin_bounds));
+ let pb = try!(relation.relate(&a.projection_bounds, &b.projection_bounds));
+ Ok(ty::ExistentialBounds { region_bound: r,
+ builtin_bounds: nb,
+ projection_bounds: pb })
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::BuiltinBounds {
+ fn relate<R>(relation: &mut R,
+ a: &ty::BuiltinBounds,
+ b: &ty::BuiltinBounds)
+ -> RelateResult<'tcx, ty::BuiltinBounds>
+ where R: TypeRelation<'a,'tcx>
+ {
+ // Two sets of builtin bounds are only relatable if they are
+ // precisely the same (but see the coercion code).
+ if a != b {
+ Err(ty::terr_builtin_bounds(expected_found(relation, a, b)))
+ } else {
+ Ok(*a)
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::TraitRef<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &ty::TraitRef<'tcx>,
+ b: &ty::TraitRef<'tcx>)
+ -> RelateResult<'tcx, ty::TraitRef<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ // Different traits cannot be related
+ if a.def_id != b.def_id {
+ Err(ty::terr_traits(expected_found(relation, &a.def_id, &b.def_id)))
+ } else {
+ let substs = try!(relate_item_substs(relation, a.def_id, a.substs, b.substs));
+ Ok(ty::TraitRef { def_id: a.def_id, substs: relation.tcx().mk_substs(substs) })
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for Ty<'tcx> {
+ fn relate<R>(relation: &mut R,
+ a: &Ty<'tcx>,
+ b: &Ty<'tcx>)
+ -> RelateResult<'tcx, Ty<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ relation.tys(a, b)
+ }
+}
+
+/// The main "type relation" routine. Note that this does not handle
+/// inference artifacts, so you should filter those out before calling
+/// it.
+pub fn super_relate_tys<'a,'tcx:'a,R>(relation: &mut R,
+ a: Ty<'tcx>,
+ b: Ty<'tcx>)
+ -> RelateResult<'tcx, Ty<'tcx>>
+ where R: TypeRelation<'a,'tcx>
+{
+ let tcx = relation.tcx();
+ let a_sty = &a.sty;
+ let b_sty = &b.sty;
+ debug!("super_tys: a_sty={:?} b_sty={:?}", a_sty, b_sty);
+ match (a_sty, b_sty) {
+ (&ty::ty_infer(_), _) |
+ (_, &ty::ty_infer(_)) =>
+ {
+ // The caller should handle these cases!
+ tcx.sess.bug("var types encountered in super_relate_tys")
+ }
+
+ (&ty::ty_err, _) | (_, &ty::ty_err) =>
+ {
+ Ok(tcx.types.err)
+ }
+
+ (&ty::ty_char, _) |
+ (&ty::ty_bool, _) |
+ (&ty::ty_int(_), _) |
+ (&ty::ty_uint(_), _) |
+ (&ty::ty_float(_), _) |
+ (&ty::ty_str, _)
+ if a == b =>
+ {
+ Ok(a)
+ }
+
+ (&ty::ty_param(ref a_p), &ty::ty_param(ref b_p))
+ if a_p.idx == b_p.idx && a_p.space == b_p.space =>
+ {
+ Ok(a)
+ }
+
+ (&ty::ty_enum(a_id, a_substs), &ty::ty_enum(b_id, b_substs))
+ if a_id == b_id =>
+ {
+ let substs = try!(relate_item_substs(relation, a_id, a_substs, b_substs));
+ Ok(ty::mk_enum(tcx, a_id, tcx.mk_substs(substs)))
+ }
+
+ (&ty::ty_trait(ref a_), &ty::ty_trait(ref b_)) =>
+ {
+ let principal = try!(relation.relate(&a_.principal, &b_.principal));
+ let bounds = try!(relation.relate(&a_.bounds, &b_.bounds));
+ Ok(ty::mk_trait(tcx, principal, bounds))
+ }
+
+ (&ty::ty_struct(a_id, a_substs), &ty::ty_struct(b_id, b_substs))
+ if a_id == b_id =>
+ {
+ let substs = try!(relate_item_substs(relation, a_id, a_substs, b_substs));
+ Ok(ty::mk_struct(tcx, a_id, tcx.mk_substs(substs)))
+ }
+
+ (&ty::ty_closure(a_id, a_substs),
+ &ty::ty_closure(b_id, b_substs))
+ if a_id == b_id =>
+ {
+ // All ty_closure types with the same id represent
+ // the (anonymous) type of the same closure expression. So
+ // all of their regions should be equated.
+ let substs = try!(relate_substs(relation, None, a_substs, b_substs));
+ Ok(ty::mk_closure(tcx, a_id, tcx.mk_substs(substs)))
+ }
+
+ (&ty::ty_uniq(a_inner), &ty::ty_uniq(b_inner)) =>
+ {
+ let typ = try!(relation.relate(&a_inner, &b_inner));
+ Ok(ty::mk_uniq(tcx, typ))
+ }
+
+ (&ty::ty_ptr(ref a_mt), &ty::ty_ptr(ref b_mt)) =>
+ {
+ let mt = try!(relation.relate(a_mt, b_mt));
+ Ok(ty::mk_ptr(tcx, mt))
+ }
+
+ (&ty::ty_rptr(a_r, ref a_mt), &ty::ty_rptr(b_r, ref b_mt)) =>
+ {
+ let r = try!(relation.relate_with_variance(ty::Contravariant, a_r, b_r));
+ let mt = try!(relation.relate(a_mt, b_mt));
+ Ok(ty::mk_rptr(tcx, tcx.mk_region(r), mt))
+ }
+
+ (&ty::ty_vec(a_t, Some(sz_a)), &ty::ty_vec(b_t, Some(sz_b))) =>
+ {
+ let t = try!(relation.relate(&a_t, &b_t));
+ if sz_a == sz_b {
+ Ok(ty::mk_vec(tcx, t, Some(sz_a)))
+ } else {
+ Err(ty::terr_fixed_array_size(expected_found(relation, &sz_a, &sz_b)))
+ }
+ }
+
+ (&ty::ty_vec(a_t, None), &ty::ty_vec(b_t, None)) =>
+ {
+ let t = try!(relation.relate(&a_t, &b_t));
+ Ok(ty::mk_vec(tcx, t, None))
+ }
+
+ (&ty::ty_tup(ref as_), &ty::ty_tup(ref bs)) =>
+ {
+ if as_.len() == bs.len() {
+ let ts = try!(as_.iter()
+ .zip(bs.iter())
+ .map(|(a, b)| relation.relate(a, b))
+ .collect::<Result<_, _>>());
+ Ok(ty::mk_tup(tcx, ts))
+ } else if as_.len() != 0 && bs.len() != 0 {
+ Err(ty::terr_tuple_size(
+ expected_found(relation, &as_.len(), &bs.len())))
+ } else {
+ Err(ty::terr_sorts(expected_found(relation, &a, &b)))
+ }
+ }
+
+ (&ty::ty_bare_fn(a_opt_def_id, a_fty), &ty::ty_bare_fn(b_opt_def_id, b_fty))
+ if a_opt_def_id == b_opt_def_id =>
+ {
+ let fty = try!(relation.relate(a_fty, b_fty));
+ Ok(ty::mk_bare_fn(tcx, a_opt_def_id, tcx.mk_bare_fn(fty)))
+ }
+
+ (&ty::ty_projection(ref a_data), &ty::ty_projection(ref b_data)) =>
+ {
+ let projection_ty = try!(relation.relate(a_data, b_data));
+ Ok(ty::mk_projection(tcx, projection_ty.trait_ref, projection_ty.item_name))
+ }
+
+ _ =>
+ {
+ Err(ty::terr_sorts(expected_found(relation, &a, &b)))
+ }
+ }
+}
+
+impl<'a,'tcx:'a> Relate<'a,'tcx> for ty::Region {
+ fn relate<R>(relation: &mut R,
+ a: &ty::Region,
+ b: &ty::Region)
+ -> RelateResult<'tcx, ty::Region>
+ where R: TypeRelation<'a,'tcx>
+ {
+ relation.regions(*a, *b)
+ }
+}
+
+impl<'a,'tcx:'a,T> Relate<'a,'tcx> for ty::Binder<T>
+ where T: Relate<'a,'tcx>
+{
+ fn relate<R>(relation: &mut R,
+ a: &ty::Binder<T>,
+ b: &ty::Binder<T>)
+ -> RelateResult<'tcx, ty::Binder<T>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ relation.binders(a, b)
+ }
+}
+
+impl<'a,'tcx:'a,T> Relate<'a,'tcx> for Rc<T>
+ where T: Relate<'a,'tcx>
+{
+ fn relate<R>(relation: &mut R,
+ a: &Rc<T>,
+ b: &Rc<T>)
+ -> RelateResult<'tcx, Rc<T>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ let a: &T = a;
+ let b: &T = b;
+ Ok(Rc::new(try!(relation.relate(a, b))))
+ }
+}
+
+impl<'a,'tcx:'a,T> Relate<'a,'tcx> for Box<T>
+ where T: Relate<'a,'tcx>
+{
+ fn relate<R>(relation: &mut R,
+ a: &Box<T>,
+ b: &Box<T>)
+ -> RelateResult<'tcx, Box<T>>
+ where R: TypeRelation<'a,'tcx>
+ {
+ let a: &T = a;
+ let b: &T = b;
+ Ok(Box::new(try!(relation.relate(a, b))))
+ }
+}
+
+///////////////////////////////////////////////////////////////////////////
+// Error handling
+
+pub fn expected_found<'a,'tcx,R,T>(relation: &mut R,
+ a: &T,
+ b: &T)
+ -> ty::expected_found<T>
+ where R: TypeRelation<'a,'tcx>, T: Clone
+{
+ expected_found_bool(relation.a_is_expected(), a, b)
+}
+
+pub fn expected_found_bool<T>(a_is_expected: bool,
+ a: &T,
+ b: &T)
+ -> ty::expected_found<T>
+ where T: Clone
+{
+ let a = a.clone();
+ let b = b.clone();
+ if a_is_expected {
+ ty::expected_found {expected: a, found: b}
+ } else {
+ ty::expected_found {expected: b, found: a}
+ }
+}
+
use middle::ty::{self, Ty};
use std::iter::Iterator;
+use std::vec::IntoIter;
pub struct TypeWalker<'tcx> {
stack: Vec<Ty<'tcx>>,
TypeWalker { stack: vec!(ty), last_subtree: 1, }
}
- fn push_subtypes(&mut self, parent_ty: Ty<'tcx>) {
- match parent_ty.sty {
- ty::ty_bool | ty::ty_char | ty::ty_int(_) | ty::ty_uint(_) | ty::ty_float(_) |
- ty::ty_str | ty::ty_infer(_) | ty::ty_param(_) | ty::ty_err => {
- }
- ty::ty_uniq(ty) | ty::ty_vec(ty, _) => {
- self.stack.push(ty);
- }
- ty::ty_ptr(ref mt) | ty::ty_rptr(_, ref mt) => {
- self.stack.push(mt.ty);
- }
- ty::ty_projection(ref data) => {
- self.push_reversed(data.trait_ref.substs.types.as_slice());
- }
- ty::ty_trait(box ty::TyTrait { ref principal, ref bounds }) => {
- self.push_reversed(principal.substs().types.as_slice());
- self.push_reversed(&bounds.projection_bounds.iter().map(|pred| {
- pred.0.ty
- }).collect::<Vec<_>>());
- }
- ty::ty_enum(_, ref substs) |
- ty::ty_struct(_, ref substs) |
- ty::ty_closure(_, ref substs) => {
- self.push_reversed(substs.types.as_slice());
- }
- ty::ty_tup(ref ts) => {
- self.push_reversed(ts);
- }
- ty::ty_bare_fn(_, ref ft) => {
- self.push_sig_subtypes(&ft.sig);
- }
- }
- }
-
- fn push_sig_subtypes(&mut self, sig: &ty::PolyFnSig<'tcx>) {
- match sig.0.output {
- ty::FnConverging(output) => { self.stack.push(output); }
- ty::FnDiverging => { }
- }
- self.push_reversed(&sig.0.inputs);
- }
-
- fn push_reversed(&mut self, tys: &[Ty<'tcx>]) {
- // We push slices on the stack in reverse order so as to
- // maintain a pre-order traversal. As of the time of this
- // writing, the fact that the traversal is pre-order is not
- // known to be significant to any code, but it seems like the
- // natural order one would expect (basically, the order of the
- // types as they are written).
- for &ty in tys.iter().rev() {
- self.stack.push(ty);
- }
- }
-
/// Skips the subtree of types corresponding to the last type
/// returned by `next()`.
///
}
Some(ty) => {
self.last_subtree = self.stack.len();
- self.push_subtypes(ty);
+ push_subtypes(&mut self.stack, ty);
debug!("next: stack={:?}", self.stack);
Some(ty)
}
}
}
}
+
+pub fn walk_shallow<'tcx>(ty: Ty<'tcx>) -> IntoIter<Ty<'tcx>> {
+ let mut stack = vec![];
+ push_subtypes(&mut stack, ty);
+ stack.into_iter()
+}
+
+fn push_subtypes<'tcx>(stack: &mut Vec<Ty<'tcx>>, parent_ty: Ty<'tcx>) {
+ match parent_ty.sty {
+ ty::ty_bool | ty::ty_char | ty::ty_int(_) | ty::ty_uint(_) | ty::ty_float(_) |
+ ty::ty_str | ty::ty_infer(_) | ty::ty_param(_) | ty::ty_err => {
+ }
+ ty::ty_uniq(ty) | ty::ty_vec(ty, _) => {
+ stack.push(ty);
+ }
+ ty::ty_ptr(ref mt) | ty::ty_rptr(_, ref mt) => {
+ stack.push(mt.ty);
+ }
+ ty::ty_projection(ref data) => {
+ push_reversed(stack, data.trait_ref.substs.types.as_slice());
+ }
+ ty::ty_trait(box ty::TyTrait { ref principal, ref bounds }) => {
+ push_reversed(stack, principal.substs().types.as_slice());
+ push_reversed(stack, &bounds.projection_bounds.iter().map(|pred| {
+ pred.0.ty
+ }).collect::<Vec<_>>());
+ }
+ ty::ty_enum(_, ref substs) |
+ ty::ty_struct(_, ref substs) |
+ ty::ty_closure(_, ref substs) => {
+ push_reversed(stack, substs.types.as_slice());
+ }
+ ty::ty_tup(ref ts) => {
+ push_reversed(stack, ts);
+ }
+ ty::ty_bare_fn(_, ref ft) => {
+ push_sig_subtypes(stack, &ft.sig);
+ }
+ }
+}
+
+fn push_sig_subtypes<'tcx>(stack: &mut Vec<Ty<'tcx>>, sig: &ty::PolyFnSig<'tcx>) {
+ match sig.0.output {
+ ty::FnConverging(output) => { stack.push(output); }
+ ty::FnDiverging => { }
+ }
+ push_reversed(stack, &sig.0.inputs);
+}
+
+fn push_reversed<'tcx>(stack: &mut Vec<Ty<'tcx>>, tys: &[Ty<'tcx>]) {
+ // We push slices on the stack in reverse order so as to
+ // maintain a pre-order traversal. As of the time of this
+ // writing, the fact that the traversal is pre-order is not
+ // known to be significant to any code, but it seems like the
+ // natural order one would expect (basically, the order of the
+ // types as they are written).
+ for &ty in tys.iter().rev() {
+ stack.push(ty);
+ }
+}
}
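The reverse pushes in `push_reversed` above are what make the explicit-stack walk a pre-order traversal: children pushed last-to-first pop off first-to-last. A minimal standalone sketch of the same technique, using a hypothetical `Node` type that is not part of this patch:

```rust
// Pre-order traversal with an explicit stack. Children are pushed in
// reverse so that the first (leftmost) child ends up on top of the
// stack and is therefore visited next, matching written order.
struct Node {
    value: u32,
    children: Vec<Node>,
}

fn walk_preorder(root: &Node) -> Vec<u32> {
    let mut out = Vec::new();
    let mut stack = vec![root];
    while let Some(node) = stack.pop() {
        out.push(node.value);
        // Reverse push preserves pre-order, just like push_reversed.
        for child in node.children.iter().rev() {
            stack.push(child);
        }
    }
    out
}
```

Without the `rev()`, the walk would still visit every node, but siblings would come out right-to-left instead of in source order.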
ty_infer(infer_ty) => infer_ty_to_string(cx, infer_ty),
ty_err => "[type error]".to_string(),
- ty_param(ref param_ty) => {
- if cx.sess.verbose() {
- param_ty.repr(cx)
- } else {
- param_ty.user_string(cx)
- }
- }
+ ty_param(ref param_ty) => param_ty.user_string(cx),
ty_enum(did, substs) | ty_struct(did, substs) => {
let base = ty::item_path_str(cx, did);
parameterized(cx, &base, substs, did, &[],
}
}
}
+
+impl<'tcx> Repr<'tcx> for ast::Unsafety {
+ fn repr(&self, _: &ctxt<'tcx>) -> String {
+ format!("{:?}", *self)
+ }
+}
try!(pp::space(&mut s.s));
s.synth_comment(item.id.to_string())
}
+ pprust::NodeSubItem(id) => {
+ try!(pp::space(&mut s.s));
+ s.synth_comment(id.to_string())
+ }
pprust::NodeBlock(blk) => {
try!(pp::space(&mut s.s));
s.synth_comment(format!("block {}", blk.id))
use rustc_typeck::middle::subst;
use rustc_typeck::middle::subst::Subst;
use rustc_typeck::middle::ty::{self, Ty};
-use rustc_typeck::middle::infer::combine::Combine;
+use rustc_typeck::middle::ty_relate::TypeRelation;
use rustc_typeck::middle::infer;
use rustc_typeck::middle::infer::lub::Lub;
use rustc_typeck::middle::infer::glb::Glb;
pub fn sub(&self) -> Sub<'a, 'tcx> {
let trace = self.dummy_type_trace();
- Sub(self.infcx.combine_fields(true, trace))
+ self.infcx.sub(true, trace)
}
pub fn lub(&self) -> Lub<'a, 'tcx> {
let trace = self.dummy_type_trace();
- Lub(self.infcx.combine_fields(true, trace))
+ self.infcx.lub(true, trace)
}
pub fn glb(&self) -> Glb<'a, 'tcx> {
let trace = self.dummy_type_trace();
- Glb(self.infcx.combine_fields(true, trace))
+ self.infcx.glb(true, trace)
}
pub fn make_lub_ty(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) -> Ty<'tcx> {
- match self.lub().tys(t1, t2) {
+ match self.lub().relate(&t1, &t2) {
Ok(t) => t,
Err(ref e) => panic!("unexpected error computing LUB: {}",
ty::type_err_to_str(self.infcx.tcx, e))
/// Checks that `t1 <: t2` is true (this may register additional
/// region checks).
pub fn check_sub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) {
- match self.sub().tys(t1, t2) {
+ match self.sub().relate(&t1, &t2) {
Ok(_) => { }
Err(ref e) => {
panic!("unexpected error computing sub({},{}): {}",
/// Checks that `t1 <: t2` is false (this may register additional
/// region checks).
pub fn check_not_sub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>) {
- match self.sub().tys(t1, t2) {
+ match self.sub().relate(&t1, &t2) {
Err(_) => { }
Ok(_) => {
panic!("unexpected success computing sub({},{})",
/// Checks that `LUB(t1,t2) == t_lub`
pub fn check_lub(&self, t1: Ty<'tcx>, t2: Ty<'tcx>, t_lub: Ty<'tcx>) {
- match self.lub().tys(t1, t2) {
+ match self.lub().relate(&t1, &t2) {
Ok(t) => {
self.assert_eq(t, t_lub);
}
self.ty_to_string(t1),
self.ty_to_string(t2),
self.ty_to_string(t_glb));
- match self.glb().tys(t1, t2) {
+ match self.glb().relate(&t1, &t2) {
Err(e) => {
panic!("unexpected error computing LUB: {:?}", e)
}
fn lub_returning_scope() {
test_env(EMPTY_SOURCE_STR,
errors(&["cannot infer an appropriate lifetime"]), |env| {
+ env.create_simple_region_hierarchy();
let t_rptr_scope10 = env.t_rptr_scope(10);
let t_rptr_scope11 = env.t_rptr_scope(11);
}
enum NameDefinition {
- NoNameDefinition, //< The name was unbound.
- ChildNameDefinition(Def, LastPrivate), //< The name identifies an immediate child.
- ImportNameDefinition(Def, LastPrivate) //< The name identifies an import.
+ // The name was unbound.
+ NoNameDefinition,
+ // The name identifies an immediate child.
+ ChildNameDefinition(Def, LastPrivate),
+ // The name identifies an import.
+ ImportNameDefinition(Def, LastPrivate),
}
impl<'a, 'v, 'tcx> Visitor<'v> for Resolver<'a, 'tcx> {
// The current self type if inside an impl (used for better errors).
current_self_type: Option<Ty>,
- // The ident for the keyword "self".
- self_name: Name,
- // The ident for the non-keyword "Self".
- type_self_name: Name,
-
// The idents for the primitive types.
primitive_type_table: PrimitiveTypeTable,
current_trait_ref: None,
current_self_type: None,
- self_name: special_names::self_,
- type_self_name: special_names::type_self,
-
primitive_type_table: PrimitiveTypeTable::new(),
def_map: RefCell::new(NodeMap()),
let mut self_type_rib = Rib::new(ItemRibKind);
// plain insert (no renaming, types are not currently hygienic....)
- let name = self.type_self_name;
+ let name = special_names::type_self;
self_type_rib.bindings.insert(name, DlDef(DefSelfTy(item.id)));
self.type_ribs.push(self_type_rib);
fn with_optional_trait_ref<T, F>(&mut self,
opt_trait_ref: Option<&TraitRef>,
- f: F) -> T where
- F: FnOnce(&mut Resolver) -> T,
+ f: F)
+ -> T
+ where F: FnOnce(&mut Resolver) -> T,
{
let mut new_val = None;
if let Some(trait_ref) = opt_trait_ref {
let span = path.span;
let segments = &path.segments[..path.segments.len()-path_depth];
- let mk_res = |(def, lp)| PathResolution {
- base_def: def,
- last_private: lp,
- depth: path_depth
- };
+ let mk_res = |(def, lp)| PathResolution::new(def, lp, path_depth);
if path.global {
let def = self.resolve_crate_relative_path(span, segments, namespace);
check_ribs,
span);
- if segments.len() > 1 {
- let def = self.resolve_module_relative_path(span, segments, namespace);
- match (def, unqualified_def) {
- (Some((ref d, _)), Some((ref ud, _))) if *d == *ud => {
- self.session
- .add_lint(lint::builtin::UNUSED_QUALIFICATIONS,
- id, span,
- "unnecessary qualification".to_string());
- }
- _ => ()
- }
+ if segments.len() <= 1 {
+ return unqualified_def.map(mk_res);
+ }
- def.map(mk_res)
- } else {
- unqualified_def.map(mk_res)
+ let def = self.resolve_module_relative_path(span, segments, namespace);
+ match (def, unqualified_def) {
+ (Some((ref d, _)), Some((ref ud, _))) if *d == *ud => {
+ self.session
+ .add_lint(lint::builtin::UNUSED_QUALIFICATIONS,
+ id, span,
+ "unnecessary qualification".to_string());
+ }
+ _ => {}
}
+
+ def.map(mk_res)
}
- // resolve a single identifier (used as a varref)
+ // Resolve a single identifier.
fn resolve_identifier(&mut self,
identifier: Ident,
namespace: Namespace,
match child_name_bindings.def_for_namespace(namespace) {
Some(def) => {
// Found it. Stop the search here.
- let p = child_name_bindings.defined_in_public_namespace(
- namespace);
+ let p = child_name_bindings.defined_in_public_namespace(namespace);
let lp = if p {LastMod(AllPublic)} else {
LastMod(DependsOn(def.def_id()))
};
let containing_module;
let last_private;
- let module = self.current_module.clone();
- match self.resolve_module_path(module,
+ let current_module = self.current_module.clone();
+ match self.resolve_module_path(current_module,
&module_path[..],
UseLexicalScope,
span,
match search_result {
Some(DlDef(def)) => {
- debug!("(resolving path in local ribs) resolved `{}` to \
- local: {:?}",
+ debug!("(resolving path in local ribs) resolved `{}` to local: {:?}",
token::get_ident(ident),
def);
Some(def)
panic!("unexpected indeterminate result");
}
Failed(err) => {
- match err {
- Some((span, msg)) =>
- self.resolve_error(span, &format!("failed to resolve. {}",
- msg)),
- None => ()
- }
-
debug!("(resolving item path by identifier in lexical scope) \
failed to resolve {}", token::get_name(name));
+
+ if let Some((span, msg)) = err {
+ self.resolve_error(span, &format!("failed to resolve. {}", msg))
+ }
+
return None;
}
}
}
} else {
match this.resolve_module_path(root,
- &name_path[..],
- UseLexicalScope,
- span,
- PathSearch) {
+ &name_path[..],
+ UseLexicalScope,
+ span,
+ PathSearch) {
Success((module, _)) => Some(module),
_ => None
}
false // Stop advancing
});
- if method_scope && &token::get_name(self.self_name)[..]
- == path_name {
+ if method_scope &&
+ &token::get_name(special_names::self_)[..] == path_name {
self.resolve_error(
expr.span,
"`self` is not available \
let fcx = cx.fcx;
debug!("trans_stmt({})", s.repr(cx.tcx()));
+ if cx.unreachable.get() {
+ return cx;
+ }
+
if cx.sess().asm_comments() {
add_span_comment(cx, s.span, &s.repr(cx.tcx()));
}
pub fn trans_stmt_semi<'blk, 'tcx>(cx: Block<'blk, 'tcx>, e: &ast::Expr)
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("trans_stmt_semi");
+
+ if cx.unreachable.get() {
+ return cx;
+ }
+
let ty = expr_ty(cx, e);
if cx.fcx.type_needs_drop(ty) {
expr::trans_to_lvalue(cx, e, "stmt").bcx
mut dest: expr::Dest)
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("trans_block");
+
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
let fcx = bcx.fcx;
let mut bcx = bcx;
bcx.to_str(), if_id, bcx.expr_to_string(cond), thn.id,
dest.to_string(bcx.ccx()));
let _icx = push_ctxt("trans_if");
+
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
let mut bcx = bcx;
let cond_val = unpack_result!(bcx, expr::trans(bcx, cond).to_llbool());
body: &ast::Block)
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("trans_while");
+
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
let fcx = bcx.fcx;
// bcx
body: &ast::Block)
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("trans_loop");
+
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
let fcx = bcx.fcx;
// bcx
exit: usize)
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("trans_break_cont");
- let fcx = bcx.fcx;
if bcx.unreachable.get() {
return bcx;
}
+ let fcx = bcx.fcx;
+
// Locate loop that we will break to
let loop_id = match opt_label {
None => fcx.top_loop_scope(),
retval_expr: Option<&ast::Expr>)
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("trans_ret");
+
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
let fcx = bcx.fcx;
let mut bcx = bcx;
let dest = match (fcx.llretslotptr.get(), retval_expr) {
let ccx = bcx.ccx();
let _icx = push_ctxt("trans_fail_value");
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
let v_str = C_str_slice(ccx, fail_str);
let loc = bcx.sess().codemap().lookup_char_pos(call_info.span.lo);
let filename = token::intern_and_get_ident(&loc.file.name);
let ccx = bcx.ccx();
let _icx = push_ctxt("trans_fail_bounds_check");
+ if bcx.unreachable.get() {
+ return bcx;
+ }
+
// Extract the file/line from the span
let loc = bcx.sess().codemap().lookup_char_pos(call_info.span.lo);
let filename = token::intern_and_get_ident(&loc.file.name);
span: Span,
_trait_ref: Rc<ty::TraitRef<'tcx>>,
_item_name: ast::Name)
- -> Ty<'tcx>
- {
- span_err!(self.tcx().sess, span, E0213,
- "associated types are not accepted in this context");
-
- self.tcx().types.err
- }
+ -> Ty<'tcx>;
}
pub fn ast_region_to_region(tcx: &ty::ctxt, lifetime: &ast::Lifetime)
}
};
- let substs = ast_path_substs_for_ty(this, rscope,
- span, param_mode,
- &generics, item_segment);
+ let substs = ast_path_substs_for_ty(this,
+ rscope,
+ span,
+ param_mode,
+ &generics,
+ item_segment);
// FIXME(#12938): This is a hack until we have full support for DST.
if Some(did) == this.tcx().lang_items.owned_box() {
type_str, trait_str, name);
}
+// Create a type from a path to an associated type.
+// For a path A::B::C::D, ty and ty_path_def are the type and def for A::B::C
+// and item_segment is the path segment for D. We return a type and a def for
+// the whole path.
+// Will fail except for T::A and Self::A; i.e., if ty/ty_path_def are not a type
+// parameter or Self.
fn associated_path_def_to_ty<'tcx>(this: &AstConv<'tcx>,
span: Span,
ty: Ty<'tcx>,
-> (Ty<'tcx>, def::Def)
{
let tcx = this.tcx();
- check_path_args(tcx, slice::ref_slice(item_segment), NO_TPS | NO_REGIONS);
let assoc_name = item_segment.identifier.name;
- let is_param = match (&ty.sty, ty_path_def) {
- (&ty::ty_param(_), def::DefTyParam(..)) |
- (&ty::ty_param(_), def::DefSelfTy(_)) => true,
- _ => false
- };
+ debug!("associated_path_def_to_ty: {}::{}", ty.repr(tcx), token::get_name(assoc_name));
- let ty_param_node_id = if is_param {
- ty_path_def.local_node_id()
- } else {
- report_ambiguous_associated_type(
- tcx, span, &ty.user_string(tcx), "Trait", &token::get_name(assoc_name));
- return (tcx.types.err, ty_path_def);
- };
+ check_path_args(tcx, slice::ref_slice(item_segment), NO_TPS | NO_REGIONS);
+
+ // Check that the path prefix given by ty/ty_path_def is a type parameter/Self.
+ match (&ty.sty, ty_path_def) {
+ (&ty::ty_param(_), def::DefTyParam(..)) |
+ (&ty::ty_param(_), def::DefSelfTy(_)) => {}
+ _ => {
+ report_ambiguous_associated_type(tcx,
+ span,
+ &ty.user_string(tcx),
+ "Trait",
+ &token::get_name(assoc_name));
+ return (tcx.types.err, ty_path_def);
+ }
+ }
+ let ty_param_node_id = ty_path_def.local_node_id();
let ty_param_name = tcx.ty_param_defs.borrow().get(&ty_param_node_id).unwrap().name;
let bounds = match this.get_type_parameter_bounds(span, ty_param_node_id) {
Ok(v) => v,
- Err(ErrorReported) => { return (tcx.types.err, ty_path_def); }
+ Err(ErrorReported) => {
+ return (tcx.types.err, ty_path_def);
+ }
};
- // ensure the super predicates and stop if we encountered an error
+ // Ensure the super predicates and stop if we encountered an error.
if bounds.iter().any(|b| this.ensure_super_predicates(span, b.def_id()).is_err()) {
return (this.tcx().types.err, ty_path_def);
}
+ // Check that there is exactly one way to find an associated type with the
+ // correct name.
let mut suitable_bounds: Vec<_> =
traits::transitive_bounds(tcx, &bounds)
.filter(|b| this.trait_defines_associated_type_named(b.def_id(), assoc_name))
// by type collection, which may be in progress at this point.
match this.tcx().map.expect_item(trait_did.node).node {
ast::ItemTrait(_, _, _, ref trait_items) => {
- let item = trait_items.iter().find(|i| i.ident.name == assoc_name)
+ let item = trait_items.iter()
+ .find(|i| i.ident.name == assoc_name)
.expect("missing associated type");
ast_util::local_def(item.id)
}
let item = trait_items.iter().find(|i| i.name() == assoc_name);
item.expect("missing associated type").def_id()
};
+
(ty, def::DefAssociatedTy(trait_did, item_did))
}
ty
} else {
let path_str = ty::item_path_str(tcx, trait_def_id);
- report_ambiguous_associated_type(
- tcx, span, "Type", &path_str, &token::get_ident(item_segment.identifier));
+ report_ambiguous_associated_type(tcx,
+ span,
+ "Type",
+ &path_str,
+ &token::get_ident(item_segment.identifier));
return tcx.types.err;
};
}
}
+// Note that both base_segments and assoc_segments may be empty, although not at
+// the same time.
pub fn finish_resolving_def_to_ty<'tcx>(this: &AstConv<'tcx>,
rscope: &RegionScope,
span: Span,
param_mode: PathParamMode,
- def: &mut def::Def,
+ def: &def::Def,
opt_self_ty: Option<Ty<'tcx>>,
- segments: &[ast::PathSegment],
+ base_segments: &[ast::PathSegment],
assoc_segments: &[ast::PathSegment])
-> Ty<'tcx> {
let tcx = this.tcx();
span,
param_mode,
trait_def_id,
- segments.last().unwrap(),
+ base_segments.last().unwrap(),
&mut projection_bounds);
- check_path_args(tcx, segments.init(), NO_TPS | NO_REGIONS);
- trait_ref_to_object_type(this, rscope, span, trait_ref,
- projection_bounds, &[])
+ check_path_args(tcx, base_segments.init(), NO_TPS | NO_REGIONS);
+ trait_ref_to_object_type(this,
+ rscope,
+ span,
+ trait_ref,
+ projection_bounds,
+ &[])
}
def::DefTy(did, _) | def::DefStruct(did) => {
- check_path_args(tcx, segments.init(), NO_TPS | NO_REGIONS);
+ check_path_args(tcx, base_segments.init(), NO_TPS | NO_REGIONS);
ast_path_to_ty(this, rscope, span,
param_mode, did,
- segments.last().unwrap())
+ base_segments.last().unwrap())
}
def::DefTyParam(space, index, _, name) => {
- check_path_args(tcx, segments, NO_TPS | NO_REGIONS);
+ check_path_args(tcx, base_segments, NO_TPS | NO_REGIONS);
ty::mk_param(tcx, space, index, name)
}
def::DefSelfTy(_) => {
- // n.b.: resolve guarantees that the this type only appears in a
+            // N.b.: resolve guarantees that the `Self` type only appears in a
// trait, which we rely upon in various places when creating
- // substs
- check_path_args(tcx, segments, NO_TPS | NO_REGIONS);
+ // substs.
+ check_path_args(tcx, base_segments, NO_TPS | NO_REGIONS);
ty::mk_self_type(tcx)
}
def::DefAssociatedTy(trait_did, _) => {
- check_path_args(tcx, &segments[..segments.len()-2], NO_TPS | NO_REGIONS);
- qpath_to_ty(this, rscope, span, param_mode,
- opt_self_ty, trait_did,
- &segments[segments.len()-2],
- segments.last().unwrap())
+ check_path_args(tcx, &base_segments[..base_segments.len()-2], NO_TPS | NO_REGIONS);
+ qpath_to_ty(this,
+ rscope,
+ span,
+ param_mode,
+ opt_self_ty,
+ trait_did,
+ &base_segments[base_segments.len()-2],
+ base_segments.last().unwrap())
}
def::DefMod(id) => {
// Used as sentinel by callers to indicate the `<T>::A::B::C` form.
// FIXME(#22519) This part of the resolution logic should be
// avoided entirely for that form, once we stop needed a Def
// for `associated_path_def_to_ty`.
- if segments.is_empty() {
- opt_self_ty.expect("missing T in <T>::a::b::c")
- } else {
- span_err!(tcx.sess, span, E0247, "found module name used as a type: {}",
+
+ if !base_segments.is_empty() {
+ span_err!(tcx.sess,
+ span,
+ E0247,
+ "found module name used as a type: {}",
tcx.map.node_to_string(id.node));
return this.tcx().types.err;
}
+
+ opt_self_ty.expect("missing T in <T>::a::b::c")
}
def::DefPrimTy(prim_ty) => {
- prim_ty_to_ty(tcx, segments, prim_ty)
+ prim_ty_to_ty(tcx, base_segments, prim_ty)
}
_ => {
span_err!(tcx.sess, span, E0248,
// If any associated type segments remain, attempt to resolve them.
let mut ty = base_ty;
+ let mut def = *def;
for segment in assoc_segments {
if ty.sty == ty::ty_err {
break;
}
// This is pretty bad (it will fail except for T::A and Self::A).
- let (a_ty, a_def) = associated_path_def_to_ty(this, span,
- ty, *def, segment);
+ let (a_ty, a_def) = associated_path_def_to_ty(this,
+ span,
+ ty,
+ def,
+ segment);
ty = a_ty;
- *def = a_def;
+ def = a_def;
}
ty
}
tcx.sess.span_bug(ast_ty.span,
&format!("unbound path {}", ast_ty.repr(tcx)))
};
- let mut def = path_res.base_def;
+ let def = path_res.base_def;
let base_ty_end = path.segments.len() - path_res.depth;
let opt_self_ty = maybe_qself.as_ref().map(|qself| {
ast_ty_to_ty(this, rscope, &qself.ty)
});
- let ty = finish_resolving_def_to_ty(this, rscope, ast_ty.span,
- PathParamMode::Explicit, &mut def,
+ let ty = finish_resolving_def_to_ty(this,
+ rscope,
+ ast_ty.span,
+ PathParamMode::Explicit,
+ &def,
opt_self_ty,
&path.segments[..base_ty_end],
&path.segments[base_ty_end..]);
UnresolvedTypeAction::Error,
LvaluePreference::NoPreference,
|adj_ty, idx| {
- let autoderefref = ty::AutoDerefRef { autoderefs: idx, autoref: None };
- try_overloaded_call_step(fcx, call_expr, callee_expr,
- adj_ty, autoderefref)
+ try_overloaded_call_step(fcx, call_expr, callee_expr, adj_ty, idx)
});
match result {
call_expr: &'tcx ast::Expr,
callee_expr: &'tcx ast::Expr,
adjusted_ty: Ty<'tcx>,
- autoderefref: ty::AutoDerefRef<'tcx>)
+ autoderefs: usize)
-> Option<CallStep<'tcx>>
{
- debug!("try_overloaded_call_step(call_expr={}, adjusted_ty={}, autoderefref={})",
+ debug!("try_overloaded_call_step(call_expr={}, adjusted_ty={}, autoderefs={})",
call_expr.repr(fcx.tcx()),
adjusted_ty.repr(fcx.tcx()),
- autoderefref.repr(fcx.tcx()));
+ autoderefs);
+
+ let autoderefref = ty::AutoDerefRef { autoderefs: autoderefs, autoref: None };
// If the callee is a bare function or a closure, then we're all set.
match structurally_resolved_type(fcx, callee_expr.span, adjusted_ty).sty {
}
}
+ // Hack: we know that there are traits implementing Fn for &F
+ // where F:Fn and so forth. In the particular case of types
+ // like `x: &mut FnMut()`, if there is a call `x()`, we would
+ // normally translate to `FnMut::call_mut(&mut x, ())`, but
+ // that winds up requiring `mut x: &mut FnMut()`. A little
+ // over the top. The simplest fix by far is to just ignore
+ // this case and deref again, so we wind up with
+ // `FnMut::call_mut(&mut *x, ())`.
+ ty::ty_rptr(..) if autoderefs == 0 => {
+ return None;
+ }
+
_ => {}
}
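The hack comment above relies on how a call through `&mut FnMut` is translated: `x()` becomes `FnMut::call_mut(&mut *x, ())`, a reborrow of the referent, so the reference binding itself never needs to be `mut`. A small illustration in today's syntax (`dyn` is the modern spelling; the function name is made up for the example):

```rust
// Calling `f()` where `f: &mut dyn FnMut() -> u32` reborrows through
// the reference, i.e. FnMut::call_mut(&mut *f, ()), so `f` itself is
// usable without any extra `mut` on the caller's binding.
fn call_twice(f: &mut dyn FnMut() -> u32) -> u32 {
    f() + f()
}
```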
use check::{autoderef, FnCtxt, NoPreference, PreferMutLvalue, UnresolvedTypeAction};
-use middle::infer::{self, cres, Coercion, TypeTrace};
-use middle::infer::combine::Combine;
-use middle::infer::sub::Sub;
+use middle::infer::{self, Coercion};
use middle::subst;
use middle::ty::{AutoPtr, AutoDerefRef, AdjustDerefRef, AutoUnsize, AutoUnsafe};
use middle::ty::{self, mt, Ty};
+use middle::ty_relate::RelateResult;
use util::common::indent;
use util::ppaux;
use util::ppaux::Repr;
struct Coerce<'a, 'tcx: 'a> {
fcx: &'a FnCtxt<'a, 'tcx>,
- trace: TypeTrace<'tcx>
+ origin: infer::TypeOrigin,
}
-type CoerceResult<'tcx> = cres<'tcx, Option<ty::AutoAdjustment<'tcx>>>;
+type CoerceResult<'tcx> = RelateResult<'tcx, Option<ty::AutoAdjustment<'tcx>>>;
impl<'f, 'tcx> Coerce<'f, 'tcx> {
fn tcx(&self) -> &ty::ctxt<'tcx> {
}
fn subtype(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> CoerceResult<'tcx> {
- let sub = Sub(self.fcx.infcx().combine_fields(false, self.trace.clone()));
- try!(sub.tys(a, b));
+ try!(self.fcx.infcx().sub_types(false, self.origin.clone(), a, b));
Ok(None) // No coercion required.
}
- fn outlives(&self, a: ty::Region, b: ty::Region) -> cres<'tcx, ()> {
- let sub = Sub(self.fcx.infcx().combine_fields(false, self.trace.clone()));
- try!(sub.regions(b, a));
+ fn outlives(&self,
+ origin: infer::SubregionOrigin<'tcx>,
+ a: ty::Region,
+ b: ty::Region)
+ -> RelateResult<'tcx, ()> {
+ infer::mk_subr(self.fcx.infcx(), origin, b, a);
Ok(())
}
_ => return self.subtype(a, b)
}
- let coercion = Coercion(self.trace.clone());
+ let coercion = Coercion(self.origin.span());
let r_borrow = self.fcx.infcx().next_region_var(coercion);
let autoref = Some(AutoPtr(r_borrow, mutbl_b, None));
}
let ty = ty::mk_rptr(self.tcx(), r_borrow,
mt {ty: inner_ty, mutbl: mutbl_b});
- if let Err(err) = self.fcx.infcx().try(|_| self.subtype(ty, b)) {
+ if let Err(err) = self.subtype(ty, b) {
if first_error.is_none() {
first_error = Some(err);
}
return Err(ty::terr_mutability);
}
- let coercion = Coercion(self.trace.clone());
+ let coercion = Coercion(self.origin.span());
let r_borrow = self.fcx.infcx().next_region_var(coercion);
let ty = ty::mk_rptr(self.tcx(),
self.tcx().mk_region(r_borrow),
ty::mt{ty: ty, mutbl: mt_b.mutbl});
- try!(self.fcx.infcx().try(|_| self.subtype(ty, b)));
+ try!(self.subtype(ty, b));
debug!("Success, coerced with AutoDerefRef(1, \
AutoPtr(AutoUnsize({:?})))", kind);
Ok(Some(AdjustDerefRef(AutoDerefRef {
let ty = ty::mk_ptr(self.tcx(),
ty::mt{ty: ty, mutbl: mt_b.mutbl});
- try!(self.fcx.infcx().try(|_| self.subtype(ty, b)));
+ try!(self.subtype(ty, b));
debug!("Success, coerced with AutoDerefRef(1, \
AutoPtr(AutoUnsize({:?})))", kind);
Ok(Some(AdjustDerefRef(AutoDerefRef {
match self.unsize_ty(t_a, t_b) {
Some((ty, kind)) => {
let ty = ty::mk_uniq(self.tcx(), ty);
- try!(self.fcx.infcx().try(|_| self.subtype(ty, b)));
+ try!(self.subtype(ty, b));
debug!("Success, coerced with AutoDerefRef(1, \
AutoUnsizeUniq({:?}))", kind);
Ok(Some(AdjustDerefRef(AutoDerefRef {
let ty_a1 = ty::mk_trait(tcx, data_a.principal.clone(), bounds_a1);
// relate `a1` to `b`
- let result = self.fcx.infcx().try(|_| {
+ let result = self.fcx.infcx().commit_if_ok(|_| {
// it's ok to upcast from Foo+'a to Foo+'b so long as 'a : 'b
- try!(self.outlives(data_a.bounds.region_bound,
+ try!(self.outlives(infer::RelateObjectBound(self.origin.span()),
+ data_a.bounds.region_bound,
data_b.bounds.region_bound));
self.subtype(ty_a1, ty_b)
});
let mut result = None;
let tps = ty_substs_a.iter().zip(ty_substs_b.iter()).enumerate();
for (i, (tp_a, tp_b)) in tps {
- if self.fcx.infcx().try(|_| self.subtype(*tp_a, *tp_b)).is_ok() {
+ if self.subtype(*tp_a, *tp_b).is_ok() {
continue;
}
match self.unsize_ty(*tp_a, *tp_b) {
let mut new_substs = substs_a.clone();
new_substs.types.get_mut_slice(subst::TypeSpace)[i] = new_tp;
let ty = ty::mk_struct(tcx, did_a, tcx.mk_substs(new_substs));
- if self.fcx.infcx().try(|_| self.subtype(ty, ty_b)).is_err() {
+ if self.subtype(ty, ty_b).is_err() {
debug!("Unsized type parameter '{}', but still \
could not match types {} and {}",
ppaux::ty_to_string(tcx, *tp_a),
expr: &ast::Expr,
a: Ty<'tcx>,
b: Ty<'tcx>)
- -> cres<'tcx, ()> {
+ -> RelateResult<'tcx, ()> {
debug!("mk_assignty({} -> {})", a.repr(fcx.tcx()), b.repr(fcx.tcx()));
let adjustment = try!(indent(|| {
- fcx.infcx().commit_if_ok(|| {
- let origin = infer::ExprAssignable(expr.span);
+ fcx.infcx().commit_if_ok(|_| {
Coerce {
fcx: fcx,
- trace: infer::TypeTrace::types(origin, false, a, b)
+ origin: infer::ExprAssignable(expr.span),
}.coerce(expr, a, b)
})
}));
let trait_fty = ty::mk_bare_fn(tcx, None, tcx.mk_bare_fn(trait_m.fty.clone()));
let trait_fty = trait_fty.subst(tcx, &trait_to_skol_substs);
- let err = infcx.try(|snapshot| {
+ let err = infcx.commit_if_ok(|snapshot| {
let origin = infer::MethodCompatCheck(impl_m_span);
let (impl_sig, _) =
ty::lookup_item_type(tcx, self_type_did);
let infcx = infer::new_infer_ctxt(tcx);
- infcx.try(|snapshot| {
+ infcx.commit_if_ok(|snapshot| {
let (named_type_to_skolem, skol_map) =
infcx.construct_skolemized_subst(named_type_generics, snapshot);
let named_type_skolem = named_type.subst(tcx, &named_type_to_skolem);
///////////////////////////////////////////////////////////////////////////
// MISCELLANY
- fn make_sub_ty(&self, sub: Ty<'tcx>, sup: Ty<'tcx>) -> infer::ures<'tcx> {
+ fn make_sub_ty(&self, sub: Ty<'tcx>, sup: Ty<'tcx>) -> infer::UnitResult<'tcx> {
self.infcx().sub_types(false, infer::Misc(DUMMY_SP), sub, sup)
}
&format!("unbound path {}", expr.repr(tcx)))
};
- let mut def = path_res.base_def;
+ let def = path_res.base_def;
if path_res.depth == 0 {
let (scheme, predicates) =
type_scheme_and_predicates_for_def(fcx, expr.span, def);
} else {
let ty_segments = path.segments.init();
let base_ty_end = path.segments.len() - path_res.depth;
- let ty = astconv::finish_resolving_def_to_ty(fcx, fcx, expr.span,
+ let ty = astconv::finish_resolving_def_to_ty(fcx,
+ fcx,
+ expr.span,
PathParamMode::Optional,
- &mut def,
+ &def,
opt_self_ty,
&ty_segments[..base_ty_end],
&ty_segments[base_ty_end..]);
debug!("projection_bounds: outlives={} (2)",
outlives.repr(tcx));
- let region_result = infcx.try(|_| {
+ let region_result = infcx.commit_if_ok(|_| {
let (outlives, _) =
infcx.replace_late_bound_regions_with_fresh_var(
span,
use middle::ty::ty_projection;
use middle::ty;
use CrateCtxt;
-use middle::infer::combine::Combine;
use middle::infer::InferCtxt;
use middle::infer::new_infer_ctxt;
use std::collections::HashSet;
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! Traits for working with Errors.
+//!
+//! # The `Error` trait
+//!
+//! `Error` is a trait representing the basic expectations for error values,
+//! i.e. values of type `E` in `Result<T, E>`. At a minimum, errors must provide
+//! a description, but they may optionally provide additional detail (via
+//! `Display`) and cause chain information:
+//!
+//! ```
+//! use std::fmt::Display;
+//!
+//! trait Error: Display {
+//! fn description(&self) -> &str;
+//!
+//! fn cause(&self) -> Option<&Error> { None }
+//! }
+//! ```
+//!
+//! The `cause` method is generally used when errors cross "abstraction
+//! boundaries", i.e. when one module must report an error that is "caused"
+//! by an error from a lower-level module. This setup makes it possible for the
+//! high-level module to provide its own errors that do not commit to any
+//! particular implementation, but also reveal some of its implementation for
+//! debugging via `cause` chains.
+
+#![stable(feature = "rust1", since = "1.0.0")]
+
+// A note about crates and the facade:
+//
+// Originally, the `Error` trait was defined in libcore, and the impls
+// were scattered about. However, coherence objected to this
+// arrangement, because to create the blanket impls for `Box` required
+// knowing that `&str: !Error`, and we have no means to deal with that
+// sort of conflict just now. Therefore, for the time being, we have
+// moved the `Error` trait into libstd. As we evolve a sol'n to the
+// coherence challenge (e.g., specialization, neg impls, etc) we can
+// reconsider what crate these items belong in.
+
+use boxed::Box;
+use convert::From;
+use fmt::{self, Debug, Display};
+use marker::Send;
+use num;
+use option::Option;
+use option::Option::None;
+use str;
+use string::{self, String};
+
+/// Base functionality for all errors in Rust.
+#[stable(feature = "rust1", since = "1.0.0")]
+pub trait Error: Debug + Display {
+ /// A short description of the error.
+ ///
+ /// The description should not contain newlines or sentence-ending
+ /// punctuation, to facilitate embedding in larger user-facing
+ /// strings.
+ #[stable(feature = "rust1", since = "1.0.0")]
+ fn description(&self) -> &str;
+
+ /// The lower-level cause of this error, if any.
+ #[stable(feature = "rust1", since = "1.0.0")]
+ fn cause(&self) -> Option<&Error> { None }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a, E: Error + 'a> From<E> for Box<Error + 'a> {
+ fn from(err: E) -> Box<Error + 'a> {
+ Box::new(err)
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a, E: Error + Send + 'a> From<E> for Box<Error + Send + 'a> {
+ fn from(err: E) -> Box<Error + Send + 'a> {
+ Box::new(err)
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a, 'b> From<&'b str> for Box<Error + Send + 'a> {
+ fn from(err: &'b str) -> Box<Error + Send + 'a> {
+ #[derive(Debug)]
+ struct StringError(String);
+
+ impl Error for StringError {
+ fn description(&self) -> &str { &self.0 }
+ }
+
+ impl Display for StringError {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ Display::fmt(&self.0, f)
+ }
+ }
+
+ Box::new(StringError(String::from_str(err)))
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl Error for str::ParseBoolError {
+ fn description(&self) -> &str { "failed to parse bool" }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl Error for str::Utf8Error {
+ fn description(&self) -> &str {
+ match *self {
+ str::Utf8Error::TooShort => "invalid utf-8: not enough bytes",
+ str::Utf8Error::InvalidByte(..) => "invalid utf-8: corrupt contents",
+ }
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl Error for num::ParseIntError {
+ fn description(&self) -> &str {
+ self.description()
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl Error for num::ParseFloatError {
+ fn description(&self) -> &str {
+ self.description()
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl Error for string::FromUtf8Error {
+ fn description(&self) -> &str {
+ "invalid utf-8"
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl Error for string::FromUtf16Error {
+ fn description(&self) -> &str {
+ "invalid utf-16"
+ }
+}
+
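The trait and blanket `From` impls added in this new module are what enable the common pattern of returning `Box<Error>` from fallible functions. A sketch in today's syntax (`dyn Error` is the modern spelling, and `ParseConfigError` is a hypothetical example type, not part of this patch):

```rust
use std::error::Error;
use std::fmt;

// A minimal error type satisfying the Error trait's supertraits:
// Debug (derived) and Display (hand-written, one short line).
#[derive(Debug)]
struct ParseConfigError {
    line: usize,
}

impl fmt::Display for ParseConfigError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "bad config at line {}", self.line)
    }
}

impl Error for ParseConfigError {}

// The blanket `impl From<E> for Box<dyn Error>` means callers can box
// any concrete error; here we do it explicitly for clarity.
fn parse(line: usize) -> Result<(), Box<dyn Error>> {
    Err(Box::new(ParseConfigError { line: line }))
}
```

The design point the module comment makes is visible here: `parse`'s signature commits only to "some error", while the concrete `ParseConfigError` stays reachable for debugging through `Display`/`Debug` and the `source`/`cause` chain.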
fn read_to_end<R: Read + ?Sized>(r: &mut R, buf: &mut Vec<u8>) -> Result<usize> {
let start_len = buf.len();
let mut len = start_len;
- let mut cap_bump = 16;
+ let mut new_write_size = 16;
let ret;
loop {
if len == buf.len() {
- if buf.capacity() == buf.len() {
- if cap_bump < DEFAULT_BUF_SIZE {
- cap_bump *= 2;
- }
- buf.reserve(cap_bump);
+ if new_write_size < DEFAULT_BUF_SIZE {
+ new_write_size *= 2;
}
- let new_area = buf.capacity() - buf.len();
- buf.extend(iter::repeat(0).take(new_area));
+ buf.extend(iter::repeat(0).take(new_write_size));
}
match r.read(&mut buf[len..]) {
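The hunk above replaces the capacity-based growth with a simpler doubling write size. A standalone sketch of that policy (the `growth_steps` helper and the `DEFAULT_BUF_SIZE` value of 64 KiB are assumptions for illustration):

```rust
// Sketch of the buffer-growth policy in `read_to_end` above:
// the chunk appended each round doubles, capped at DEFAULT_BUF_SIZE.
const DEFAULT_BUF_SIZE: usize = 64 * 1024;

/// Returns the sequence of chunk sizes the loop would append while the
/// reader keeps filling the buffer completely.
fn growth_steps(total: usize) -> Vec<usize> {
    let mut steps = Vec::new();
    let mut new_write_size = 16;
    let mut len = 0;
    while len < total {
        // The doubling happens *before* the extend, so the first
        // appended chunk is 32 bytes, not 16.
        if new_write_size < DEFAULT_BUF_SIZE {
            new_write_size *= 2;
        }
        steps.push(new_write_size);
        len += new_write_size;
    }
    steps
}

fn main() {
    println!("{:?}", growth_steps(300)); // [32, 64, 128, 256]
}
```
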
pub use core::simd;
pub use core::result;
pub use core::option;
-pub use core::error;
+pub mod error;
#[cfg(not(test))] pub use alloc::boxed;
pub use alloc::rc;
/// Determine whether the character is one of the permitted path
/// separators for the current platform.
+///
+/// # Examples
+///
+/// ```
+/// use std::path;
+///
+/// assert!(path::is_separator('/'));
+/// assert!(!path::is_separator('❤'));
+/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_separator(c: char) -> bool {
use ascii::*;
///
/// See the module documentation for an in-depth explanation of components and
/// their role in the API.
+///
+/// # Examples
+///
+/// ```
+/// use std::path::Path;
+///
+/// let path = Path::new("/tmp/foo/bar.txt");
+///
+/// for component in path.components() {
+/// println!("{:?}", component);
+/// }
+/// ```
#[derive(Clone)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Components<'a> {
}
/// Extract a slice corresponding to the portion of the path remaining for iteration.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp/foo/bar.txt");
+ ///
+ /// println!("{:?}", path.components().as_path());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn as_path(&self) -> &'a Path {
let mut comps = self.clone();
/// Directly wrap a string slice as a `Path` slice.
///
/// This is a cost-free conversion.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// Path::new("foo.txt");
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new<S: AsRef<OsStr> + ?Sized>(s: &S) -> &Path {
unsafe { mem::transmute(s.as_ref()) }
}
/// Yield the underlying `OsStr` slice.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let os_str = Path::new("foo.txt").as_os_str();
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn as_os_str(&self) -> &OsStr {
&self.inner
/// Yield a `&str` slice if the `Path` is valid unicode.
///
/// This conversion may entail doing a check for UTF-8 validity.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path_str = Path::new("foo.txt").to_str();
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn to_str(&self) -> Option<&str> {
self.inner.to_str()
/// Convert a `Path` to a `Cow<str>`.
///
/// Any non-Unicode sequences are replaced with U+FFFD REPLACEMENT CHARACTER.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path_str = Path::new("foo.txt").to_string_lossy();
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn to_string_lossy(&self) -> Cow<str> {
self.inner.to_string_lossy()
}
/// Convert a `Path` to an owned `PathBuf`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path_str = Path::new("foo.txt").to_path_buf();
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn to_path_buf(&self) -> PathBuf {
PathBuf::from(self.inner.to_os_string())
/// * On Windows, a path is absolute if it has a prefix and starts with the
/// root: `c:\windows` is absolute, while `c:temp` and `\temp` are not. In
/// other words, `path.is_absolute() == path.prefix().is_some() && path.has_root()`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// assert_eq!(false, Path::new("foo.txt").is_absolute());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_absolute(&self) -> bool {
self.has_root() &&
}
/// A path is *relative* if it is not absolute.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// assert!(Path::new("foo.txt").is_relative());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_relative(&self) -> bool {
!self.is_absolute()
/// * has no prefix and begins with a separator, e.g. `\\windows`
/// * has a prefix followed by a separator, e.g. `c:\windows` but not `c:windows`
/// * has any non-disk prefix, e.g. `\\server\share`
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// assert!(Path::new("/etc/passwd").has_root());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn has_root(&self) -> bool {
self.components().has_root()
///
/// let path = Path::new("/foo/bar");
/// let foo = path.parent().unwrap();
+ ///
/// assert!(foo == Path::new("/foo"));
+ ///
/// let root = foo.parent().unwrap();
+ ///
/// assert!(root == Path::new("/"));
/// assert!(root.parent() == None);
/// ```
///
/// If the path terminates in `.`, `..`, or consists solely of a root or
/// prefix, `file_name` will return `None`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("hello_world.rs");
+ /// let filename = "hello_world.rs";
+ ///
+ /// assert_eq!(filename, path.file_name().unwrap());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn file_name(&self) -> Option<&OsStr> {
self.components().next_back().and_then(|p| match p {
}
/// Determines whether `base` is a prefix of `self`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/etc/passwd");
+ ///
+ /// assert!(path.starts_with("/etc"));
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn starts_with<P: AsRef<Path>>(&self, base: P) -> bool {
iter_after(self.components(), base.as_ref().components()).is_some()
}
/// Determines whether `child` is a suffix of `self`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/etc/passwd");
+ ///
+ /// assert!(path.ends_with("passwd"));
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn ends_with<P: AsRef<Path>>(&self, child: P) -> bool {
iter_after(self.components().rev(), child.as_ref().components().rev()).is_some()
/// * The entire file name if there is no embedded `.`;
/// * The entire file name if the file name begins with `.` and has no other `.`s within;
/// * Otherwise, the portion of the file name before the final `.`
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("foo.rs");
+ ///
+ /// assert_eq!("foo", path.file_stem().unwrap());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn file_stem(&self) -> Option<&OsStr> {
self.file_name().map(split_file_at_dot).and_then(|(before, after)| before.or(after))
/// * None, if there is no embedded `.`;
/// * None, if the file name begins with `.` and has no other `.`s within;
/// * Otherwise, the portion of the file name after the final `.`
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("foo.rs");
+ ///
+ /// assert_eq!("rs", path.extension().unwrap());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn extension(&self) -> Option<&OsStr> {
self.file_name().map(split_file_at_dot).and_then(|(before, after)| before.and(after))
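The three-rule split documented for `file_stem` and `extension` above can be exercised directly; a small sketch against the stable `std::path::Path` API:

```rust
use std::path::Path;

fn main() {
    // Rule 1: no embedded `.` — the whole name is the stem, no extension.
    assert_eq!(Path::new("foo").file_stem().unwrap(), "foo");
    assert_eq!(Path::new("foo").extension(), None);

    // Rule 2: leading `.` with no other `.` — still the whole name.
    assert_eq!(Path::new(".gitignore").file_stem().unwrap(), ".gitignore");
    assert_eq!(Path::new(".gitignore").extension(), None);

    // Rule 3: otherwise, split at the *final* `.`.
    assert_eq!(Path::new("foo.tar.gz").file_stem().unwrap(), "foo.tar");
    assert_eq!(Path::new("foo.tar.gz").extension().unwrap(), "gz");
}
```
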
/// Creates an owned `PathBuf` with `path` adjoined to `self`.
///
/// See `PathBuf::push` for more details on what it means to adjoin a path.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp");
+ ///
+ /// let new_path = path.join("foo");
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn join<P: AsRef<Path>>(&self, path: P) -> PathBuf {
let mut buf = self.to_path_buf();
/// Creates an owned `PathBuf` like `self` but with the given file name.
///
/// See `PathBuf::set_file_name` for more details.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp/foo.rs");
+ ///
+ /// let new_path = path.with_file_name("bar.rs");
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with_file_name<S: AsRef<OsStr>>(&self, file_name: S) -> PathBuf {
let mut buf = self.to_path_buf();
/// Creates an owned `PathBuf` like `self` but with the given extension.
///
/// See `PathBuf::set_extension` for more details.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp/foo.rs");
+ ///
+ /// let new_path = path.with_extension("txt");
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with_extension<S: AsRef<OsStr>>(&self, extension: S) -> PathBuf {
let mut buf = self.to_path_buf();
}
/// Produce an iterator over the components of the path.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp/foo.rs");
+ ///
+ /// for component in path.components() {
+ /// println!("{:?}", component);
+ /// }
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn components(&self) -> Components {
let prefix = parse_prefix(self.as_os_str());
}
/// Produce an iterator over the path's components viewed as `OsStr` slices.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp/foo.rs");
+ ///
+ /// for component in path.iter() {
+ /// println!("{:?}", component);
+ /// }
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn iter(&self) -> Iter {
Iter { inner: self.components() }
/// Returns an object that implements `Display` for safely printing paths
/// that may contain non-Unicode data.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let path = Path::new("/tmp/foo.rs");
+ ///
+ /// println!("{}", path.display());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn display(&self) -> Display {
Display { path: self }
}
#[derive(Clone)]
-struct LinkedPathNode<'a> {
+pub struct LinkedPathNode<'a> {
node: PathElem,
next: LinkedPath<'a>,
}
-type LinkedPath<'a> = Option<&'a LinkedPathNode<'a>>;
+#[derive(Copy, Clone)]
+pub struct LinkedPath<'a>(Option<&'a LinkedPathNode<'a>>);
+
+impl<'a> LinkedPath<'a> {
+ pub fn empty() -> LinkedPath<'a> {
+ LinkedPath(None)
+ }
+
+ pub fn from(node: &'a LinkedPathNode) -> LinkedPath<'a> {
+ LinkedPath(Some(node))
+ }
+}
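Wrapping the `Option<&'a LinkedPathNode<'a>>` alias in a tuple struct, as the hunk above does, is what makes the `Iterator` impl below legal: coherence forbids implementing `Iterator` for a bare `Option` of a foreign type, but a local newtype is fine. A minimal sketch of the same pattern with hypothetical `Node`/`NodeIter` types:

```rust
// A borrowed singly linked list node.
struct Node<'a> {
    value: i32,
    next: Option<&'a Node<'a>>,
}

// Newtype over the Option so we can implement Iterator for it.
#[derive(Copy, Clone)]
struct NodeIter<'a>(Option<&'a Node<'a>>);

impl<'a> Iterator for NodeIter<'a> {
    type Item = i32;
    fn next(&mut self) -> Option<i32> {
        // Option<&Node> is Copy, so `map` copies it out while the
        // closure advances the iterator state.
        self.0.map(|node| {
            self.0 = node.next;
            node.value
        })
    }
}

fn main() {
    let tail = Node { value: 2, next: None };
    let head = Node { value: 1, next: Some(&tail) };
    let collected: Vec<i32> = NodeIter(Some(&head)).collect();
    println!("{:?}", collected); // [1, 2]
}
```
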
impl<'a> Iterator for LinkedPath<'a> {
type Item = PathElem;
fn next(&mut self) -> Option<PathElem> {
- match *self {
+ match self.0 {
Some(node) => {
*self = node.next;
Some(node.node)
pub fn with_path<T, F>(&self, id: NodeId, f: F) -> T where
F: FnOnce(PathElems) -> T,
{
- self.with_path_next(id, None, f)
+ self.with_path_next(id, LinkedPath::empty(), f)
}
pub fn path_to_string(&self, id: NodeId) -> String {
_ => f([].iter().cloned().chain(next))
}
} else {
- self.with_path_next(parent, Some(&LinkedPathNode {
+ self.with_path_next(parent, LinkedPath::from(&LinkedPathNode {
node: self.get_path_elem(id),
next: next
}), f)
("start", "1.0.0", Active),
("main", "1.0.0", Active),
+ ("fundamental", "1.0.0", Active),
+
// Deprecate after snapshot
// SNAP 5520801
("unsafe_destructor", "1.0.0", Active),
("allow_internal_unstable", Gated("allow_internal_unstable",
EXPLAIN_ALLOW_INTERNAL_UNSTABLE)),
+ ("fundamental", Gated("fundamental",
+ "the `#[fundamental]` attribute \
+ is an experimental feature")),
+
// FIXME: #14408 whitelist docs since rustdoc looks at them
("doc", Whitelisted),
NodeName(&'a ast::Name),
NodeBlock(&'a ast::Block),
NodeItem(&'a ast::Item),
+ NodeSubItem(ast::NodeId),
NodeExpr(&'a ast::Expr),
NodePat(&'a ast::Pat),
}
pub fn print_trait_item(&mut self, ti: &ast::TraitItem)
-> io::Result<()> {
+ try!(self.ann.pre(self, NodeSubItem(ti.id)));
try!(self.hardbreak_if_not_bol());
try!(self.maybe_print_comment(ti.span.lo));
try!(self.print_outer_attributes(&ti.attrs));
try!(self.print_method_sig(ti.ident, sig, ast::Inherited));
if let Some(ref body) = *body {
try!(self.nbsp());
- self.print_block_with_attrs(body, &ti.attrs)
+ try!(self.print_block_with_attrs(body, &ti.attrs));
} else {
- word(&mut self.s, ";")
+ try!(word(&mut self.s, ";"));
}
}
ast::TypeTraitItem(ref bounds, ref default) => {
- self.print_associated_type(ti.ident, Some(bounds),
- default.as_ref().map(|ty| &**ty))
+ try!(self.print_associated_type(ti.ident, Some(bounds),
+ default.as_ref().map(|ty| &**ty)));
}
}
+ self.ann.post(self, NodeSubItem(ti.id))
}
pub fn print_impl_item(&mut self, ii: &ast::ImplItem) -> io::Result<()> {
+ try!(self.ann.pre(self, NodeSubItem(ii.id)));
try!(self.hardbreak_if_not_bol());
try!(self.maybe_print_comment(ii.span.lo));
try!(self.print_outer_attributes(&ii.attrs));
try!(self.head(""));
try!(self.print_method_sig(ii.ident, sig, ii.vis));
try!(self.nbsp());
- self.print_block_with_attrs(body, &ii.attrs)
+ try!(self.print_block_with_attrs(body, &ii.attrs));
}
ast::TypeImplItem(ref ty) => {
- self.print_associated_type(ii.ident, None, Some(ty))
+ try!(self.print_associated_type(ii.ident, None, Some(ty)));
}
ast::MacImplItem(codemap::Spanned { node: ast::MacInvocTT(ref pth, ref tts, _),
..}) => {
try!(self.print_tts(&tts[..]));
try!(self.pclose());
try!(word(&mut self.s, ";"));
- self.end()
+ try!(self.end())
}
}
+ self.ann.post(self, NodeSubItem(ii.id))
}
pub fn print_outer_attributes(&mut self,
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_type = "rlib"]
+#![feature(fundamental)]
+
+use std::marker::MarkerTrait;
+
+pub trait MyCopy : MarkerTrait { }
+impl MyCopy for i32 { }
+
+pub struct MyStruct<T>(T);
+
+#[fundamental]
+pub struct MyFundamentalStruct<T>(T);
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// aux-build:coherence_lib.rs
+
+// pretty-expanded FIXME #23616
+
+// Test that the `Pair` type reports an error if it contains type
+// parameters, even when they are covered by local types. This test
+// was originally intended to test the opposite, but the rules changed
+// with RFC 1023 and this became illegal.
+
+extern crate coherence_lib as lib;
+use lib::{Remote,Pair};
+
+pub struct Cover<T>(T);
+
+impl<T> Remote for Pair<T,Cover<T>> { }
+//~^ ERROR E0210
+
+fn main() { }
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// aux-build:coherence_lib.rs
+
+// Test that the `Pair` type reports an error if it contains type
+// parameters, even when they are covered by local types. This test
+// was originally intended to test the opposite, but the rules changed
+// with RFC 1023 and this became illegal.
+
+// pretty-expanded FIXME #23616
+
+extern crate coherence_lib as lib;
+use lib::{Remote,Pair};
+
+pub struct Cover<T>(T);
+
+impl<T> Remote for Pair<Cover<T>,T> { } //~ ERROR E0210
+
+fn main() { }
// aux-build:coherence_lib.rs
-// Test that it's not ok for U to appear uncovered
+// Test that it's not ok for T to appear uncovered
extern crate coherence_lib as lib;
use lib::{Remote,Pair};
pub struct Cover<T>(T);
impl<T,U> Remote for Pair<Cover<T>,U> { }
-//~^ ERROR type parameter `U` must be used as the type parameter for some local type
+//~^ ERROR type parameter `T` must be used as the type parameter for some local type
fn main() { }
impl Copy for TestE {}
impl Copy for MyType {}
+
+impl Copy for &'static mut MyType {}
+//~^ ERROR E0206
+
impl Copy for (MyType, MyType) {}
//~^ ERROR E0206
+//~| ERROR E0117
impl Copy for &'static NotSync {}
//~^ ERROR E0206
impl Copy for [MyType] {}
//~^ ERROR E0206
+//~| ERROR E0117
impl Copy for &'static [NotSync] {}
//~^ ERROR E0206
+//~| ERROR E0117
fn main() {
}
unsafe impl Send for TestE {}
unsafe impl Send for MyType {}
unsafe impl Send for (MyType, MyType) {}
-//~^ ERROR E0321
+//~^ ERROR E0117
unsafe impl Send for &'static NotSync {}
//~^ ERROR E0321
unsafe impl Send for [MyType] {}
-//~^ ERROR E0321
+//~^ ERROR E0117
unsafe impl Send for &'static [NotSync] {}
-//~^ ERROR E0321
-//~| ERROR conflicting implementations
+//~^ ERROR E0117
+//~| ERROR E0119
fn main() {
}
impl !Sync for NotSync {}
impl Sized for TestE {} //~ ERROR E0322
+
impl Sized for MyType {} //~ ERROR E0322
-impl Sized for (MyType, MyType) {} //~ ERROR E0322
+
+impl Sized for (MyType, MyType) {} //~ ERROR E0117
+
impl Sized for &'static NotSync {} //~ ERROR E0322
-impl Sized for [MyType] {} //~ ERROR E0322
+
+impl Sized for [MyType] {} //~ ERROR E0117
//~^ ERROR E0277
-impl Sized for &'static [NotSync] {} //~ ERROR E0322
+
+impl Sized for &'static [NotSync] {} //~ ERROR E0117
fn main() {
}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Tests that we consider `Box<U>: !Sugar` to be ambiguous, even
+// though we see no impl of `Sugar` for `Box`. Therefore, an overlap
+// error is reported for the following pair of impls (#23516).
+
+pub trait Sugar { fn dummy(&self) { } }
+pub trait Sweet { fn dummy(&self) { } }
+impl<T:Sugar> Sweet for T { } //~ ERROR E0119
+impl<U:Sugar> Sweet for Box<U> { }
+fn main() { }
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that a local, generic type appearing within a
+// *non-fundamental* remote type like `Vec` is not considered local.
+
+// aux-build:coherence_lib.rs
+
+// pretty-expanded FIXME #23616
+
+extern crate coherence_lib as lib;
+use lib::Remote;
+
+struct Local<T>(T);
+
+impl<T> Remote for Vec<Local<T>> { } //~ ERROR E0210
+
+fn main() { }
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that a local type (with no type parameters) appearing within a
+// *non-fundamental* remote type like `Vec` is not considered local.
+
+// aux-build:coherence_lib.rs
+
+// pretty-expanded FIXME #23616
+
+extern crate coherence_lib as lib;
+use lib::Remote;
+
+struct Local;
+
+impl Remote for Vec<Local> { } //~ ERROR E0117
+
+fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+extern crate coherence_copy_like_lib as lib;
+
+use std::marker::MarkerTrait;
+
+struct MyType { x: i32 }
+
+trait MyTrait : MarkerTrait { }
+impl<T: lib::MyCopy> MyTrait for T { }
+
+// `MyFundamentalStruct` is declared fundamental, so we can test that
+//
+// MyFundamentalStruct<MyTrait>: !MyTrait
+//
+// Huzzah.
+impl MyTrait for lib::MyFundamentalStruct<MyType> { }
+
+#[rustc_error]
+fn main() { } //~ ERROR compilation successful
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+extern crate coherence_copy_like_lib as lib;
+
+use std::marker::MarkerTrait;
+
+struct MyType { x: i32 }
+
+trait MyTrait : MarkerTrait { }
+impl<T: lib::MyCopy> MyTrait for T { }
+
+// `MyFundamentalStruct` is declared fundamental, so we can test that
+//
+// MyFundamentalStruct<&MyTrait>: !MyTrait
+//
+// Huzzah.
+impl<'a> MyTrait for lib::MyFundamentalStruct<&'a MyType> { }
+
+#[rustc_error]
+fn main() { } //~ ERROR compilation successful
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+
+extern crate coherence_copy_like_lib as lib;
+
+use std::marker::MarkerTrait;
+
+struct MyType { x: i32 }
+
+trait MyTrait : MarkerTrait { }
+
+impl<T: lib::MyCopy> MyTrait for T { } //~ ERROR E0119
+
+// Tuples are not fundamental.
+impl MyTrait for lib::MyFundamentalStruct<(MyType,)> { }
+
+#[rustc_error]
+fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// aux-build:coherence_copy_like_lib.rs
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+extern crate coherence_copy_like_lib as lib;
+
+use std::marker::MarkerTrait;
+
+struct MyType { x: i32 }
+
+trait MyTrait : MarkerTrait { }
+impl<T: lib::MyCopy> MyTrait for T { } //~ ERROR E0119
+
+// `MyStruct` is not declared fundamental, therefore this would
+// require that
+//
+// MyStruct<MyType>: !MyTrait
+//
+// which we cannot approve.
+impl MyTrait for lib::MyStruct<MyType> { }
+
+fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+extern crate coherence_copy_like_lib as lib;
+
+use std::marker::MarkerTrait;
+
+struct MyType { x: i32 }
+
+trait MyTrait : MarkerTrait { }
+impl<T: lib::MyCopy> MyTrait for T { } //~ ERROR E0119
+
+// Tuples are not fundamental, therefore this would require that
+//
+// (MyType,): !MyTrait
+//
+// which we cannot approve.
+impl MyTrait for (MyType,) { }
+
+fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+extern crate coherence_copy_like_lib as lib;
+
+struct MyType { x: i32 }
+
+// These are all legal because they are all fundamental types:
+
+impl lib::MyCopy for MyType { }
+impl<'a> lib::MyCopy for &'a MyType { }
+impl<'a> lib::MyCopy for &'a Box<MyType> { }
+impl lib::MyCopy for Box<MyType> { }
+impl lib::MyCopy for lib::MyFundamentalStruct<MyType> { }
+impl lib::MyCopy for lib::MyFundamentalStruct<Box<MyType>> { }
+
+#[rustc_error]
+fn main() { } //~ ERROR compilation successful
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+extern crate coherence_copy_like_lib as lib;
+
+struct MyType { x: i32 }
+
+// These are all legal because they are all fundamental types:
+
+// MyStruct is not fundamental.
+impl lib::MyCopy for lib::MyStruct<MyType> { } //~ ERROR E0117
+
+#[rustc_error]
+fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+extern crate coherence_copy_like_lib as lib;
+
+struct MyType { x: i32 }
+
+// These are all legal because they are all fundamental types:
+
+// Tuples are not fundamental, so this is not a local impl.
+impl lib::MyCopy for (MyType,) { } //~ ERROR E0117
+
+#[rustc_error]
+fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+extern crate coherence_copy_like_lib as lib;
+
+struct MyType { x: i32 }
+
+// naturally, legal
+impl lib::MyCopy for MyType { }
+
+#[rustc_error]
+fn main() { } //~ ERROR compilation successful
let f2: &Fat<[isize; 3]> = &f1;
let f3: &Fat<[usize]> = f2;
//~^ ERROR mismatched types
- //~| expected `&Fat<[usize]>`
- //~| found `&Fat<[isize; 3]>`
- //~| expected usize
- //~| found isize
// With a trait.
let f1 = Fat { ptr: Foo };
// which fails to type check.
ss
- //~^ ERROR cannot infer
- //~| ERROR mismatched types
+ //~^ ERROR lifetime of the source pointer does not outlive lifetime bound
+ //~| ERROR cannot infer
}
fn main() {
// `Box<SomeTrait>` defaults to a `'static` bound, so this return
// is illegal.
- ss.r //~ ERROR mismatched types
+ ss.r //~ ERROR lifetime of the source pointer does not outlive lifetime bound
}
fn store(ss: &mut SomeStruct, b: Box<SomeTrait>) {
fn store1<'b>(ss: &mut SomeStruct, b: Box<SomeTrait+'b>) {
// Here we override the lifetimes explicitly, and so naturally we get an error.
- ss.r = b; //~ ERROR mismatched types
+ ss.r = b; //~ ERROR lifetime of the source pointer does not outlive lifetime bound
}
fn main() {
fn make_object_bad<'a,'b,'c,A:SomeTrait+'a+'b>(v: A) -> Box<SomeTrait+'c> {
// A outlives 'a AND 'b...but not 'c.
- box v as Box<SomeTrait+'a> //~ ERROR mismatched types
+ box v as Box<SomeTrait+'a> //~ ERROR lifetime of the source pointer does not outlive
}
fn main() {
fn foo3<'a,'b>(x: &'a mut Dummy) -> &'b mut Dummy {
// Without knowing 'a:'b, we can't coerce
- x //~ ERROR mismatched types
+ x //~ ERROR lifetime of the source pointer does not outlive
//~^ ERROR cannot infer
}
use lib::DefaultedTrait;
struct A;
-impl DefaultedTrait for (A,) { } //~ ERROR E0321
+impl DefaultedTrait for (A,) { } //~ ERROR E0117
struct B;
-impl !DefaultedTrait for (B,) { } //~ ERROR E0321
+impl !DefaultedTrait for (B,) { } //~ ERROR E0117
struct C;
struct D<T>(T);
impl DefaultedTrait for Box<C> { } //~ ERROR E0321
-impl DefaultedTrait for lib::Something<C> { } //~ ERROR E0321
+impl DefaultedTrait for lib::Something<C> { } //~ ERROR E0117
impl DefaultedTrait for D<C> { } // OK
fn main() { }
(a, ref b) => {}
}
for a in &[111i32] {}
+ let test = if some_predicate() { 1 } else { 2 };
+ while some_predicate() {
+ let abc = !some_predicate();
+ }
+ loop {
+ let abc = !some_predicate();
+ break;
+ }
+ // nested block
+ {
+ let abc = !some_predicate();
+
+ {
+ let def = !some_predicate();
+ }
+ }
}
fn after_panic() {
(a, ref b) => {}
}
for a in &[111i32] {}
+ let test = if some_predicate() { 1 } else { 2 };
+ while some_predicate() {
+ let abc = !some_predicate();
+ }
+ loop {
+ let abc = !some_predicate();
+ break;
+ }
+ // nested block
+ {
+ let abc = !some_predicate();
+
+ {
+ let def = !some_predicate();
+ }
+ }
}
fn after_diverging_function() {
(a, ref b) => {}
}
for a in &[111i32] {}
+ let test = if some_predicate() { 1 } else { 2 };
+ while some_predicate() {
+ let abc = !some_predicate();
+ }
+ loop {
+ let abc = !some_predicate();
+ break;
+ }
+ // nested block
+ {
+ let abc = !some_predicate();
+
+ {
+ let def = !some_predicate();
+ }
+ }
}
fn after_break() {
(a, ref b) => {}
}
for a in &[111i32] {}
+ let test = if some_predicate() { 1 } else { 2 };
+ while some_predicate() {
+ let abc = !some_predicate();
+ }
+ loop {
+ let abc = !some_predicate();
+ break;
+ }
+ // nested block
+ {
+ let abc = !some_predicate();
+
+ {
+ let def = !some_predicate();
+ }
+ }
}
}
fn after_continue() {
for _ in 0..10i32 {
- break;
+ continue;
let x = "0";
let (ref y,z) = (1i32, 2u32);
match (20i32, 'c') {
(a, ref b) => {}
}
for a in &[111i32] {}
+ let test = if some_predicate() { 1 } else { 2 };
+ while some_predicate() {
+ let abc = !some_predicate();
+ }
+ loop {
+ let abc = !some_predicate();
+ break;
+ }
+ // nested block
+ {
+ let abc = !some_predicate();
+
+ {
+ let def = !some_predicate();
+ }
+ }
}
}
fn diverge() -> ! {
panic!();
}
+
+fn some_predicate() -> bool { true || false }
+
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// aux-build:coherence_lib.rs
-
-// Test that it's ok for T to appear first in the self-type, as long
-// as it's covered somewhere.
-
-// pretty-expanded FIXME #23616
-
-extern crate coherence_lib as lib;
-use lib::{Remote,Pair};
-
-pub struct Cover<T>(T);
-
-impl<T> Remote for Pair<T,Cover<T>> { }
-
-fn main() { }
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// aux-build:coherence_lib.rs
-
-// Test that it's ok for T to appear second in the self-type, as long
-// as it's covered somewhere.
-
-// pretty-expanded FIXME #23616
-
-extern crate coherence_lib as lib;
-use lib::{Remote,Pair};
-
-pub struct Cover<T>(T);
-
-impl<T> Remote for Pair<Cover<T>,T> { }
-
-fn main() { }
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// aux-build:coherence_lib.rs
-
-// pretty-expanded FIXME #23616
-
-extern crate coherence_lib as lib;
-use lib::Remote;
-
-struct Local;
-
-impl Remote for Vec<Local> { }
-
-fn main() { }
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// aux-build:coherence_lib.rs
-
-// pretty-expanded FIXME #23616
-
-extern crate coherence_lib as lib;
-use lib::Remote;
-
-struct Local<T>(T);
-
-impl<T> Remote for Vec<Local<T>> { }
-
-fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that we are able to introduce a negative constraint that
+// `MyType: !MyTrait` along with other "fundamental" wrappers.
+
+// aux-build:coherence_copy_like_lib.rs
+
+extern crate coherence_copy_like_lib as lib;
+
+use std::marker::MarkerTrait;
+
+struct MyType { x: i32 }
+
+trait MyTrait : MarkerTrait { }
+impl<T: lib::MyCopy> MyTrait for T { }
+impl MyTrait for MyType { }
+impl<'a> MyTrait for &'a MyType { }
+impl MyTrait for Box<MyType> { }
+impl<'a> MyTrait for &'a Box<MyType> { }
+
+fn main() { }
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// Test that we pick which version of `Foo` to run based on whether
-// the type we (ultimately) inferred for `x` is copyable or not.
-//
-// In this case, the two versions are both impls of same trait, and
-// hence we we can resolve method even without knowing yet which
-// version will run (note that the `push` occurs after the call to
-// `foo()`).
+// Test that when we write `x.foo()`, we do not have to know the
+// complete type of `x` in order to type-check the method call. In
+// this case, we know that `x: Vec<_1>`, but we don't know what type
+// `_1` is (because the call to `push` comes later). To pick between
+// the impls, we would have to know `_1`, since we have to know
+// whether `_1: MyCopy` or `_1 == Box<i32>`. However (and this is the
+// point of the test), we don't have to pick between the two impls --
+// it is enough to know that `foo` comes from the `Foo` trait. We can
+// translate the call as `Foo::foo(&x)` and let the specific impl get
+// chosen later.
// pretty-expanded FIXME #23616
fn foo(&self) -> isize;
}
-impl<T:Copy> Foo for Vec<T> {
+trait MyCopy { fn foo(&self) { } }
+impl MyCopy for i32 { }
+
+impl<T:MyCopy> Foo for Vec<T> {
fn foo(&self) -> isize {1}
}
-impl<T> Foo for Vec<Box<T>> {
+impl Foo for Vec<Box<i32>> {
fn foo(&self) -> isize {2}
}
fn call_foo_copy() -> isize {
let mut x = Vec::new();
let y = x.foo();
- x.push(0_usize);
+ x.push(0_i32);
y
}
fn call_foo_other() -> isize {
- let mut x: Vec<Box<_>> = Vec::new();
+ let mut x: Vec<_> = Vec::new();
let y = x.foo();
- x.push(box 0);
+ let z: Box<i32> = box 0;
+ x.push(z);
y
}
#![allow(unknown_features)]
#![feature(box_syntax)]
+use std::marker::MarkerTrait;
+
trait Get {
fn get(&self) -> Self;
}
-impl<T:Copy> Get for T {
- fn get(&self) -> T { *self }
+trait MyCopy : MarkerTrait { fn copy(&self) -> Self; }
+impl MyCopy for u16 { fn copy(&self) -> Self { *self } }
+impl MyCopy for u32 { fn copy(&self) -> Self { *self } }
+impl MyCopy for i32 { fn copy(&self) -> Self { *self } }
+impl<T:Copy> MyCopy for Option<T> { fn copy(&self) -> Self { *self } }
+
+impl<T:MyCopy> Get for T {
+ fn get(&self) -> T { self.copy() }
}
-impl<T:Get> Get for Box<T> {
- fn get(&self) -> Box<T> { box get_it(&**self) }
+impl Get for Box<i32> {
+ fn get(&self) -> Box<i32> { box get_it(&**self) }
}
fn get_it<T:Get>(t: &T) -> T {
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that you can supply `&F` where `F: FnMut()`.
+
+// pretty-expanded FIXME #23616
+
+#![feature(lang_items, unboxed_closures)]
+
+fn a<F:FnMut() -> i32>(mut f: F) -> i32 {
+ f()
+}
+
+fn b(f: &mut FnMut() -> i32) -> i32 {
+ a(f)
+}
+
+fn c<F:FnMut() -> i32>(f: &mut F) -> i32 {
+ a(f)
+}
+
+fn main() {
+ let z: isize = 7;
+
+ let x = b(&mut || 22);
+ assert_eq!(x, 22);
+
+ let x = c(&mut || 22);
+ assert_eq!(x, 22);
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that you can supply `&F` where `F: Fn()`.
+
+// pretty-expanded FIXME #23616
+
+#![feature(lang_items, unboxed_closures)]
+
+fn a<F:Fn() -> i32>(f: F) -> i32 {
+ f()
+}
+
+fn b(f: &Fn() -> i32) -> i32 {
+ a(f)
+}
+
+fn c<F:Fn() -> i32>(f: &F) -> i32 {
+ a(f)
+}
+
+fn main() {
+ let z: isize = 7;
+
+ let x = b(&|| 22);
+ assert_eq!(x, 22);
+
+ let x = c(&|| 22);
+ assert_eq!(x, 22);
+}