& everything builds in the correct manner.
- `make check-stage1-std NO_REBUILD=1` - test the standard library without
rebuilding the entire compiler
-- `make check TESTNAME=<path-to-test-file>.rs` - Run a single test file
-- `make check-stage1-rpass TESTNAME=<path-to-test-file>.rs` - Run a single
+- `make check TESTNAME=<substring-of-test-name>` - Run a matching set of tests.
+ - `TESTNAME` should be a substring of the tests to match against, e.g. it could
+ be the fully qualified test name, or just a part of it.
+ `TESTNAME=collections::hash::map::test_map::test_capacity_not_less_than_len`
+ or `TESTNAME=test_capacity_not_less_than_len`.
+- `make check-stage1-rpass TESTNAME=<substring-of-test-name>` - Run a single
rpass test with the stage1 compiler (this will be quicker than running the
command above as we only build the stage1 compiler, not the entire thing).
You can also leave off the `-rpass` to run all stage1 test types.
("msp430", "msp430"),
("powerpc", "powerpc"),
("powerpc64", "powerpc64"),
- ("powerpc64le", "powerpc64le"),
("s390x", "systemz"),
("sparc", "sparc"),
("x86_64", "x86_64"),
detected/enforced by panics, documenting it is very important.
```rust
-/// # Failures
+/// # Errors
# fn foo() {}
```
```
You might think that we could use the `map` combinator to reduce the case
-analysis, but its type doesn't quite fit. Namely, `map` takes a function that
-does something only with the inner value. The result of that function is then
-*always* [rewrapped with `Some`](#code-option-map). Instead, we need something
-like `map`, but which allows the caller to return another `Option`. Its generic
-implementation is even simpler than `map`:
+analysis, but its type doesn't quite fit...
+
+```rust,ignore
+fn file_path_ext(file_path: &str) -> Option<&str> {
+ file_name(file_path).map(|x| extension(x)) // Compilation error
+}
+```
+
+The `map` function here wraps the value returned by the `extension` function
+inside an `Option<_>`. Since the `extension` function itself returns an
+`Option<&str>`, the expression `file_name(file_path).map(|x| extension(x))`
+actually returns an `Option<Option<&str>>`.
+
+But since `file_path_ext` just returns `Option<&str>` (and not
+`Option<Option<&str>>`) we get a compilation error.
+
+The result of the function passed to `map` is *always* [rewrapped with
+`Some`](#code-option-map). Instead, we need something like `map`, but which
+allows the caller to return an `Option<_>` directly, without wrapping it in
+another `Option<_>`.
+
+Its generic implementation is even simpler than `map`:
```rust
fn and_then<F, T, A>(option: Option<T>, f: F) -> Option<A>
        where F: FnOnce(T) -> Option<A> {
    match option {
        None => None,
        Some(value) => f(value),
    }
}
```
+Side note: Since `and_then` essentially works like `map` but returns an
+`Option<_>` instead of an `Option<Option<_>>`, it is known as `flatmap` in some
+other languages.
+
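To make the fix concrete, here is a self-contained sketch; the simplified `extension` helper below is an assumption standing in for the chapter's earlier definition:

```rust
// Simplified stand-in for the chapter's `extension` helper.
fn extension(file_name: &str) -> Option<&str> {
    file_name.rfind('.').map(|i| &file_name[i + 1..])
}

// With `and_then`, the closure's `Option` is returned directly,
// so the result is `Option<&str>`, not `Option<Option<&str>>`.
fn file_path_ext(file_path: &str) -> Option<&str> {
    Some(file_path).and_then(extension)
}

fn main() {
    assert_eq!(file_path_ext("src/main.rs"), Some("rs"));
    assert_eq!(file_path_ext("Makefile"), None);
}
```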
The `Option` type has many other combinators [defined in the standard
library][5]. It is a good idea to skim this list and familiarize
yourself with what's available—they can often reduce case analysis
| Target | std |rustc|cargo| notes |
|-------------------------------|-----|-----|-----|----------------------------|
+| `i686-pc-windows-msvc` | ✓ | ✓ | ✓ | 32-bit MSVC (Windows 7+) |
| `x86_64-pc-windows-msvc` | ✓ | ✓ | ✓ | 64-bit MSVC (Windows 7+) |
| `i686-pc-windows-gnu` | ✓ | ✓ | ✓ | 32-bit MinGW (Windows 7+) |
| `x86_64-pc-windows-gnu` | ✓ | ✓ | ✓ | 64-bit MinGW (Windows 7+) |
| Target | std |rustc|cargo| notes |
|-------------------------------|-----|-----|-----|----------------------------|
-| `i686-pc-windows-msvc` | ✓ | ✓ | ✓ | 32-bit MSVC (Windows 7+) |
| `x86_64-unknown-linux-musl` | ✓ | | | 64-bit Linux with MUSL |
| `arm-linux-androideabi` | ✓ | | | ARM Android |
| `arm-unknown-linux-gnueabi` | ✓ | ✓ | | ARM Linux (2.6.18+) |
| `i686-linux-android` | ✓ | | | 32-bit x86 Android |
| `aarch64-linux-android` | ✓ | | | ARM64 Android |
| `powerpc-unknown-linux-gnu` | ✓ | | | PowerPC Linux (2.6.18+) |
+| `powerpc64-unknown-linux-gnu` | ✓ | | | PPC64 Linux (2.6.18+) |
+|`powerpc64le-unknown-linux-gnu`| ✓ | | | PPC64LE Linux (2.6.18+) |
+|`armv7-unknown-linux-gnueabihf`| ✓ | | | ARMv7 Linux (2.6.18+) |
| `i386-apple-ios` | ✓ | | | 32-bit x86 iOS |
| `x86_64-apple-ios` | ✓ | | | 64-bit x86 iOS |
| `armv7-apple-ios` | ✓ | | | ARM iOS |
| `x86_64-unknown-bitrig` | ✓ | ✓ | | 64-bit Bitrig |
| `x86_64-unknown-dragonfly` | ✓ | ✓ | | 64-bit DragonFlyBSD |
| `x86_64-rumprun-netbsd` | ✓ | | | 64-bit NetBSD Rump Kernel |
+| `x86_64-sun-solaris` | ✓ | ✓ | | 64-bit Solaris/SunOS |
| `i686-pc-windows-msvc` (XP) | ✓ | | | Windows XP support |
| `x86_64-pc-windows-msvc` (XP) | ✓ | | | Windows XP support |
*binaries* (as in `/usr/bin`, if you’re on a Unix system).
Cargo has generated two files and one directory for us: a `Cargo.toml` and a
-*src* directory with a *main.rs* file inside. These should look familliar,
+*src* directory with a *main.rs* file inside. These should look familiar,
they’re exactly what we created by hand, above.
This output is all you need to get started. First, open `Cargo.toml`. It should
message you passed it. A `panic!` like this will cause our program to crash,
displaying the message.
-[expect]: ../std/option/enum.Option.html#method.expect
+[expect]: ../std/result/enum.Result.html#method.expect
[panic]: error-handling.html
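As a minimal sketch of that behavior (with hypothetical values, not this chapter's program):

```rust
fn main() {
    // `parse` returns a `Result`; `expect` unwraps the `Ok` value
    // or panics with the given message on `Err`.
    let ok: u32 = "42".parse().expect("Please type a number!");
    assert_eq!(ok, 42);

    // This line, if uncommented, would panic with the message above:
    // let bad: u32 = "abc".parse().expect("Please type a number!");
}
```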
If we leave off calling this method, our program will compile, but
#### On iterators:
```rust
-# let lines = "hello\nworld".lines();
+let lines = "hello\nworld".lines();
+
for (linenumber, line) in lines.enumerate() {
println!("{}: {}", linenumber, line);
}
Outputs:
```text
-0: Content of line one
-1: Content of line two
-2: Content of line three
-3: Content of line four
+0: hello
+1: world
```
## Ending iteration early
Here, we bind the first and last element of the tuple to `x` and `z`, but
ignore the middle element.
-Similarly, you can use `..` in a pattern to disregard multiple values.
+It’s worth noting that using `_` never binds the value in the first place,
+which means the value may not be moved:
+
+```rust
+let tuple: (u32, String) = (5, String::from("five"));
+
+// Here, tuple is moved, because the String moved:
+let (x, _s) = tuple;
+
+// The next line would give "error: use of partially moved value: `tuple`"
+// println!("Tuple is: {:?}", tuple);
+
+// However,
+
+let tuple = (5, String::from("five"));
+
+// Here, tuple is _not_ moved, as the String was never moved, and u32 is Copy:
+let (x, _) = tuple;
+
+// That means this works:
+println!("Tuple is: {:?}", tuple);
+```
+
+This also means that any temporary variables will be dropped at the end of the
+statement:
+
+```rust
+// Here, the String created will be dropped immediately, as it’s not bound:
+
+let _ = String::from(" hello ").trim();
+```
+
+You can also use `..` in a pattern to disregard multiple values:
```rust
enum OptionalTuple {
production. For example, it controls the behavior of the standard library's
`debug_assert!` macro.
* `target_arch = "..."` - Target CPU architecture, such as `"x86"`, `"x86_64"`
- `"mips"`, `"powerpc"`, `"powerpc64"`, `"powerpc64le"`, `"arm"`, or `"aarch64"`.
+ `"mips"`, `"powerpc"`, `"powerpc64"`, `"arm"`, or `"aarch64"`.
* `target_endian = "..."` - Endianness of the target CPU, either `"little"` or
`"big"`.
* `target_env = ".."` - An option provided by the compiler by default
if errcode in errcode_checked:
continue
all_errors.append(errcode)
- print("error: unused error code: " + errcode)
+ print("error: unused error code: {0} ({1}:{2})".format(*errcode_map[errcode][0]))
errors = True
// constant at the call site and the branch will be optimized out.
#[cfg(all(any(target_arch = "arm",
target_arch = "mips",
- target_arch = "mipsel",
target_arch = "powerpc")))]
const MIN_ALIGN: usize = 8;
#[cfg(all(any(target_arch = "x86",
#[cfg(all(any(target_arch = "x86",
target_arch = "arm",
target_arch = "mips",
- target_arch = "mipsel",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le")))]
+ target_arch = "powerpc64")))]
const MIN_ALIGN: usize = 8;
#[cfg(all(any(target_arch = "x86_64",
target_arch = "aarch64")))]
nelem: usize,
}
-/// An iterator over mutable references to the items of a `LinkedList`.
+/// An iterator over the items of a `LinkedList`.
#[derive(Clone)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IntoIter<T> {
/// 'Whitespace' is defined according to the terms of the Unicode Derived
/// Core Property `White_Space`.
///
+ /// # Text directionality
+ ///
+ /// A string is a sequence of bytes. 'Left' in this context means the first
+ /// position of that byte string; for a language like Arabic or Hebrew,
+ /// which is 'right to left' rather than 'left to right', this will be
+ /// the _right_ side, not the left.
+ ///
/// # Examples
///
/// Basic usage:
///
/// assert_eq!("Hello\tworld\t", s.trim_left());
/// ```
+ ///
+ /// Directionality:
+ ///
+ /// ```
+ /// let s = " English";
+ /// assert!(Some('E') == s.trim_left().chars().next());
+ ///
+ /// let s = " עברית";
+ /// assert!(Some('ע') == s.trim_left().chars().next());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn trim_left(&self) -> &str {
UnicodeStr::trim_left(self)
/// 'Whitespace' is defined according to the terms of the Unicode Derived
/// Core Property `White_Space`.
///
+ /// # Text directionality
+ ///
+ /// A string is a sequence of bytes. 'Right' in this context means the last
+ /// position of that byte string; for a language like Arabic or Hebrew,
+ /// which is 'right to left' rather than 'left to right', this will be
+ /// the _left_ side, not the right.
+ ///
/// # Examples
///
/// Basic usage:
///
/// assert_eq!(" Hello\tworld", s.trim_right());
/// ```
+ ///
+ /// Directionality:
+ ///
+ /// ```
+ /// let s = "English ";
+ /// assert!(Some('h') == s.trim_right().chars().rev().next());
+ ///
+ /// let s = "עברית ";
+ /// assert!(Some('ת') == s.trim_right().chars().rev().next());
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn trim_right(&self) -> &str {
UnicodeStr::trim_right(self)
///
/// [`char`]: primitive.char.html
///
+ /// # Text directionality
+ ///
+ /// A string is a sequence of bytes. 'Left' in this context means the first
+ /// position of that byte string; for a language like Arabic or Hebrew,
+ /// which is 'right to left' rather than 'left to right', this will be
+ /// the _right_ side, not the left.
+ ///
/// # Examples
///
/// Basic usage:
///
/// [`char`]: primitive.char.html
///
+ /// # Text directionality
+ ///
+ /// A string is a sequence of bytes. 'Right' in this context means the last
+ /// position of that byte string; for a language like Arabic or Hebrew,
+ /// which is 'right to left' rather than 'left to right', this will be
+ /// the _left_ side, not the right.
+ ///
/// # Examples
///
/// Simple patterns:
///
/// [`FromStr`]: str/trait.FromStr.html
///
- /// # Failure
+ /// # Errors
///
/// Will return `Err` if it's not possible to parse this string slice into
/// the desired type.
///
/// [`str::from_utf8()`]: ../str/fn.from_utf8.html
///
- /// # Failure
+ /// # Errors
///
/// Returns `Err` if the slice is not UTF-8 with a description as to why the
/// provided bytes are not UTF-8. The vector you moved in is also included.
}
#[test]
+#[allow(dead_code)]
fn test_variance() {
use std::collections::btree_map::{Iter, IntoIter, Range, Keys, Values};
}
#[test]
+#[allow(dead_code)]
fn test_variance() {
use std::collections::btree_set::{IntoIter, Iter, Range};
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+#![deny(warnings)]
+
#![feature(ascii)]
#![feature(binary_heap_extras)]
#![feature(box_syntax)]
#![feature(collections_bound)]
#![feature(const_fn)]
#![feature(fn_traits)]
-#![feature(deque_extras)]
-#![feature(drain)]
#![feature(enumset)]
-#![feature(into_cow)]
#![feature(iter_arith)]
#![feature(pattern)]
#![feature(rand)]
-#![feature(range_inclusive)]
#![feature(rustc_private)]
#![feature(set_recovery)]
#![feature(slice_bytes)]
-#![feature(slice_splits)]
#![feature(step_by)]
#![feature(str_char)]
#![feature(str_escape)]
-#![feature(str_match_indices)]
#![feature(str_utf16)]
#![feature(test)]
#![feature(unboxed_closures)]
#![feature(unicode)]
-#![feature(vec_push_all)]
#[macro_use] extern crate log;
}
#[test]
+#[allow(deprecated)]
fn test_bytes_set_memory() {
use std::slice::bytes::MutableByteVector;
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use std::borrow::{IntoCow, Cow};
+use std::borrow::Cow;
use std::iter::repeat;
use test::Bencher;
+pub trait IntoCow<'a, B: ?Sized> where B: ToOwned {
+ fn into_cow(self) -> Cow<'a, B>;
+}
+
+impl<'a> IntoCow<'a, str> for String {
+ fn into_cow(self) -> Cow<'a, str> {
+ Cow::Owned(self)
+ }
+}
+
+impl<'a> IntoCow<'a, str> for &'a str {
+ fn into_cow(self) -> Cow<'a, str> {
+ Cow::Borrowed(self)
+ }
+}
+
#[test]
fn test_from_str() {
let owned: Option<::std::string::String> = "string".parse().ok();
let mut s = String::from("ABC");
unsafe {
let mv = s.as_mut_vec();
- mv.push_all(&[b'D']);
+ mv.extend_from_slice(&[b'D']);
}
assert_eq!(s, "ABCD");
}
b.iter(|| {
let mut dst = dst.clone();
- dst.push_all(&src);
+ dst.extend_from_slice(&src);
assert_eq!(dst.len(), dst_len + src_len);
assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
});
//! Like many traits, these are often used as bounds for generic functions, to
//! support arguments of multiple types.
//!
+//! - Impl the `As*` traits for reference-to-reference conversions
+//! - Impl the `Into` trait when you want to consume the value in the conversion
+//! - The `From` trait is the most flexible, useful for conversions of both values and references
+//!
+//! As a library writer, you should prefer implementing `From<T>` rather than
+//! `Into<U>`, as `From` provides greater flexibility and offers the equivalent `Into`
+//! implementation for free, thanks to a blanket implementation in the standard library.
+//!
+//! **Note: these traits must not fail**. If the conversion can fail, you must use a dedicated
+//! method which returns an `Option<T>` or a `Result<T, E>`.
+//!
+//! # Generic impl
+//!
+//! - `AsRef` and `AsMut` auto-dereference if the inner type is a reference
+//! - `From<U> for T` implies `Into<T> for U`
+//! - `From` and `Into` are reflexive, which means that all types can `into()`
+//! themselves and `from()` themselves
+//!
//! See each trait for usage examples.
#![stable(feature = "rust1", since = "1.0.0")]
///
/// [book]: ../../book/borrow-and-asref.html
///
+/// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which
+/// returns an `Option<T>` or a `Result<T, E>`.
+///
/// # Examples
///
/// Both `String` and `&str` implement `AsRef<str>`:
/// let s = "hello".to_string();
/// is_hello(s);
/// ```
+///
+/// # Generic Impls
+///
+/// - `AsRef` auto-dereferences if the inner type is a reference or a mutable
+/// reference (e.g. `foo.as_ref()` will work the same if `foo` has type `&mut Foo` or `&&mut Foo`)
+///
#[stable(feature = "rust1", since = "1.0.0")]
pub trait AsRef<T: ?Sized> {
/// Performs the conversion.
}
/// A cheap, mutable reference-to-mutable reference conversion.
+///
+/// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which
+/// returns an `Option<T>` or a `Result<T, E>`.
+///
+/// # Generic Impls
+///
+/// - `AsMut` auto-dereferences if the inner type is a mutable reference
+/// (e.g. `foo.as_mut()` will work the same if `foo` has type `&mut Foo` or `&mut &mut Foo`)
+///
#[stable(feature = "rust1", since = "1.0.0")]
pub trait AsMut<T: ?Sized> {
/// Performs the conversion.
/// A conversion that consumes `self`, which may or may not be expensive.
///
+/// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which
+/// returns an `Option<T>` or a `Result<T, E>`.
+///
+/// Library writers should not implement this trait directly; instead, they should prefer
+/// implementing the `From` trait, which offers greater flexibility and provides the equivalent
+/// `Into` implementation for free, thanks to a blanket implementation in the standard library.
+///
/// # Examples
///
/// `String` implements `Into<Vec<u8>>`:
/// let s = "hello".to_string();
/// is_hello(s);
/// ```
+///
+/// # Generic Impls
+///
+/// - `From<T> for U` implies `Into<U> for T`
+/// - `into()` is reflexive, which means that `Into<T> for T` is implemented
+///
#[stable(feature = "rust1", since = "1.0.0")]
pub trait Into<T>: Sized {
/// Performs the conversion.
/// Construct `Self` via a conversion.
///
+/// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which
+/// returns an `Option<T>` or a `Result<T, E>`.
+///
/// # Examples
///
/// `String` implements `From<&str>`:
///
/// assert_eq!(string, other_string);
/// ```
+/// # Generic impls
+///
+/// - `From<T> for U` implies `Into<U> for T`
+/// - `from()` is reflexive, which means that `From<T> for T` is implemented
+///
#[stable(feature = "rust1", since = "1.0.0")]
pub trait From<T>: Sized {
/// Performs the conversion.
/// // got a false, take_while() isn't used any more
/// assert_eq!(iter.next(), None);
/// ```
+ ///
+ /// Because `take_while()` needs to look at the value in order to see if it
+ /// should be included or not, consuming iterators will see that it is
+ /// removed:
+ ///
+ /// ```
+ /// let a = [1, 2, 3, 4];
+ /// let mut iter = a.into_iter();
+ ///
+ /// let result: Vec<i32> = iter.by_ref()
+ /// .take_while(|n| **n != 3)
+ /// .cloned()
+ /// .collect();
+ ///
+ /// assert_eq!(result, &[1, 2]);
+ ///
+ /// let result: Vec<i32> = iter.cloned().collect();
+ ///
+ /// assert_eq!(result, &[4]);
+ /// ```
+ ///
+ /// The `3` is no longer there, because it was consumed in order to see if
+ /// the iteration should stop, but wasn't placed back into the iterator or
+ /// some similar thing.
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P> where
///
/// let mut iter = numbers.iter();
///
-/// let n = iter.next();
-/// assert_eq!(Some(&1), n);
-///
-/// let n = iter.next_back();
-/// assert_eq!(Some(&3), n);
-///
-/// let n = iter.next_back();
-/// assert_eq!(Some(&2), n);
-///
-/// let n = iter.next();
-/// assert_eq!(None, n);
-///
-/// let n = iter.next_back();
-/// assert_eq!(None, n);
+/// assert_eq!(Some(&1), iter.next());
+/// assert_eq!(Some(&3), iter.next_back());
+/// assert_eq!(Some(&2), iter.next_back());
+/// assert_eq!(None, iter.next());
+/// assert_eq!(None, iter.next_back());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub trait DoubleEndedIterator: Iterator {
///
/// let mut iter = numbers.iter();
///
- /// let n = iter.next();
- /// assert_eq!(Some(&1), n);
- ///
- /// let n = iter.next_back();
- /// assert_eq!(Some(&3), n);
- ///
- /// let n = iter.next_back();
- /// assert_eq!(Some(&2), n);
- ///
- /// let n = iter.next();
- /// assert_eq!(None, n);
- ///
- /// let n = iter.next_back();
- /// assert_eq!(None, n);
+ /// assert_eq!(Some(&1), iter.next());
+ /// assert_eq!(Some(&3), iter.next_back());
+ /// assert_eq!(Some(&2), iter.next_back());
+ /// assert_eq!(None, iter.next());
+ /// assert_eq!(None, iter.next_back());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
fn next_back(&mut self) -> Option<Self::Item>;
///
/// [`map()`]: trait.Iterator.html#method.map
/// [`Iterator`]: trait.Iterator.html
+///
+/// # Notes about side effects
+///
+/// The [`map()`] iterator implements [`DoubleEndedIterator`], meaning that
+/// you can also [`map()`] backwards:
+///
+/// ```rust
+/// let v: Vec<i32> = vec![1, 2, 3].into_iter().rev().map(|x| x + 1).collect();
+///
+/// assert_eq!(v, [4, 3, 2]);
+/// ```
+///
+/// [`DoubleEndedIterator`]: trait.DoubleEndedIterator.html
+///
+/// But if your closure has state, iterating backwards may act in a way you do
+/// not expect. Let's go through an example. First, in the forward direction:
+///
+/// ```rust
+/// let mut c = 0;
+///
+/// for pair in vec!['a', 'b', 'c'].into_iter()
+/// .map(|letter| { c += 1; (letter, c) }) {
+/// println!("{:?}", pair);
+/// }
+/// ```
+///
+/// This will print "('a', 1), ('b', 2), ('c', 3)".
+///
+/// Now consider this twist where we add a call to `rev`. This version will
+/// print `('c', 1), ('b', 2), ('a', 3)`. Note that the letters are reversed,
+/// but the values of the counter still go in order. This is because `map()` is
+/// still being called lazily on each item, but we are popping items off the
+/// back of the vector now, instead of shifting them from the front.
+///
+/// ```rust
+/// let mut c = 0;
+///
+/// for pair in vec!['a', 'b', 'c'].into_iter()
+/// .map(|letter| { c += 1; (letter, c) })
+/// .rev() {
+/// println!("{:?}", pair);
+/// }
+/// ```
#[must_use = "iterator adaptors are lazy and do nothing unless consumed"]
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Clone)]
/// it, this function is one way to have a stack-allocated string. There is
/// an example of this in the examples section below.
///
-/// # Failure
+/// # Errors
///
/// Returns `Err` if the slice is not UTF-8 with a description as to why the
/// provided slice is not UTF-8.
/// ```
/// use std::sync::atomic::{AtomicUsize, Ordering};
///
- /// let some_usize= AtomicUsize::new(5);
+ /// let some_usize = AtomicUsize::new(5);
///
/// assert_eq!(some_usize.swap(10, Ordering::Relaxed), 5);
/// assert_eq!(some_usize.load(Ordering::Relaxed), 10);
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use core::fmt::radix;
#[test]
fn test_format_int() {
}
#[test]
+#[allow(deprecated)]
fn test_format_radix() {
+ use core::fmt::radix;
assert!(format!("{:04}", radix(3, 2)) == "0011");
assert!(format!("{}", radix(55, 36)) == "1j");
}
#[test]
#[should_panic]
+#[allow(deprecated)]
fn test_radix_base_too_large() {
+ use core::fmt::radix;
let _ = radix(55, 37);
}
+#[allow(deprecated)]
mod u32 {
use test::Bencher;
use core::fmt::radix;
}
}
+#[allow(deprecated)]
mod i32 {
use test::Bencher;
use core::fmt::radix;
}
#[test]
-fn test_max_by() {
+fn test_max_by_key() {
let xs: &[isize] = &[-3, 0, 1, 5, -10];
- assert_eq!(*xs.iter().max_by(|x| x.abs()).unwrap(), -10);
+ assert_eq!(*xs.iter().max_by_key(|x| x.abs()).unwrap(), -10);
}
#[test]
-fn test_min_by() {
+fn test_min_by_key() {
let xs: &[isize] = &[-3, 0, 1, 5, -10];
- assert_eq!(*xs.iter().min_by(|x| x.abs()).unwrap(), 0);
+ assert_eq!(*xs.iter().min_by_key(|x| x.abs()).unwrap(), 0);
}
#[test]
fn scatter(x: i32) -> i32 { (x * 31) % 127 }
#[bench]
-fn bench_max_by(b: &mut Bencher) {
+fn bench_max_by_key(b: &mut Bencher) {
b.iter(|| {
let it = 0..100;
- it.max_by(|&x| scatter(x))
+ it.max_by_key(|&x| scatter(x))
})
}
// http://www.reddit.com/r/rust/comments/31syce/using_iterators_to_find_the_index_of_the_min_or/
#[bench]
-fn bench_max_by2(b: &mut Bencher) {
+fn bench_max_by_key2(b: &mut Bencher) {
fn max_index_iter(array: &[i32]) -> usize {
- array.iter().enumerate().max_by(|&(_, item)| item).unwrap().0
+ array.iter().enumerate().max_by_key(|&(_, item)| item).unwrap().0
}
let mut data = vec![0i32; 1638];
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+#![deny(warnings)]
+
#![feature(as_unsafe_cell)]
#![feature(borrow_state)]
#![feature(box_syntax)]
#![feature(cell_extras)]
#![feature(const_fn)]
-#![feature(core)]
#![feature(core_float)]
#![feature(core_private_bignum)]
#![feature(core_private_diy_float)]
#![feature(decode_utf16)]
#![feature(fixed_size_array)]
#![feature(float_extras)]
-#![feature(float_from_str_radix)]
#![feature(flt2dec)]
#![feature(fmt_radix)]
#![feature(iter_arith)]
#![feature(iter_arith)]
-#![feature(iter_cmp)]
-#![feature(iter_order)]
#![feature(libc)]
#![feature(nonzero)]
-#![feature(num_bits_bytes)]
#![feature(peekable_is_empty)]
#![feature(ptr_as_ref)]
#![feature(rand)]
-#![feature(range_inclusive)]
#![feature(raw)]
-#![feature(slice_bytes)]
#![feature(slice_patterns)]
#![feature(step_by)]
#![feature(test)]
#![feature(unboxed_closures)]
#![feature(unicode)]
#![feature(unique)]
-#![feature(clone_from_slice)]
extern crate core;
extern crate test;
use core::$T_i::*;
use core::isize;
use core::ops::{Shl, Shr, Not, BitXor, BitAnd, BitOr};
+ use core::mem;
+
use num;
#[test]
#[test]
fn test_count_zeros() {
- assert!(A.count_zeros() == BITS as u32 - 3);
- assert!(B.count_zeros() == BITS as u32 - 2);
- assert!(C.count_zeros() == BITS as u32 - 5);
+ let bits = mem::size_of::<$T>() * 8;
+ assert!(A.count_zeros() == bits as u32 - 3);
+ assert!(B.count_zeros() == bits as u32 - 2);
+ assert!(C.count_zeros() == bits as u32 - 5);
}
#[test]
use num;
use core::ops::{BitOr, BitAnd, BitXor, Shl, Shr, Not};
use std::str::FromStr;
+ use std::mem;
#[test]
fn test_overflows() {
#[test]
fn test_count_zeros() {
- assert!(A.count_zeros() == BITS as u32 - 3);
- assert!(B.count_zeros() == BITS as u32 - 2);
- assert!(C.count_zeros() == BITS as u32 - 5);
+ let bits = mem::size_of::<$T>() * 8;
+ assert!(A.count_zeros() == bits as u32 - 3);
+ assert!(B.count_zeros() == bits as u32 - 2);
+ assert!(C.count_zeros() == bits as u32 - 5);
}
#[test]
Def::TyParam(..) | Def::Struct(..) | Def::Trait(..) |
Def::Method(..) | Def::Const(..) | Def::AssociatedConst(..) |
Def::PrimTy(..) | Def::Label(..) | Def::SelfTy(..) | Def::Err => {
- panic!("attempted .def_id() on invalid {:?}", self)
+ panic!("attempted .var_id() on invalid {:?}", self)
}
}
}
static ROOT_SCOPE: ScopeChain<'static> = RootScope;
-pub fn krate(sess: &Session, krate: &hir::Crate, def_map: &DefMap) -> NamedRegionMap {
+pub fn krate(sess: &Session,
+ krate: &hir::Crate,
+ def_map: &DefMap)
+ -> Result<NamedRegionMap, usize> {
let mut named_region_map = NodeMap();
- sess.abort_if_new_errors(|| {
+ try!(sess.track_errors(|| {
krate.visit_all_items(&mut LifetimeContext {
sess: sess,
named_region_map: &mut named_region_map,
trait_ref_hack: false,
labels_in_fn: vec![],
});
- });
- named_region_map
+ }));
+ Ok(named_region_map)
}
impl<'a, 'v> Visitor<'v> for LifetimeContext<'a> {
})
}
+ pub fn expr_ty_adjusted_opt(&self, expr: &hir::Expr) -> Option<Ty<'tcx>> {
+ self.expr_ty_opt(expr).map(|t| t.adjust(self,
+ expr.span,
+ expr.id,
+ self.tables.borrow().adjustments.get(&expr.id),
+ |method_call| {
+ self.tables.borrow().method_map.get(&method_call).map(|method| method.ty)
+ }))
+ }
+
pub fn expr_span(&self, id: NodeId) -> Span {
match self.map.find(id) {
Some(ast_map::NodeExpr(e)) => {
///////////////////////////////////////////////////////////////////////////
// Variables and temps
-// A "variable" is a binding declared by the user as part of the fn
-// decl, a let, etc.
+/// A "variable" is a binding declared by the user as part of the fn
+/// decl, a let, etc.
#[derive(Clone, RustcEncodable, RustcDecodable)]
pub struct VarDecl<'tcx> {
pub mutability: Mutability,
pub ty: Ty<'tcx>,
}
-// A "temp" is a temporary that we place on the stack. They are
-// anonymous, always mutable, and have only a type.
+/// A "temp" is a temporary that we place on the stack. They are
+/// anonymous, always mutable, and have only a type.
#[derive(Clone, RustcEncodable, RustcDecodable)]
pub struct TempDecl<'tcx> {
pub ty: Ty<'tcx>,
}
-// A "arg" is one of the function's formal arguments. These are
-// anonymous and distinct from the bindings that the user declares.
-//
-// For example, in this function:
-//
-// ```
-// fn foo((x, y): (i32, u32)) { ... }
-// ```
-//
-// there is only one argument, of type `(i32, u32)`, but two bindings
-// (`x` and `y`).
+/// A "arg" is one of the function's formal arguments. These are
+/// anonymous and distinct from the bindings that the user declares.
+///
+/// For example, in this function:
+///
+/// ```
+/// fn foo((x, y): (i32, u32)) { ... }
+/// ```
+///
+/// there is only one argument, of type `(i32, u32)`, but two bindings
+/// (`x` and `y`).
#[derive(Clone, RustcEncodable, RustcDecodable)]
pub struct ArgDecl<'tcx> {
pub ty: Ty<'tcx>,
#[derive(Copy, Clone, Debug, PartialEq, Eq, RustcEncodable, RustcDecodable)]
pub enum DropKind {
- Free, // free a partially constructed box, should go away eventually
+ /// free a partially constructed box, should go away eventually
+ Free,
Deep
}
Field(Field),
Index(V),
- // These indices are generated by slice patterns. Easiest to explain
- // by example:
- //
- // ```
- // [X, _, .._, _, _] => { offset: 0, min_length: 4, from_end: false },
- // [_, X, .._, _, _] => { offset: 1, min_length: 4, from_end: false },
- // [_, _, .._, X, _] => { offset: 2, min_length: 4, from_end: true },
- // [_, _, .._, _, X] => { offset: 1, min_length: 4, from_end: true },
- // ```
+ /// These indices are generated by slice patterns. Easiest to explain
+ /// by example:
+ ///
+ /// ```
+ /// [X, _, .._, _, _] => { offset: 0, min_length: 4, from_end: false },
+ /// [_, X, .._, _, _] => { offset: 1, min_length: 4, from_end: false },
+ /// [_, _, .._, X, _] => { offset: 2, min_length: 4, from_end: true },
+ /// [_, _, .._, _, X] => { offset: 1, min_length: 4, from_end: true },
+ /// ```
ConstantIndex {
- offset: u32, // index or -index (in Python terms), depending on from_end
- min_length: u32, // thing being indexed must be at least this long
- from_end: bool, // counting backwards from end?
+ /// index or -index (in Python terms), depending on from_end
+ offset: u32,
+ /// thing being indexed must be at least this long
+ min_length: u32,
+ /// counting backwards from end?
+ from_end: bool,
},
- // "Downcast" to a variant of an ADT. Currently, we only introduce
- // this for ADTs with more than one variant. It may be better to
- // just introduce it always, or always for enums.
+ /// "Downcast" to a variant of an ADT. Currently, we only introduce
+ /// this for ADTs with more than one variant. It may be better to
+ /// just introduce it always, or always for enums.
Downcast(AdtDef<'tcx>, usize),
}
///////////////////////////////////////////////////////////////////////////
// Operands
//
-// These are values that can appear inside an rvalue (or an index
-// lvalue). They are intentionally limited to prevent rvalues from
-// being nested in one another.
+/// These are values that can appear inside an rvalue (or an index
+/// lvalue). They are intentionally limited to prevent rvalues from
+/// being nested in one another.
#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable)]
pub enum Operand<'tcx> {
}
///////////////////////////////////////////////////////////////////////////
-// Rvalues
+/// Rvalues
#[derive(Clone, RustcEncodable, RustcDecodable)]
pub enum Rvalue<'tcx> {
- // x (either a move or copy, depending on type of x)
+ /// x (either a move or copy, depending on type of x)
Use(Operand<'tcx>),
- // [x; 32]
+ /// [x; 32]
Repeat(Operand<'tcx>, TypedConstVal<'tcx>),
- // &x or &mut x
+ /// &x or &mut x
Ref(Region, BorrowKind, Lvalue<'tcx>),
- // length of a [X] or [X;n] value
+ /// length of a [X] or [X;n] value
Len(Lvalue<'tcx>),
Cast(CastKind, Operand<'tcx>, Ty<'tcx>),
UnaryOp(UnOp, Operand<'tcx>),
- // Creates an *uninitialized* Box
+ /// Creates an *uninitialized* Box
Box(Ty<'tcx>),
- // Create an aggregate value, like a tuple or struct. This is
- // only needed because we want to distinguish `dest = Foo { x:
- // ..., y: ... }` from `dest.x = ...; dest.y = ...;` in the case
- // that `Foo` has a destructor. These rvalues can be optimized
- // away after type-checking and before lowering.
+ /// Create an aggregate value, like a tuple or struct. This is
+ /// only needed because we want to distinguish `dest = Foo { x:
+ /// ..., y: ... }` from `dest.x = ...; dest.y = ...;` in the case
+ /// that `Foo` has a destructor. These rvalues can be optimized
+ /// away after type-checking and before lowering.
Aggregate(AggregateKind<'tcx>, Vec<Operand<'tcx>>),
- // Generates a slice of the form `&input[from_start..L-from_end]`
- // where `L` is the length of the slice. This is only created by
- // slice pattern matching, so e.g. a pattern of the form `[x, y,
- // .., z]` might create a slice with `from_start=2` and
- // `from_end=1`.
+ /// Generates a slice of the form `&input[from_start..L-from_end]`
+ /// where `L` is the length of the slice. This is only created by
+ /// slice pattern matching, so e.g. a pattern of the form `[x, y,
+ /// .., z]` might create a slice with `from_start=2` and
+ /// `from_end=1`.
Slice {
input: Lvalue<'tcx>,
from_start: usize,
}
///////////////////////////////////////////////////////////////////////////
-// Constants
-//
-// Two constants are equal if they are the same constant. Note that
-// this does not necessarily mean that they are "==" in Rust -- in
-// particular one must be wary of `NaN`!
+/// Constants
+///
+/// Two constants are equal if they are the same constant. Note that
+/// this does not necessarily mean that they are "==" in Rust -- in
+/// particular one must be wary of `NaN`!
#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable)]
pub struct Constant<'tcx> {
use std::path::{Path, PathBuf};
use std::cell::{Cell, RefCell};
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
use std::env;
use std::rc::Rc;
/// available in this crate
pub available_macros: RefCell<HashSet<Name>>,
+ /// Map from imported macro spans (which consist of
+ /// the localized span for the macro body) to the
+ /// macro name and definition span in the source crate.
+ pub imported_macro_spans: RefCell<HashMap<Span, (String, Span)>>,
+
next_node_id: Cell<ast::NodeId>,
}
pub fn track_errors<F, T>(&self, f: F) -> Result<T, usize>
where F: FnOnce() -> T
{
- let count = self.err_count();
+ let old_count = self.err_count();
let result = f();
- let count = self.err_count() - count;
- if count == 0 {
+ let errors = self.err_count() - old_count;
+ if errors == 0 {
Ok(result)
} else {
- Err(count)
- }
- }
- pub fn abort_if_new_errors<F, T>(&self, f: F) -> T
- where F: FnOnce() -> T
- {
- match self.track_errors(f) {
- Ok(result) => result,
- Err(_) => {
- self.abort_if_errors();
- unreachable!();
- }
+ Err(errors)
}
}
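The `track_errors` change above replaces abort-on-error with an error-count delta that callers can propagate. A minimal standalone sketch of the same pattern, using a hypothetical `Handler` stand-in for the real session type:

```rust
use std::cell::Cell;

// Sketch of the `track_errors` pattern: snapshot the error count,
// run the closure, and report only the errors it added.
// `Handler` is an illustrative stand-in, not the real rustc type.
struct Handler {
    err_count: Cell<usize>,
}

impl Handler {
    fn err_count(&self) -> usize {
        self.err_count.get()
    }

    fn emit_error(&self) {
        self.err_count.set(self.err_count.get() + 1);
    }

    fn track_errors<F, T>(&self, f: F) -> Result<T, usize>
        where F: FnOnce() -> T
    {
        let old_count = self.err_count();
        let result = f();
        let errors = self.err_count() - old_count;
        if errors == 0 { Ok(result) } else { Err(errors) }
    }
}

fn main() {
    let h = Handler { err_count: Cell::new(0) };
    // No errors emitted: the closure's value comes back in Ok.
    assert_eq!(h.track_errors(|| 42), Ok(42));
    // Two errors emitted inside the closure: Err carries the delta.
    let r = h.track_errors(|| { h.emit_error(); h.emit_error(); });
    assert_eq!(r, Err(2));
}
```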
pub fn span_warn<S: Into<MultiSpan>>(&self, sp: S, msg: &str) {
next_node_id: Cell::new(1),
injected_allocator: Cell::new(None),
available_macros: RefCell::new(HashSet::new()),
+ imported_macro_spans: RefCell::new(HashMap::new()),
};
sess
};
emitter.emit(None, msg, None, errors::Level::Warning);
}
+
+// Err(0) means compilation was stopped, but no errors were found.
+// This would be better as a dedicated enum, but using try! is so convenient.
+pub type CompileResult = Result<(), usize>;
+
+pub fn compile_result_from_err_count(err_count: usize) -> CompileResult {
+ if err_count == 0 {
+ Ok(())
+ } else {
+ Err(err_count)
+ }
+}
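The `CompileResult` alias added here threads an error count through the compilation phases so callers can early-return with `try!`. A minimal sketch of how such a type composes (using the modern `?` operator in place of `try!`; `run_phases` is an illustrative name, not part of the patch):

```rust
// Mirrors the type and helper added in the patch; standalone sketch only.
pub type CompileResult = Result<(), usize>;

pub fn compile_result_from_err_count(err_count: usize) -> CompileResult {
    if err_count == 0 {
        Ok(())
    } else {
        Err(err_count)
    }
}

// A phase that found errors stops the pipeline, propagating the count.
fn run_phases(errors_in_phase_1: usize) -> CompileResult {
    compile_result_from_err_count(errors_in_phase_1)?;
    // Later phases would run here.
    Ok(())
}

fn main() {
    assert_eq!(run_phases(0), Ok(()));
    assert_eq!(run_phases(2), Err(2));
}
```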
pub target_env: String,
/// Vendor name to use for conditional compilation.
pub target_vendor: String,
- /// Architecture to use for ABI considerations. Valid options: "x86", "x86_64", "arm",
- /// "aarch64", "mips", "powerpc", "powerpc64" and "powerpc64le". "mips" includes "mipsel".
+ /// Architecture to use for ABI considerations. Valid options: "x86",
+ /// "x86_64", "arm", "aarch64", "mips", "powerpc", and "powerpc64".
pub arch: String,
/// Optional settings with defaults.
pub options: TargetOptions,
llvm_target: "powerpc64le-unknown-linux-gnu".to_string(),
target_endian: "little".to_string(),
target_pointer_width: "64".to_string(),
- arch: "powerpc64le".to_string(),
+ arch: "powerpc64".to_string(),
target_os: "linux".to_string(),
target_env: "gnu".to_string(),
target_vendor: "unknown".to_string(),
}
```
+Moving out of a member of a mutably borrowed struct is fine if you put something
+back. `mem::replace` can be used for that:
+
+```
+struct TheDarkKnight;
+
+impl TheDarkKnight {
+ fn nothing_is_true(self) {}
+}
+
+struct Batcave {
+ knight: TheDarkKnight
+}
+
+fn main() {
+ use std::mem;
+
+ let mut cave = Batcave {
+ knight: TheDarkKnight
+ };
+ let borrowed = &mut cave;
+
+ borrowed.knight.nothing_is_true(); // E0507
+ mem::replace(&mut borrowed.knight, TheDarkKnight).nothing_is_true(); // ok!
+}
+```
+
You can find more information about borrowing in the rust-book:
http://doc.rust-lang.org/stable/book/references-and-borrowing.html
"##,
use rustc::front::map as hir_map;
use rustc_mir as mir;
use rustc_mir::mir_map::MirMap;
-use rustc::session::Session;
+use rustc::session::{Session, CompileResult, compile_result_from_err_count};
use rustc::session::config::{self, Input, OutputFilenames, OutputType};
use rustc::session::search_paths::PathKind;
use rustc::lint;
-use rustc::middle::{stability, ty, reachable};
-use rustc::middle::dependency_format;
+use rustc::middle::{dependency_format, stability, ty, reachable};
+use rustc::middle::privacy::AccessLevels;
use rustc::middle;
use rustc::util::common::time;
+use rustc::util::nodemap::NodeSet;
use rustc_borrowck as borrowck;
use rustc_resolve as resolve;
use rustc_metadata::macro_import;
use rustc_front::hir;
use rustc_front::lowering::{lower_crate, LoweringContext};
use rustc_passes::{no_asm, loops, consts, const_fn, rvalues, static_recursion};
-use super::{Compilation, CompileResult, compile_result_from_err_count};
+use super::Compilation;
use serialize::json;
use syntax;
use syntax_ext;
-macro_rules! throw_if_errors {
- ($tsess: expr) => {{
- let err_count = $tsess.err_count();
- if err_count > 0 {
- return Err(err_count);
- }
- }}
-}
-
pub fn compile_input(sess: &Session,
cstore: &CStore,
cfg: ast::CrateConfig,
output: &Option<PathBuf>,
addl_plugins: Option<Vec<String>>,
control: CompileController) -> CompileResult {
- macro_rules! controller_entry_point{($point: ident, $tsess: expr, $make_state: expr) => ({
- let state = $make_state;
- (control.$point.callback)(state);
+ macro_rules! controller_entry_point {
+ ($point: ident, $tsess: expr, $make_state: expr, $phase_result: expr) => {{
+ let state = $make_state;
+ let phase_result: &CompileResult = &$phase_result;
+ if phase_result.is_ok() || control.$point.run_callback_on_error {
+ (control.$point.callback)(state);
+ }
- if control.$point.stop == Compilation::Stop {
- return compile_result_from_err_count($tsess.err_count());
- }
- })}
+ if control.$point.stop == Compilation::Stop {
+ return compile_result_from_err_count($tsess.err_count());
+ }
+ }}
+ }
// We need nested scopes here, because the intermediate results can keep
// large chunks of memory alive and we want to free them as soon as
controller_entry_point!(after_parse,
sess,
- CompileState::state_after_parse(input, sess, outdir, &krate));
+ CompileState::state_after_parse(input, sess, outdir, &krate),
+ Ok(()));
let outputs = build_output_filenames(input, outdir, output, &krate.attrs, sess);
let id = link::find_crate_name(Some(sess), &krate.attrs, input);
sess,
outdir,
&expanded_crate,
- &id[..]));
+ &id[..]),
+ Ok(()));
let expanded_crate = assign_node_ids(sess, expanded_crate);
// Lower ast -> hir.
&expanded_crate,
&hir_map.krate(),
&id[..],
- &lcx));
+ &lcx),
+ Ok(()));
time(sess.time_passes(), "attribute checking", || {
front::check_attr::check_crate(sess, &expanded_crate);
};
try!(try!(phase_3_run_analysis_passes(sess,
- &cstore,
- hir_map,
- &arenas,
- &id,
- control.make_glob_map,
- |tcx, mir_map, analysis| {
+ &cstore,
+ hir_map,
+ &arenas,
+ &id,
+ control.make_glob_map,
+ |tcx, mir_map, analysis, result| {
{
- let state =
- CompileState::state_after_analysis(input,
- &tcx.sess,
- outdir,
- opt_crate,
- tcx.map.krate(),
- &analysis,
- &mir_map,
- tcx,
- &lcx,
- &id);
+ let state = CompileState::state_after_analysis(input,
+ &tcx.sess,
+ outdir,
+ opt_crate,
+ tcx.map.krate(),
+ &analysis,
+ mir_map.as_ref(),
+ tcx,
+ &lcx,
+ &id);
(control.after_analysis.callback)(state);
- throw_if_errors!(tcx.sess);
if control.after_analysis.stop == Compilation::Stop {
return Err(0usize);
}
}
+ try!(result);
+
if log_enabled!(::log::INFO) {
println!("Pre-trans");
tcx.print_debug_stats();
}
let trans = phase_4_translate_to_llvm(tcx,
- mir_map,
+ mir_map.unwrap(),
analysis);
if log_enabled!(::log::INFO) {
})))
};
- try!(phase_5_run_llvm_passes(sess, &trans, &outputs));
+ let phase5_result = phase_5_run_llvm_passes(sess, &trans, &outputs);
controller_entry_point!(after_llvm,
sess,
- CompileState::state_after_llvm(input, sess, outdir, &trans));
+ CompileState::state_after_llvm(input, sess, outdir, &trans),
+ phase5_result);
+ try!(phase5_result);
phase_6_link_output(sess, &trans, &outputs);
Ok(())
}
+
/// The name used for source code that doesn't originate in a file
/// (e.g. source from stdin or a string)
pub fn anon_src() -> String {
pub struct PhaseController<'a> {
pub stop: Compilation,
+ // If true then the compiler will try to run the callback even if the phase
+ // ends with an error. Note that this is not always possible.
+ pub run_callback_on_error: bool,
pub callback: Box<Fn(CompileState) -> () + 'a>,
}
pub fn basic() -> PhaseController<'a> {
PhaseController {
stop: Compilation::Continue,
+ run_callback_on_error: false,
callback: box |_| {},
}
}
krate: Option<&'a ast::Crate>,
hir_crate: &'a hir::Crate,
analysis: &'a ty::CrateAnalysis,
- mir_map: &'a MirMap<'tcx>,
+ mir_map: Option<&'a MirMap<'tcx>>,
tcx: &'a ty::ctxt<'tcx>,
lcx: &'a LoweringContext<'a>,
crate_name: &'a str)
-> CompileState<'a, 'ast, 'tcx> {
CompileState {
analysis: Some(analysis),
- mir_map: Some(mir_map),
+ mir_map: mir_map,
tcx: Some(tcx),
krate: krate,
hir_crate: Some(hir_crate),
})
}));
- time(time_passes,
- "const fn bodies and arguments",
- || const_fn::check_crate(sess, &krate));
+ try!(time(time_passes,
+ "const fn bodies and arguments",
+ || const_fn::check_crate(sess, &krate)));
if sess.opts.debugging_opts.input_stats {
println!("Post-expansion node count: {}", count_nodes(&krate));
make_glob_map: resolve::MakeGlobMap,
f: F)
-> Result<R, usize>
- where F: for<'a> FnOnce(&'a ty::ctxt<'tcx>, MirMap<'tcx>, ty::CrateAnalysis) -> R
+ where F: FnOnce(&ty::ctxt<'tcx>, Option<MirMap<'tcx>>, ty::CrateAnalysis, CompileResult) -> R
{
+ macro_rules! try_with_f {
+ ($e: expr, ($t: expr, $m: expr, $a: expr)) => {
+ match $e {
+ Ok(x) => x,
+ Err(x) => {
+ f($t, $m, $a, Err(x));
+ return Err(x);
+ }
+ }
+ }
+ }
+
let time_passes = sess.time_passes();
let krate = hir_map.krate();
"resolution",
|| resolve::resolve_crate(sess, &hir_map, make_glob_map));
- let named_region_map = time(time_passes,
- "lifetime resolution",
- || middle::resolve_lifetime::krate(sess, krate, &def_map.borrow()));
+ let mut analysis = ty::CrateAnalysis {
+ export_map: export_map,
+ access_levels: AccessLevels::default(),
+ reachable: NodeSet(),
+ name: name,
+ glob_map: glob_map,
+ };
+
+ let named_region_map = try!(time(time_passes,
+ "lifetime resolution",
+ || middle::resolve_lifetime::krate(sess,
+ krate,
+ &def_map.borrow())));
time(time_passes,
"looking for entry point",
"loop checking",
|| loops::check_crate(sess, krate));
- time(time_passes,
- "static item recursion checking",
- || static_recursion::check_crate(sess, krate, &def_map.borrow(), &hir_map));
+ try!(time(time_passes,
+ "static item recursion checking",
+ || static_recursion::check_crate(sess, krate, &def_map.borrow(), &hir_map)));
ty::ctxt::create_and_enter(sess,
arenas,
stability::Index::new(krate),
|tcx| {
// passes are timed inside typeck
- typeck::check_crate(tcx, trait_map);
+ try_with_f!(typeck::check_crate(tcx, trait_map), (tcx, None, analysis));
time(time_passes,
"const checking",
|| consts::check_crate(tcx));
- let access_levels =
+ analysis.access_levels =
time(time_passes, "privacy checking", || {
rustc_privacy::check_crate(tcx,
- &export_map,
+ &analysis.export_map,
external_exports)
});
// Do not move this check past lint
time(time_passes, "stability index", || {
- tcx.stability.borrow_mut().build(tcx, krate, &access_levels)
+ tcx.stability.borrow_mut().build(tcx, krate, &analysis.access_levels)
});
time(time_passes,
// lot of annoying errors in the compile-fail tests (basically,
// lint warnings and so on -- kindck used to do this abort, but
// kindck is gone now). -nmatsakis
- throw_if_errors!(tcx.sess);
+ if sess.err_count() > 0 {
+ return Ok(f(tcx, Some(mir_map), analysis, Err(sess.err_count())));
+ }
- let reachable_map =
+ analysis.reachable =
time(time_passes,
"reachability checking",
- || reachable::find_reachable(tcx, &access_levels));
+ || reachable::find_reachable(tcx, &analysis.access_levels));
time(time_passes, "death checking", || {
- middle::dead::check_crate(tcx, &access_levels);
+ middle::dead::check_crate(tcx, &analysis.access_levels);
});
let ref lib_features_used =
time(time_passes,
"lint checking",
- || lint::check_crate(tcx, &access_levels));
+ || lint::check_crate(tcx, &analysis.access_levels));
// The above three passes generate errors w/o aborting
- throw_if_errors!(tcx.sess);
-
- Ok(f(tcx,
- mir_map,
- ty::CrateAnalysis {
- export_map: export_map,
- access_levels: access_levels,
- reachable: reachable_map,
- name: name,
- glob_map: glob_map,
- }))
+ if sess.err_count() > 0 {
+ return Ok(f(tcx, Some(mir_map), analysis, Err(sess.err_count())));
+ }
+
+ Ok(f(tcx, Some(mir_map), analysis, Ok(())))
})
}
|| write::run_passes(sess, trans, &sess.opts.output_types, outputs));
}
- throw_if_errors!(sess);
- Ok(())
+ if sess.err_count() > 0 {
+ Err(sess.err_count())
+ } else {
+ Ok(())
+ }
}
/// Run the linker on any artifacts that resulted from the LLVM run.
use rustc_resolve as resolve;
use rustc_trans::back::link;
use rustc_trans::save;
-use rustc::session::{config, Session, build_session};
+use rustc::session::{config, Session, build_session, CompileResult};
use rustc::session::config::{Input, PrintRequest, OutputType, ErrorOutputType};
use rustc::middle::cstore::CrateStore;
use rustc::lint::Lint;
const BUG_REPORT_URL: &'static str = "https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.\
md#bug-reports";
-// Err(0) means compilation was stopped, but no errors were found.
-// This would be better as a dedicated enum, but using try! is so convenient.
-pub type CompileResult = Result<(), usize>;
-
-pub fn compile_result_from_err_count(err_count: usize) -> CompileResult {
- if err_count == 0 {
- Ok(())
- } else {
- Err(err_count)
- }
-}
-
#[inline]
fn abort_msg(err_count: usize) -> String {
match err_count {
let mut emitter =
errors::emitter::BasicEmitter::stderr(errors::ColorConfig::Auto);
emitter.emit(None, &abort_msg(err_count), None, errors::Level::Fatal);
- panic!(errors::FatalError);
+ exit_on_err();
}
}
}
state.out_dir)
});
};
+ control.after_analysis.run_callback_on_error = true;
control.make_glob_map = resolve::MakeGlobMap::Yes;
}
println!("{}", str::from_utf8(&data.lock().unwrap()).unwrap());
}
- // Panic so the process returns a failure code, but don't pollute the
- // output with some unnecessary panic messages, we've already
- // printed everything that we needed to.
- io::set_panic(box io::sink());
- panic!();
+ exit_on_err();
}
}
}
+fn exit_on_err() -> ! {
+ // Panic so the process returns a failure code, but don't pollute the
+ // output with some unnecessary panic messages, we've already
+ // printed everything that we needed to.
+ io::set_panic(box io::sink());
+ panic!();
+}
+
pub fn diagnostics_registry() -> diagnostics::registry::Registry {
use syntax::diagnostics::registry::Registry;
arenas,
id,
resolve::MakeGlobMap::No,
- |tcx, _, _| {
+ |tcx, _, _, _| {
let annotation = TypedAnnotation {
tcx: tcx,
};
&arenas,
&id,
resolve::MakeGlobMap::No,
- |tcx, _, _| {
+ |tcx, _, _, _| {
print_flowgraph(variants,
tcx,
code,
ty::ctxt::create_and_enter(&sess,
&arenas,
def_map,
- named_region_map,
+ named_region_map.unwrap(),
ast_map,
freevars,
region_map,
pub const tag_macro_defs: usize = 0x10e; // top-level only
pub const tag_macro_def: usize = 0x9e;
pub const tag_macro_def_body: usize = 0x9f;
+pub const tag_macro_def_span_lo: usize = 0xa8;
+pub const tag_macro_def_span_hi: usize = 0xa9;
pub const tag_paren_sugar: usize = 0xa0;
let mut macros = vec![];
decoder::each_exported_macro(ekrate.metadata.as_slice(),
&*self.cstore.intr,
- |name, attrs, body| {
+ |name, attrs, span, body| {
// NB: Don't use parse::parse_tts_from_source_str because it parses with
// quote_depth > 0.
let mut p = parse::new_parser_from_source_str(&self.sess.parse_sess,
panic!(FatalError);
}
};
- let span = mk_sp(lo, p.last_span.hi);
+ let local_span = mk_sp(lo, p.last_span.hi);
// Mark the attrs as used
for attr in &attrs {
ident: ast::Ident::with_empty_ctxt(name),
attrs: attrs,
id: ast::DUMMY_NODE_ID,
- span: span,
+ span: local_span,
imported_from: Some(item.ident),
// overridden in plugin/load.rs
export: false,
body: body,
});
+ self.sess.imported_macro_spans.borrow_mut()
+ .insert(local_span, (name.as_str().to_string(), span));
true
}
);
use syntax::parse::token;
use syntax::ast;
use syntax::abi;
-use syntax::codemap::{self, Span};
+use syntax::codemap::{self, Span, BytePos, NO_EXPANSION};
use syntax::print::pprust;
use syntax::ptr::P;
}
pub fn each_exported_macro<F>(data: &[u8], intr: &IdentInterner, mut f: F) where
- F: FnMut(ast::Name, Vec<ast::Attribute>, String) -> bool,
+ F: FnMut(ast::Name, Vec<ast::Attribute>, Span, String) -> bool,
{
let macros = reader::get_doc(rbml::Doc::new(data), tag_macro_defs);
for macro_doc in reader::tagged_docs(macros, tag_macro_def) {
let name = item_name(intr, macro_doc);
let attrs = get_attributes(macro_doc);
+ let span = get_macro_span(macro_doc);
let body = reader::get_doc(macro_doc, tag_macro_def_body);
- if !f(name, attrs, body.as_str().to_string()) {
+ if !f(name, attrs, span, body.as_str().to_string()) {
break;
}
}
}
+pub fn get_macro_span(doc: rbml::Doc) -> Span {
+ let lo_doc = reader::get_doc(doc, tag_macro_def_span_lo);
+ let lo = BytePos(reader::doc_as_u32(lo_doc));
+ let hi_doc = reader::get_doc(doc, tag_macro_def_span_hi);
+ let hi = BytePos(reader::doc_as_u32(hi_doc));
+ return Span { lo: lo, hi: hi, expn_id: NO_EXPANSION };
+}
+
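The metadata change above serializes a macro definition's span as two tagged `u32` byte positions and rebuilds the `Span` on the decoding side. A standalone sketch of that round-trip, with simplified stand-ins for the `syntax::codemap` types (the real `Span` also carries an `expn_id`, restored as `NO_EXPANSION` in the patch):

```rust
// Simplified stand-ins for syntax::codemap::{BytePos, Span}.
#[derive(Debug, PartialEq, Clone, Copy)]
struct BytePos(u32);

#[derive(Debug, PartialEq, Clone, Copy)]
struct Span {
    lo: BytePos,
    hi: BytePos,
}

// Encoding: write the span's endpoints as two plain u32 values,
// mirroring the two tagged u32 fields in the patch.
fn encode_span(span: Span) -> (u32, u32) {
    (span.lo.0, span.hi.0)
}

// Decoding: rebuild a span from the two stored byte positions.
fn decode_span(lo: u32, hi: u32) -> Span {
    Span { lo: BytePos(lo), hi: BytePos(hi) }
}

fn main() {
    let s = Span { lo: BytePos(10), hi: BytePos(25) };
    let (lo, hi) = encode_span(s);
    assert_eq!(decode_span(lo, hi), s);
}
```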
pub fn get_dylib_dependency_formats(cdata: Cmd)
-> Vec<(ast::CrateNum, LinkagePreference)>
{
use std::u32;
use syntax::abi;
use syntax::ast::{self, NodeId, Name, CRATE_NODE_ID, CrateNum};
+use syntax::codemap::BytePos;
use syntax::attr;
use syntax::attr::AttrMetaMethods;
use syntax::errors::Handler;
encode_name(rbml_w, def.name);
encode_attributes(rbml_w, &def.attrs);
+ let &BytePos(lo) = &def.span.lo;
+ let &BytePos(hi) = &def.span.hi;
+ rbml_w.wr_tagged_u32(tag_macro_def_span_lo, lo);
+ rbml_w.wr_tagged_u32(tag_macro_def_span_hi, hi);
rbml_w.wr_tagged_str(tag_macro_def_body,
&::syntax::print::pprust::tts_to_string(&def.body));
}
///////////////////////////////////////////////////////////////////////////
-// The `BlockAnd` "monad" packages up the new basic block along with a
-// produced value (sometimes just unit, of course). The `unpack!`
-// macro (and methods below) makes working with `BlockAnd` much more
-// convenient.
+/// The `BlockAnd` "monad" packages up the new basic block along with a
+/// produced value (sometimes just unit, of course). The `unpack!`
+/// macro (and methods below) makes working with `BlockAnd` much more
+/// convenient.
#[must_use] // if you don't use one of these results, you're leaving a dangling edge
pub struct BlockAnd<T>(BasicBlock, T);
}
///////////////////////////////////////////////////////////////////////////
-// construct() -- the main entry point for building MIR for a function
+/// The main entry point for building MIR for a function.
pub fn construct<'a,'tcx>(hir: Cx<'a,'tcx>,
_span: Span,
},
}
-// The Hair trait implementor translates their expressions (`&'tcx H::Expr`)
-// into instances of this `Expr` enum. This translation can be done
-// basically as lazilly or as eagerly as desired: every recursive
-// reference to an expression in this enum is an `ExprRef<'tcx>`, which
-// may in turn be another instance of this enum (boxed), or else an
-// untranslated `&'tcx H::Expr`. Note that instances of `Expr` are very
-// shortlived. They are created by `Hair::to_expr`, analyzed and
-// converted into MIR, and then discarded.
-//
-// If you compare `Expr` to the full compiler AST, you will see it is
-// a good bit simpler. In fact, a number of the more straight-forward
-// MIR simplifications are already done in the impl of `Hair`. For
-// example, method calls and overloaded operators are absent: they are
-// expected to be converted into `Expr::Call` instances.
+/// The Hair trait implementor translates its expressions (`&'tcx H::Expr`)
+/// into instances of this `Expr` enum. This translation can be done
+/// basically as lazily or as eagerly as desired: every recursive
+/// reference to an expression in this enum is an `ExprRef<'tcx>`, which
+/// may in turn be another instance of this enum (boxed), or else an
+/// untranslated `&'tcx H::Expr`. Note that instances of `Expr` are very
+/// short-lived. They are created by `Hair::to_expr`, analyzed and
+/// converted into MIR, and then discarded.
+///
+/// If you compare `Expr` to the full compiler AST, you will see it is
+/// a good bit simpler. In fact, a number of the more straight-forward
+/// MIR simplifications are already done in the impl of `Hair`. For
+/// example, method calls and overloaded operators are absent: they are
+/// expected to be converted into `Expr::Call` instances.
#[derive(Clone, Debug)]
pub struct Expr<'tcx> {
- // type of this expression
+ /// type of this expression
pub ty: Ty<'tcx>,
- // lifetime of this expression if it should be spilled into a
- // temporary; should be None only if in a constant context
+ /// lifetime of this expression if it should be spilled into a
+ /// temporary; should be None only if in a constant context
pub temp_lifetime: Option<CodeExtent>,
- // span of the expression in the source
+ /// span of the expression in the source
pub span: Span,
- // kind of expression
+ /// kind of expression
pub kind: ExprKind<'tcx>,
}
VarRef {
id: ast::NodeId,
},
- SelfRef, // first argument, used for self in a closure
+ /// first argument, used for self in a closure
+ SelfRef,
StaticRef {
id: DefId,
},
pub enum PatternKind<'tcx> {
Wild,
- // x, ref x, x @ P, etc
+ /// x, ref x, x @ P, etc
Binding {
mutability: Mutability,
name: ast::Name,
subpattern: Option<Pattern<'tcx>>,
},
- // Foo(...) or Foo{...} or Foo, where `Foo` is a variant name from an adt with >1 variants
+ /// Foo(...) or Foo{...} or Foo, where `Foo` is a variant name from an adt with >1 variants
Variant {
adt_def: AdtDef<'tcx>,
variant_index: usize,
subpatterns: Vec<FieldPattern<'tcx>>,
},
- // (...), Foo(...), Foo{...}, or Foo, where `Foo` is a variant name from an adt with 1 variant
+ /// (...), Foo(...), Foo{...}, or Foo, where `Foo` is a variant name from an adt with 1 variant
Leaf {
subpatterns: Vec<FieldPattern<'tcx>>,
},
+ /// box P, &P, &mut P, etc
Deref {
subpattern: Pattern<'tcx>,
- }, // box P, &P, &mut P, etc
+ },
Constant {
value: ConstVal,
hi: Literal<'tcx>,
},
- // matches against a slice, checking the length and extracting elements
+ /// matches against a slice, checking the length and extracting elements
Slice {
prefix: Vec<Pattern<'tcx>>,
slice: Option<Pattern<'tcx>>,
suffix: Vec<Pattern<'tcx>>,
},
- // fixed match against an array, irrefutable
+ /// fixed match against an array, irrefutable
Array {
prefix: Vec<Pattern<'tcx>>,
slice: Option<Pattern<'tcx>>,
//! Verifies that const fn arguments are immutable by value bindings
//! and the const fn body doesn't contain any statements
-use rustc::session::Session;
+use rustc::session::{Session, CompileResult};
use syntax::ast;
use syntax::visit::{self, Visitor, FnKind};
use syntax::codemap::Span;
-pub fn check_crate(sess: &Session, krate: &ast::Crate) {
- sess.abort_if_new_errors(|| {
+pub fn check_crate(sess: &Session, krate: &ast::Crate) -> CompileResult {
+ sess.track_errors(|| {
visit::walk_crate(&mut CheckConstFn{ sess: sess }, krate);
- });
+ })
}
struct CheckConstFn<'a> {
// recursively.
use rustc::front::map as ast_map;
-use rustc::session::Session;
+use rustc::session::{Session, CompileResult};
use rustc::middle::def::{Def, DefMap};
use rustc::util::nodemap::NodeMap;
pub fn check_crate<'ast>(sess: &Session,
krate: &'ast hir::Crate,
def_map: &DefMap,
- ast_map: &ast_map::Map<'ast>) {
+ ast_map: &ast_map::Map<'ast>) -> CompileResult {
let mut visitor = CheckCrateVisitor {
sess: sess,
def_map: def_map,
ast_map: ast_map,
discriminant_map: RefCell::new(NodeMap()),
};
- sess.abort_if_new_errors(|| {
+ sess.track_errors(|| {
krate.visit_all_items(&mut visitor);
- });
+ })
}
struct CheckItemRecursionVisitor<'a, 'ast: 'a> {
match search_module.parent_link {
NoParentLink => {
// No more parents. This module was unresolved.
- debug!("(resolving item in lexical scope) unresolved module");
+ debug!("(resolving item in lexical scope) unresolved module: no parent module");
return Failed(None);
}
ModuleParentLink(parent_module_node, _) => {
}
Indeterminate => None,
Failed(err) => {
- debug!("(resolving item path by identifier in lexical scope) failed to resolve {}",
+ debug!("(resolving item path by identifier in lexical scope) failed to \
+ resolve `{}`",
name);
if let Some((span, msg)) = err {
use self::ImportDirectiveSubclass::*;
use DefModifiers;
+use DefOrModule;
use Module;
use Namespace::{self, TypeNS, ValueNS};
use NameBinding;
}
/// One import directive.
-#[derive(Debug)]
+#[derive(Debug, Clone)]
pub struct ImportDirective {
pub module_path: Vec<Name>,
pub subclass: ImportDirectiveSubclass,
}
}
-struct ImportResolvingError {
+struct ImportResolvingError<'a> {
+ /// Module where the error happened
+ source_module: Module<'a>,
+ import_directive: ImportDirective,
span: Span,
- path: String,
help: String,
}
// resolving failed
if errors.len() > 0 {
for e in errors {
- resolve_error(self.resolver,
- e.span,
- ResolutionError::UnresolvedImport(Some((&e.path, &e.help))));
+ self.import_resolving_error(e)
}
} else {
// Report unresolved imports only if no hard error was already reported
}
}
+ /// Resolves an `ImportResolvingError` into the correct enum discriminant
+ /// and passes that on to `resolve_error`.
+ fn import_resolving_error(&self, e: ImportResolvingError) {
+ // If it's a single failed import then create a "fake" import
+ // resolution for it so that later resolve stages won't complain.
+ if let SingleImport(target, _) = e.import_directive.subclass {
+ let mut import_resolutions = e.source_module.import_resolutions.borrow_mut();
+
+ let resolution = import_resolutions.entry((target, ValueNS)).or_insert_with(|| {
+ debug!("(resolving import error) adding import resolution for `{}`",
+ target);
+
+ ImportResolution::new(e.import_directive.id,
+ e.import_directive.is_public)
+ });
+
+ if resolution.target.is_none() {
+ debug!("(resolving import error) adding fake target to import resolution of `{}`",
+ target);
+
+ let name_binding = NameBinding {
+ modifiers: DefModifiers::IMPORTABLE,
+ def_or_module: DefOrModule::Def(Def::Err),
+ span: None,
+ };
+
+ // Create a fake target pointing to a fake name binding in our
+ // own module
+ let target = Target::new(e.source_module,
+ name_binding,
+ Shadowable::Always);
+
+ resolution.target = Some(target);
+ }
+ }
+
+ let path = import_path_to_string(&e.import_directive.module_path,
+ e.import_directive.subclass);
+
+ resolve_error(self.resolver,
+ e.span,
+ ResolutionError::UnresolvedImport(Some((&path, &e.help))));
+ }
+
/// Attempts to resolve imports for the given module and all of its
/// submodules.
fn resolve_imports_for_module_subtree(&mut self,
module_: Module<'b>)
- -> Vec<ImportResolvingError> {
+ -> Vec<ImportResolvingError<'b>> {
let mut errors = Vec::new();
debug!("(resolving imports for module subtree) resolving {}",
module_to_string(&*module_));
}
/// Attempts to resolve imports for the given module only.
- fn resolve_imports_for_module(&mut self, module: Module<'b>) -> Vec<ImportResolvingError> {
+ fn resolve_imports_for_module(&mut self, module: Module<'b>) -> Vec<ImportResolvingError<'b>> {
let mut errors = Vec::new();
if module.all_imports_resolved() {
None => (import_directive.span, String::new()),
};
errors.push(ImportResolvingError {
+ source_module: module,
+ import_directive: import_directive.clone(),
span: span,
- path: import_path_to_string(&import_directive.module_path,
- import_directive.subclass),
help: help,
});
}
namespace_name,
name);
span_err!(self.resolver.session, import_directive.span, E0251, "{}", msg);
- } else {
+ } else {
let target = Target::new(containing_module,
name_binding.clone(),
import_directive.shadowable);
use middle::ty;
use std::fs::File;
+use std::hash::*;
+use std::collections::HashSet;
use syntax::ast::{self, NodeId};
use syntax::codemap::*;
fmt: FmtStrs<'l, 'tcx>,
cur_scope: NodeId,
+
+ // Set of macro definition (callee) spans, and the set
+ // of macro use (callsite) spans. We store these to ensure
+ // we only write one macro def per unique macro definition, and
+ // one macro use per unique callsite span.
+ mac_defs: HashSet<Span>,
+ mac_uses: HashSet<Span>,
}
impl <'l, 'tcx> DumpCsvVisitor<'l, 'tcx> {
span_utils,
tcx),
cur_scope: 0,
+ mac_defs: HashSet::new(),
+ mac_uses: HashSet::new(),
}
}
"<mutable>".to_string()
};
let types = self.tcx.node_types();
- let typ = types.get(&id).unwrap().to_string();
+ let typ = types.get(&id).map(|t| t.to_string()).unwrap_or(String::new());
// Get the span only for the name of the variable (I hope the path
// is only ever a variable name, but who knows?).
let sub_span = self.span.span_for_last_ident(p.span);
&typ);
}
}
+
+ /// Extract macro use and definition information from the AST node defined
+ /// by the given NodeId, using the expansion information from the node's
+ /// span.
+ ///
+ /// If the span is not macro-generated, do nothing, else use callee and
+ /// callsite spans to record macro definition and use data, using the
+ /// mac_uses and mac_defs sets to prevent duplicates.
+ fn process_macro_use(&mut self, span: Span, id: NodeId) {
+ let data = match self.save_ctxt.get_macro_use_data(span, id) {
+ None => return,
+ Some(data) => data,
+ };
+ let mut hasher = SipHasher::new();
+ data.callee_span.hash(&mut hasher);
+ let hash = hasher.finish();
+ let qualname = format!("{}::{}", data.name, hash);
+ // Don't write macro definition for imported macros
+ if !self.mac_defs.contains(&data.callee_span)
+ && !data.imported {
+ self.mac_defs.insert(data.callee_span);
+ self.fmt.macro_str(data.callee_span, data.callee_span,
+ data.name.clone(), qualname.clone());
+ }
+ if !self.mac_uses.contains(&data.span) {
+ self.mac_uses.insert(data.span);
+ self.fmt.macro_use_str(data.span, data.span, data.name,
+ qualname, data.scope);
+ }
+ }
}
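The `process_macro_use` method above hashes the macro's definition (callee) span to build a qualified name, since imported macros have no stable NodeId to reference. A sketch of that scheme; `SipHasher` is deprecated in current Rust, so this illustrative version uses `DefaultHasher`, and `macro_qualname` is a hypothetical helper name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Build a qualified name for a macro by appending a hash of its
// definition span, so distinct definitions get distinct qualnames.
// The span is represented here as a simple (lo, hi) pair.
fn macro_qualname(name: &str, callee_span: (u32, u32)) -> String {
    let mut hasher = DefaultHasher::new();
    callee_span.hash(&mut hasher);
    format!("{}::{}", name, hasher.finish())
}

fn main() {
    let a = macro_qualname("try", (10, 42));
    let b = macro_qualname("try", (10, 42));
    // Same definition span -> same qualname, so the def is written once.
    assert_eq!(a, b);
    assert!(a.starts_with("try::"));
}
```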
impl<'l, 'tcx, 'v> Visitor<'v> for DumpCsvVisitor<'l, 'tcx> {
fn visit_item(&mut self, item: &ast::Item) {
+ self.process_macro_use(item.span, item.id);
match item.node {
ast::ItemUse(ref use_item) => {
match use_item.node {
}
fn visit_trait_item(&mut self, trait_item: &ast::TraitItem) {
+ self.process_macro_use(trait_item.span, trait_item.id);
match trait_item.node {
ast::ConstTraitItem(ref ty, Some(ref expr)) => {
self.process_const(trait_item.id,
}
fn visit_impl_item(&mut self, impl_item: &ast::ImplItem) {
+ self.process_macro_use(impl_item.span, impl_item.id);
match impl_item.node {
ast::ImplItemKind::Const(ref ty, ref expr) => {
self.process_const(impl_item.id,
}
fn visit_ty(&mut self, t: &ast::Ty) {
+ self.process_macro_use(t.span, t.id);
match t.node {
ast::TyPath(_, ref path) => {
match self.lookup_type_ref(t.id) {
}
fn visit_expr(&mut self, ex: &ast::Expr) {
+ self.process_macro_use(ex.span, ex.id);
match ex.node {
ast::ExprCall(ref _f, ref _args) => {
// Don't need to do anything for function calls,
}
}
- fn visit_mac(&mut self, _: &ast::Mac) {
- // Just stop, macros are poison to us.
+ fn visit_mac(&mut self, mac: &ast::Mac) {
+ // These shouldn't exist in the AST at this point, so this is a span bug.
+ self.sess.span_bug(mac.span, "macro invocation should have been expanded out of AST");
}
fn visit_pat(&mut self, p: &ast::Pat) {
+ self.process_macro_use(p.span, p.id);
self.process_pat(p);
}
}
fn visit_stmt(&mut self, s: &ast::Stmt) {
+ let id = s.node.id();
+ self.process_macro_use(s.span, id.unwrap());
visit::walk_stmt(self, s)
}
fn visit_local(&mut self, l: &ast::Local) {
+ self.process_macro_use(l.span, l.id);
let value = self.span.snippet(l.span);
self.process_var_decl(&l.pat, value);
FunctionCallData(FunctionCallData),
/// Data about a method call.
MethodCallData(MethodCallData),
+ /// Data about a macro use.
+ MacroUseData(MacroUseData),
}
/// Data for all kinds of functions and methods.
pub decl_id: Option<DefId>,
}
+/// Data about a macro use.
+#[derive(Debug)]
+pub struct MacroUseData {
+ pub span: Span,
+ pub name: String,
+ // Because macro expansion happens before ref-ids are determined,
+ // we use the callee span to reference the associated macro definition.
+ pub callee_span: Span,
+ pub scope: NodeId,
+ pub imported: bool,
+}
+
+macro_rules! option_try(
+ ($e:expr) => (match $e { Some(e) => e, None => return None })
+);
+
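The `option_try!` helper above early-returns `None` from the enclosing function, mirroring what `try!` did for `Result` (and what the later `?` operator does for `Option`). A standalone sketch of its use; `first_char_upper` is a hypothetical function purely for illustration:

```rust
// Early-return `None` if the expression is `None`, otherwise unwrap it.
macro_rules! option_try(
    ($e:expr) => (match $e { Some(e) => e, None => return None })
);

// Hypothetical caller: bails out early when given `None`.
fn first_char_upper(s: Option<&str>) -> Option<char> {
    let s = option_try!(s);
    s.chars().next().map(|c| c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char_upper(Some("hello")), Some('H'));
    assert_eq!(first_char_upper(None), None);
}
```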
impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> {
}
pub fn get_expr_data(&self, expr: &ast::Expr) -> Option<Data> {
+ let hir_node = lowering::lower_expr(self.lcx, expr);
+ let ty = self.tcx.expr_ty_adjusted_opt(&hir_node);
+ if ty.is_none() || ty.unwrap().sty == ty::TyError {
+ return None;
+ }
match expr.node {
ast::ExprField(ref sub_ex, ident) => {
let hir_node = lowering::lower_expr(self.lcx, sub_ex);
- let ty = &self.tcx.expr_ty_adjusted(&hir_node).sty;
- match *ty {
+ match self.tcx.expr_ty_adjusted(&hir_node).sty {
ty::TyStruct(def, _) => {
let f = def.struct_variant().field_named(ident.node.name);
let sub_span = self.span_utils.span_for_last_ident(expr.span);
}
ast::ExprStruct(ref path, _, _) => {
let hir_node = lowering::lower_expr(self.lcx, expr);
- let ty = &self.tcx.expr_ty_adjusted(&hir_node).sty;
- match *ty {
+ match self.tcx.expr_ty_adjusted(&hir_node).sty {
ty::TyStruct(def, _) => {
let sub_span = self.span_utils.span_for_last_ident(path.span);
filter!(self.span_utils, sub_span, path.span, None);
})
}
+ /// Attempt to return MacroUseData for any AST node.
+ ///
+ /// For a given piece of AST defined by the supplied Span and NodeId,
+ /// returns None if the node is not macro-generated or the span is malformed,
+ /// else uses the expansion callsite and callee to return some MacroUseData.
+ pub fn get_macro_use_data(&self, span: Span, id: NodeId) -> Option<MacroUseData> {
+ if !generated_code(span) {
+ return None;
+ }
+ // Note we take care to use the source callsite/callee, to handle
+ // nested expansions and ensure we only generate data for source-visible
+ // macro uses.
+ let callsite = self.tcx.sess.codemap().source_callsite(span);
+ let callee = self.tcx.sess.codemap().source_callee(span);
+ let callee = option_try!(callee);
+ let callee_span = option_try!(callee.span);
+
+        // Ignore attribute macros; their spans are usually mangled.
+ if let MacroAttribute(_) = callee.format {
+ return None;
+ }
+
+        // If the callee is an imported macro from an external crate, we need
+        // to get the source span and name from the session, as their spans
+        // are localized when read in and no longer correspond to the source.
+ if let Some(mac) = self.tcx.sess.imported_macro_spans.borrow().get(&callee_span) {
+ let &(ref mac_name, mac_span) = mac;
+ return Some(MacroUseData {
+ span: callsite,
+ name: mac_name.clone(),
+ callee_span: mac_span,
+ scope: self.enclosing_scope(id),
+ imported: true,
+ });
+ }
+
+ Some(MacroUseData {
+ span: callsite,
+ name: callee.name().to_string(),
+ callee_span: callee_span,
+ scope: self.enclosing_scope(id),
+ imported: false,
+ })
+ }
+
pub fn get_data_for_id(&self, _id: &NodeId) -> Data {
// FIXME
unimplemented!();
VarRef,
TypeRef,
FnRef,
+ Macro,
+ MacroUse,
}
impl<'a, 'tcx: 'a> FmtStrs<'a, 'tcx> {
vec!("refid", "refidcrate", "qualname", "scopeid"),
true,
true),
+ Macro => ("macro",
+ vec!("name", "qualname"),
+ true,
+ true),
+ MacroUse => ("macro_use",
+ vec!("callee_name", "qualname", "scopeid"),
+ true,
+ true),
}
}
sub_span,
svec!(id.index.as_usize(), id.krate, "", scope_id));
}
+
+ pub fn macro_str(&mut self, span: Span, sub_span: Span, name: String, qualname: String) {
+ self.record_with_span(Macro, span, sub_span, svec!(name, qualname));
+ }
+
+ pub fn macro_use_str(&mut self,
+ span: Span,
+ sub_span: Span,
+ name: String,
+ qualname: String,
+ scope_id: NodeId) {
+ let scope_id = self.normalize_node_id(scope_id);
+ self.record_with_span(MacroUse, span, sub_span,
+ svec!(name, qualname, scope_id));
+ }
}
}
}
+ // Given a macro_rules definition span, return the span of the macro's name.
+ pub fn span_for_macro_name(&self, span: Span) -> Option<Span> {
+ let mut toks = self.retokenise_span(span);
+ loop {
+ let ts = toks.real_token();
+ if ts.tok == token::Eof {
+ return None;
+ }
+ if ts.tok == token::Not {
+ let ts = toks.real_token();
+ if ts.tok.is_ident() {
+ return self.make_sub_span(span, Some(ts.sp));
+ } else {
+ return None;
+ }
+ }
+ }
+ }
+
/// Return true if the span is generated code, and
/// it is not a subspan of the root callsite.
///
if sub_span.is_none() {
return true;
}
- // A generated span is deemed invalid if it is not a sub-span of the root
+
+    // If the span comes from a fake filemap, filter it.
+ if !self.sess.codemap().lookup_char_pos(parent.lo).file.is_real_file() {
+ return true;
+ }
+
+ // Otherwise, a generated span is deemed invalid if it is not a sub-span of the root
// callsite. This filters out macro internal variables and most malformed spans.
let span = self.sess.codemap().source_callsite(parent);
- !(parent.lo >= span.lo && parent.hi <= span.hi)
+    !span.contains(parent)
}
}
},
"mips" => cabi_mips::compute_abi_info(ccx, atys, rty, ret_def),
"powerpc" => cabi_powerpc::compute_abi_info(ccx, atys, rty, ret_def),
- "powerpc64" | "powerpc64le" => cabi_powerpc64::compute_abi_info(ccx, atys, rty, ret_def),
+ "powerpc64" => cabi_powerpc64::compute_abi_info(ccx, atys, rty, ret_def),
a => ccx.sess().fatal(&format!("unrecognized arch \"{}\" in target specification", a)
),
}
2)
}
+ // Indicate that we want CodeView debug information on MSVC
+ if cx.sess().target.target.options.is_like_msvc {
+ llvm::LLVMRustAddModuleFlag(cx.llmod(),
+ "CodeView\0".as_ptr() as *const _,
+ 1)
+ }
+
// Prevent bitcode readers from deleting the debug info.
let ptr = "Debug Info Version\0".as_ptr();
llvm::LLVMRustAddModuleFlag(cx.llmod(), ptr as *const _,
use middle::ty::util::Representability;
use require_c_abi_if_variadic;
use rscope::{ElisionFailureInfo, RegionScope};
-use session::Session;
+use session::{Session, CompileResult};
use {CrateCtxt, lookup_full_def};
use TypeAndSubsts;
use lint;
}
}
-pub fn check_wf_new(ccx: &CrateCtxt) {
- ccx.tcx.sess.abort_if_new_errors(|| {
+pub fn check_wf_new(ccx: &CrateCtxt) -> CompileResult {
+ ccx.tcx.sess.track_errors(|| {
let mut visit = wfcheck::CheckTypeWellFormedVisitor::new(ccx);
ccx.tcx.visit_all_items_in_krate(DepNode::WfCheck, &mut visit);
- });
+ })
}
-pub fn check_item_types(ccx: &CrateCtxt) {
- ccx.tcx.sess.abort_if_new_errors(|| {
+pub fn check_item_types(ccx: &CrateCtxt) -> CompileResult {
+ ccx.tcx.sess.track_errors(|| {
let mut visit = CheckItemTypesVisitor { ccx: ccx };
ccx.tcx.visit_all_items_in_krate(DepNode::TypeckItemType, &mut visit);
- });
+ })
}
-pub fn check_item_bodies(ccx: &CrateCtxt) {
- ccx.tcx.sess.abort_if_new_errors(|| {
+pub fn check_item_bodies(ccx: &CrateCtxt) -> CompileResult {
+ ccx.tcx.sess.track_errors(|| {
let mut visit = CheckItemBodiesVisitor { ccx: ccx };
ccx.tcx.visit_all_items_in_krate(DepNode::TypeckItemBody, &mut visit);
- });
+ })
}
-pub fn check_drop_impls(ccx: &CrateCtxt) {
- ccx.tcx.sess.abort_if_new_errors(|| {
+pub fn check_drop_impls(ccx: &CrateCtxt) -> CompileResult {
+ ccx.tcx.sess.track_errors(|| {
let _task = ccx.tcx.dep_graph.in_task(DepNode::Dropck);
let drop_trait = match ccx.tcx.lang_items.drop_trait() {
Some(id) => ccx.tcx.lookup_trait_def(id), None => { return }
}
}
});
- });
+ })
}
fn check_bare_fn<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>,
return;
}
_ => {
- span_err!(self.tcx.sess, item.span, E0118,
- "no base type found for inherent implementation; \
- implement a trait or new type instead");
+ struct_span_err!(self.tcx.sess, item.span, E0118,
+ "no base type found for inherent implementation")
+ .span_help(item.span,
+ "either implement a trait on it or create a newtype to wrap it \
+ instead")
+ .emit();
return;
}
}
"##,
E0118: r##"
-Rust can't find a base type for an implementation you are providing, or the type
-cannot have an implementation. For example, only a named type or a trait can
-have an implementation:
+You're trying to write an inherent implementation for something that is
+neither a struct nor an enum. Erroneous code example:
```
-type NineString = [char, ..9] // This isn't a named type (struct, enum or trait)
-impl NineString {
- // Some code here
+impl (u8, u8) { // error: no base type found for inherent implementation
+ fn get_state(&self) -> String {
+ // ...
+ }
+}
+```
+
+To fix this error, please implement a trait on the type or wrap it in a struct.
+Example:
+
+```
+// we create a trait here
+trait LiveLongAndProsper {
+ fn get_state(&self) -> String;
+}
+
+// and now you can implement it on (u8, u8)
+impl LiveLongAndProsper for (u8, u8) {
+ fn get_state(&self) -> String {
+ "He's dead, Jim!".to_owned()
+ }
}
```
-In the other, simpler case, Rust just can't find the type you are providing an
-impelementation for:
+Alternatively, you can create a newtype. A newtype is a tuple struct with a
+single field that wraps the original type. For example, `NewType` is a newtype
+over `Foo` in `struct NewType(Foo);`.
+Example:
```
-impl SomeTypeThatDoesntExist { }
+struct TypeWrapper((u8, u8));
+
+impl TypeWrapper {
+ fn get_state(&self) -> String {
+ "Fascinating!".to_owned()
+ }
+}
```
"##,
use middle::infer::{self, TypeOrigin};
use middle::subst;
use middle::ty::{self, Ty, TypeFoldable};
-use session::config;
+use session::{config, CompileResult};
use util::common::time;
use rustc_front::hir;
}
}
-pub fn check_crate(tcx: &ty::ctxt, trait_map: ty::TraitMap) {
+pub fn check_crate(tcx: &ty::ctxt, trait_map: ty::TraitMap) -> CompileResult {
let time_passes = tcx.sess.time_passes();
let ccx = CrateCtxt {
trait_map: trait_map,
// this ensures that later parts of type checking can assume that items
// have valid types and not error
- tcx.sess.abort_if_new_errors(|| {
+ try!(tcx.sess.track_errors(|| {
time(time_passes, "type collecting", ||
collect::collect_item_types(tcx));
- });
+ }));
time(time_passes, "variance inference", ||
variance::infer_variance(tcx));
- tcx.sess.abort_if_new_errors(|| {
+ try!(tcx.sess.track_errors(|| {
time(time_passes, "coherence checking", ||
coherence::check_coherence(&ccx));
- });
+ }));
- time(time_passes, "wf checking", ||
- check::check_wf_new(&ccx));
+ try!(time(time_passes, "wf checking", ||
+ check::check_wf_new(&ccx)));
- time(time_passes, "item-types checking", ||
- check::check_item_types(&ccx));
+ try!(time(time_passes, "item-types checking", ||
+ check::check_item_types(&ccx)));
- time(time_passes, "item-bodies checking", ||
- check::check_item_bodies(&ccx));
+ try!(time(time_passes, "item-bodies checking", ||
+ check::check_item_bodies(&ccx)));
- time(time_passes, "drop-impl checking", ||
- check::check_drop_impls(&ccx));
+ try!(time(time_passes, "drop-impl checking", ||
+ check::check_drop_impls(&ccx)));
check_for_entry_fn(&ccx);
- tcx.sess.abort_if_errors();
+
+ let err_count = tcx.sess.err_count();
+ if err_count == 0 {
+ Ok(())
+ } else {
+ Err(err_count)
+ }
}
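The migration above replaces `abort_if_new_errors` (which killed the session) with `track_errors`, which reports how many errors a pass produced so callers can propagate with `try!`. A minimal sketch of that pattern, assuming a stand-in `Session` holding only an error counter (the real rustc `Session` is far richer):

```rust
use std::cell::Cell;

type CompileResult = Result<(), usize>;

// Stand-in for rustc's Session: just an interior-mutable error count.
struct Session {
    err_count: Cell<usize>,
}

impl Session {
    // Run a pass; report Err with the number of errors it added, instead
    // of aborting the whole compilation on the spot.
    fn track_errors<F: FnOnce()>(&self, pass: F) -> CompileResult {
        let before = self.err_count.get();
        pass();
        let new_errors = self.err_count.get() - before;
        if new_errors == 0 { Ok(()) } else { Err(new_errors) }
    }
}

fn main() {
    let sess = Session { err_count: Cell::new(0) };
    assert_eq!(sess.track_errors(|| {}), Ok(()));
    assert_eq!(sess.track_errors(|| sess.err_count.set(sess.err_count.get() + 2)),
               Err(2));
}
```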
__build_diagnostic_array! { librustc_typeck, DIAGNOSTICS }
/// * `a-z`
/// * `A-Z`
///
- /// # Failure
+ /// # Errors
///
/// Returns `None` if the `char` does not refer to a digit in the given radix.
///
&arenas,
&name,
resolve::MakeGlobMap::No,
- |tcx, _, analysis| {
+ |tcx, _, analysis, _| {
let _ignore = tcx.dep_graph.in_ignore();
let ty::CrateAnalysis { access_levels, .. } = analysis;
use serialize::json::{self, ToJson};
use syntax::{abi, ast};
+use syntax::feature_gate::UnstableFeatures;
use rustc::middle::cstore::LOCAL_CRATE;
use rustc::middle::def_id::{CRATE_DEF_INDEX, DefId};
use rustc::middle::privacy::AccessLevels;
use rustc::middle::stability;
+use rustc::session::config::get_unstable_features_setting;
use rustc_front::hir;
use clean::{self, SelfTy};
fn item_function(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item,
f: &clean::Function) -> fmt::Result {
+ let vis_constness = match get_unstable_features_setting() {
+ UnstableFeatures::Allow => f.constness,
+ _ => hir::Constness::NotConst
+ };
try!(write!(w, "<pre class='rust fn'>{vis}{constness}{unsafety}{abi}fn \
{name}{generics}{decl}{where_clause}</pre>",
vis = VisSpace(it.visibility),
- constness = ConstnessSpace(f.constness),
+ constness = ConstnessSpace(vis_constness),
unsafety = UnsafetySpace(f.unsafety),
abi = AbiSpace(f.abi),
name = it.name.as_ref().unwrap(),
href(did).map(|p| format!("{}{}", p.0, anchor)).unwrap_or(anchor)
}
};
+ let vis_constness = match get_unstable_features_setting() {
+ UnstableFeatures::Allow => constness,
+ _ => hir::Constness::NotConst
+ };
write!(w, "{}{}{}fn <a href='{href}' class='fnname'>{name}</a>\
{generics}{decl}{where_clause}",
- ConstnessSpace(constness),
+ ConstnessSpace(vis_constness),
UnsafetySpace(unsafety),
match abi {
Abi::Rust => String::new(),
//
// This doesn't have to be checked for overflow since allocation size
// in bytes will overflow earlier than multiplication by 10.
- cap * 10 / 11
+ //
+ // As per https://github.com/rust-lang/rust/pull/30991 this is updated
+    // to be: (cap * num + num - 1) / den
+ (cap * 10 + 10 - 1) / 11
}
}
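The switch from floor division to the rounded formula matters for raw capacities where `cap * 10` is not a multiple of 11: plain floor division could under-report the usable capacity by one slot. A standalone comparison of the two formulas (not the actual std internals):

```rust
// Old formula: plain floor division at load factor 10/11.
fn usable_capacity_old(cap: usize) -> usize {
    cap * 10 / 11
}

// New formula from the diff above: rounds up the quotient.
fn usable_capacity_new(cap: usize) -> usize {
    (cap * 10 + 10 - 1) / 11
}

fn main() {
    // For a raw capacity of 16: 160 / 11 == 14, but (160 + 9) / 11 == 15.
    assert_eq!(usable_capacity_old(16), 14);
    assert_eq!(usable_capacity_new(16), 15);
    // When the division is exact, both formulas agree: 110 / 11 == 10.
    assert_eq!(usable_capacity_old(11), 10);
    assert_eq!(usable_capacity_new(11), 10);
}
```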
assert_eq!(a[&2], "two");
assert_eq!(a[&3], "three");
}
+
+ #[test]
+ fn test_capacity_not_less_than_len() {
+ let mut a = HashMap::new();
+ let mut item = 0;
+
+ for _ in 0..116 {
+ a.insert(item, 0);
+ item += 1;
+ }
+
+ assert!(a.capacity() > a.len());
+
+ let free = a.capacity() - a.len();
+ for _ in 0..free {
+ a.insert(item, 0);
+ item += 1;
+ }
+
+ assert_eq!(a.len(), a.capacity());
+
+ // Insert at capacity should cause allocation.
+ a.insert(item, 0);
+ assert!(a.capacity() > a.len());
+ }
}
/// - arm
/// - aarch64
/// - mips
- /// - mipsel
/// - powerpc
/// - powerpc64
- /// - powerpc64le
#[stable(feature = "env", since = "1.0.0")]
pub const ARCH: &'static str = super::arch::ARCH;
pub const ARCH: &'static str = "mips";
}
-#[cfg(target_arch = "mipsel")]
-mod arch {
- pub const ARCH: &'static str = "mipsel";
-}
-
#[cfg(target_arch = "powerpc")]
mod arch {
pub const ARCH: &'static str = "powerpc";
pub const ARCH: &'static str = "powerpc64";
}
-#[cfg(target_arch = "powerpc64le")]
-mod arch {
- pub const ARCH: &'static str = "powerpc64le";
-}
-
#[cfg(target_arch = "le32")]
mod arch {
pub const ARCH: &'static str = "le32";
/// information like the entry's path and possibly other metadata can be
/// learned.
///
-/// # Failure
+/// # Errors
///
/// This `io::Result` will be an `Err` if there's some sort of intermittent
/// IO error during iteration.
/// stringification of all the tokens passed to the macro. No restrictions
/// are placed on the syntax of the macro invocation itself.
///
+ /// Note that the expanded results of the input tokens may change in the
+ /// future. You should be careful if you rely on the output.
+ ///
/// # Examples
///
/// ```
}
}
-#[cfg(any(target_arch = "mips",
- target_arch = "mipsel"))]
+#[cfg(target_arch = "mips")]
mod arch {
use super::mode_t;
use os::raw::{c_long, c_ulong};
}
}
-#[cfg(any(target_arch = "x86_64", target_arch = "powerpc64",
- target_arch = "powerpc64le"))]
+#[cfg(any(target_arch = "x86_64", target_arch = "powerpc64"))]
mod arch {
use super::{dev_t, mode_t};
use os::raw::{c_long, c_int};
all(target_os = "linux", any(target_arch = "aarch64",
target_arch = "arm",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le"))))]
+ target_arch = "powerpc64"))))]
#[stable(feature = "raw_os", since = "1.1.0")] pub type c_char = u8;
#[cfg(not(any(target_os = "android",
all(target_os = "linux", any(target_arch = "aarch64",
target_arch = "arm",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le")))))]
+ target_arch = "powerpc64")))))]
#[stable(feature = "raw_os", since = "1.1.0")] pub type c_char = i8;
#[stable(feature = "raw_os", since = "1.1.0")] pub type c_schar = i8;
#[stable(feature = "raw_os", since = "1.1.0")] pub type c_uchar = u8;
}
- /// Gets information on the file, directory, etc at this path.
+ /// Query the file system to get information about a file, directory, etc.
///
- /// Consult the `fs::metadata` documentation for more info.
+ /// This function will traverse symbolic links to query information about the
+ /// destination file.
///
- /// This call preserves identical runtime/error semantics with
- /// `fs::metadata`.
+ /// This is an alias to `fs::metadata`.
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn metadata(&self) -> io::Result<fs::Metadata> {
fs::metadata(self)
}
- /// Gets information on the file, directory, etc at this path.
+ /// Query the metadata about a file without following symlinks.
///
- /// Consult the `fs::symlink_metadata` documentation for more info.
- ///
- /// This call preserves identical runtime/error semantics with
- /// `fs::symlink_metadata`.
+ /// This is an alias to `fs::symlink_metadata`.
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn symlink_metadata(&self) -> io::Result<fs::Metadata> {
fs::symlink_metadata(self)
}
- /// Returns the canonical form of a path, normalizing all components and
- /// eliminate all symlinks.
+ /// Returns the canonical form of the path with all intermediate components
+ /// normalized and symbolic links resolved.
///
- /// This call preserves identical runtime/error semantics with
- /// `fs::canonicalize`.
+ /// This is an alias to `fs::canonicalize`.
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn canonicalize(&self) -> io::Result<PathBuf> {
fs::canonicalize(self)
}
- /// Reads the symlink at this path.
+ /// Reads a symbolic link, returning the file that the link points to.
///
- /// For more information see `fs::read_link`.
+ /// This is an alias to `fs::read_link`.
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn read_link(&self) -> io::Result<PathBuf> {
fs::read_link(self)
}
- /// Reads the directory at this path.
+ /// Returns an iterator over the entries within a directory.
+ ///
+ /// The iterator will yield instances of `io::Result<DirEntry>`. New errors may
+ /// be encountered after an iterator is initially constructed.
///
- /// For more information see `fs::read_dir`.
+ /// This is an alias to `fs::read_dir`.
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn read_dir(&self) -> io::Result<fs::ReadDir> {
fs::read_dir(self)
}
- /// Boolean value indicator whether the underlying file exists on the local
- /// filesystem. Returns false in exactly the cases where `fs::metadata`
- /// fails.
+ /// Returns whether the path points at an existing entity.
+ ///
+ /// This function will traverse symbolic links to query information about the
+ /// destination file. In case of broken symbolic links this will return `false`.
+ ///
+ /// # Examples
+ ///
+ /// ```no_run
+ /// use std::path::Path;
+ /// assert_eq!(Path::new("does_not_exist.txt").exists(), false);
+ /// ```
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn exists(&self) -> bool {
fs::metadata(self).is_ok()
}
- /// Whether the underlying implementation (be it a file path, or something
- /// else) points at a "regular file" on the FS. Will return false for paths
- /// to non-existent locations or directories or other non-regular files
- /// (named pipes, etc). Follows links when making this determination.
+ /// Returns whether the path is pointing at a regular file.
+ ///
+ /// This function will traverse symbolic links to query information about the
+ /// destination file. In case of broken symbolic links this will return `false`.
+ ///
+ /// # Examples
+ ///
+ /// ```no_run
+ /// use std::path::Path;
+ /// assert_eq!(Path::new("./is_a_directory/").is_file(), false);
+ /// assert_eq!(Path::new("a_file.txt").is_file(), true);
+ /// ```
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn is_file(&self) -> bool {
fs::metadata(self).map(|m| m.is_file()).unwrap_or(false)
}
- /// Whether the underlying implementation (be it a file path, or something
- /// else) is pointing at a directory in the underlying FS. Will return
- /// false for paths to non-existent locations or if the item is not a
- /// directory (eg files, named pipes, etc). Follows links when making this
- /// determination.
+ /// Returns whether the path is pointing at a directory.
+ ///
+ /// This function will traverse symbolic links to query information about the
+ /// destination file. In case of broken symbolic links this will return `false`.
+ ///
+ /// # Examples
+ ///
+ /// ```no_run
+ /// use std::path::Path;
+ /// assert_eq!(Path::new("./is_a_directory/").is_dir(), true);
+ /// assert_eq!(Path::new("a_file.txt").is_dir(), false);
+ /// ```
#[stable(feature = "path_ext", since = "1.5.0")]
pub fn is_dir(&self) -> bool {
fs::metadata(self).map(|m| m.is_dir()).unwrap_or(false)
///
/// assert!(ecode.success());
/// ```
+///
+/// # Note
+///
+/// Take note that there is no implementation of
+/// [`Drop`](../../core/ops/trait.Drop.html) for child processes, so if you
+/// do not ensure the `Child` has exited then it will continue to run, even
+/// after the `Child` handle to the child process has gone out of scope.
+///
+/// Calling `wait` (or other functions that wrap around it) will make the
+/// parent process wait until the child has actually exited before continuing.
#[stable(feature = "process", since = "1.0.0")]
pub struct Child {
handle: imp::Process,
target_arch = "arm",
target_arch = "aarch64",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le")))]
+ target_arch = "powerpc64")))]
fn getrandom(buf: &mut [u8]) -> libc::c_long {
#[cfg(target_arch = "x86_64")]
const NR_GETRANDOM: libc::c_long = 318;
const NR_GETRANDOM: libc::c_long = 355;
#[cfg(target_arch = "arm")]
const NR_GETRANDOM: libc::c_long = 384;
- #[cfg(any(target_arch = "powerpc", target_arch = "powerpc64",
- target_arch = "powerpc64le"))]
+ #[cfg(any(target_arch = "powerpc", target_arch = "powerpc64"))]
const NR_GETRANDOM: libc::c_long = 359;
#[cfg(target_arch = "aarch64")]
const NR_GETRANDOM: libc::c_long = 278;
target_arch = "arm",
target_arch = "aarch64",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le"))))]
+ target_arch = "powerpc64"))))]
fn getrandom(_buf: &mut [u8]) -> libc::c_long { -1 }
fn getrandom_fill_bytes(v: &mut [u8]) {
target_arch = "arm",
target_arch = "aarch64",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le")))]
+ target_arch = "powerpc64")))]
fn is_getrandom_available() -> bool {
use sync::atomic::{AtomicBool, Ordering};
use sync::Once;
target_arch = "arm",
target_arch = "aarch64",
target_arch = "powerpc",
- target_arch = "powerpc64",
- target_arch = "powerpc64le"))))]
+ target_arch = "powerpc64"))))]
fn is_getrandom_available() -> bool { false }
/// A random number generator that retrieves randomness straight from
/// the predicate must always be checked each time this function returns to
/// protect against spurious wakeups.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the mutex being waited on is
/// poisoned when this thread re-acquires the lock. For more information,
/// held. An RAII guard is returned to allow scoped unlock of the lock. When
/// the guard goes out of scope, the mutex will be unlocked.
///
- /// # Failure
+ /// # Errors
///
/// If another user of this mutex panicked while holding the mutex, then
/// this call will return an error once the mutex is acquired.
///
/// This function does not block.
///
- /// # Failure
+ /// # Errors
///
/// If another user of this mutex panicked while holding the mutex, then
/// this call will return failure if the mutex would otherwise be
/// Consumes this mutex, returning the underlying data.
///
- /// # Failure
+ /// # Errors
///
/// If another user of this mutex panicked while holding the mutex, then
/// this call will return an error instead.
/// Since this call borrows the `Mutex` mutably, no actual locking needs to
/// take place---the mutable borrow statically guarantees no locks exist.
///
- /// # Failure
+ /// # Errors
///
/// If another user of this mutex panicked while holding the mutex, then
/// this call will return an error instead.
/// Returns an RAII guard which will release this thread's shared access
/// once it is dropped.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the RwLock is poisoned. An RwLock
/// is poisoned whenever a writer panics while holding an exclusive lock.
/// This function does not provide any guarantees with respect to the ordering
/// of whether contentious readers or writers will acquire the lock first.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the RwLock is poisoned. An RwLock
/// is poisoned whenever a writer panics while holding an exclusive lock. An
/// Returns an RAII guard which will drop the write access of this rwlock
/// when dropped.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the RwLock is poisoned. An RwLock
/// is poisoned whenever a writer panics while holding an exclusive lock.
/// This function does not provide any guarantees with respect to the ordering
/// of whether contentious readers or writers will acquire the lock first.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the RwLock is poisoned. An RwLock
/// is poisoned whenever a writer panics while holding an exclusive lock. An
/// Consumes this `RwLock`, returning the underlying data.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the RwLock is poisoned. An RwLock
/// is poisoned whenever a writer panics while holding an exclusive lock. An
/// Since this call borrows the `RwLock` mutably, no actual locking needs to
/// take place---the mutable borrow statically guarantees no locks exist.
///
- /// # Failure
+ /// # Errors
///
/// This function will return an error if the RwLock is poisoned. An RwLock
/// is poisoned whenever a writer panics while holding an exclusive lock. An
#[cfg(target_arch = "aarch64")]
pub const unwinder_private_data_size: usize = 2;
-#[cfg(any(target_arch = "mips", target_arch = "mipsel"))]
+#[cfg(target_arch = "mips")]
pub const unwinder_private_data_size: usize = 2;
-#[cfg(any(target_arch = "powerpc", target_arch = "powerpc64",
- target_arch = "powerpc64le"))]
+#[cfg(any(target_arch = "powerpc", target_arch = "powerpc64"))]
pub const unwinder_private_data_size: usize = 2;
#[repr(C)]
/// calling this method already holds the lock, the call shall succeed without
/// blocking.
///
- /// # Failure
+ /// # Errors
///
/// If another user of this mutex panicked while holding the mutex, then
/// this call will return failure if the mutex would otherwise be
///
/// This function does not block.
///
- /// # Failure
+ /// # Errors
///
/// If another user of this mutex panicked while holding the mutex, then
/// this call will return failure if the mutex would otherwise be
#[cfg(any(target_os = "macos",
target_os = "ios",
target_os = "netbsd",
- target_os = "openbsd"))]
- fn name_bytes(&self) -> &[u8] {
- unsafe {
- ::slice::from_raw_parts(self.entry.d_name.as_ptr() as *const u8,
- self.entry.d_namlen as usize)
- }
- }
- #[cfg(any(target_os = "freebsd",
+ target_os = "openbsd",
+ target_os = "freebsd",
target_os = "dragonfly",
target_os = "bitrig"))]
fn name_bytes(&self) -> &[u8] {
unsafe {
::slice::from_raw_parts(self.entry.d_name.as_ptr() as *const u8,
- self.entry.d_namelen as usize)
+ self.entry.d_namlen as usize)
}
}
#[cfg(any(target_os = "android",
Handler { _data: MAIN_ALTSTACK };
}
- pub unsafe fn make_handler() -> Handler {
- let alt_stack = mmap(ptr::null_mut(),
- SIGSTKSZ,
- PROT_READ | PROT_WRITE,
- MAP_PRIVATE | MAP_ANON,
- -1,
- 0);
- if alt_stack == MAP_FAILED {
+ unsafe fn get_stackp() -> *mut libc::c_void {
+ let stackp = mmap(ptr::null_mut(),
+ SIGSTKSZ,
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANON,
+ -1,
+ 0);
+ if stackp == MAP_FAILED {
panic!("failed to allocate an alternative stack");
}
+ stackp
+ }
- let mut stack: libc::stack_t = mem::zeroed();
+ #[cfg(any(target_os = "linux",
+ target_os = "macos",
+ target_os = "bitrig",
+ target_os = "netbsd",
+ target_os = "openbsd"))]
+ unsafe fn get_stack() -> libc::stack_t {
+ libc::stack_t { ss_sp: get_stackp(), ss_flags: 0, ss_size: SIGSTKSZ }
+ }
- stack.ss_sp = alt_stack;
- stack.ss_flags = 0;
- stack.ss_size = SIGSTKSZ;
+ #[cfg(any(target_os = "freebsd",
+ target_os = "dragonfly"))]
+ unsafe fn get_stack() -> libc::stack_t {
+ libc::stack_t { ss_sp: get_stackp() as *mut i8, ss_flags: 0, ss_size: SIGSTKSZ }
+ }
+ pub unsafe fn make_handler() -> Handler {
+ let stack = get_stack();
sigaltstack(&stack, ptr::null_mut());
-
- Handler { _data: alt_stack }
+ Handler { _data: stack.ss_sp as *mut libc::c_void }
}
pub unsafe fn drop_handler(handler: &mut Handler) {
/// A SyntaxContext represents a chain of macro-expandings
/// and renamings. Each macro expansion corresponds to
/// a fresh u32. This u32 is a reference to a table stored
-// in thread-local storage.
-// The special value EMPTY_CTXT is used to indicate an empty
-// syntax context.
+/// in thread-local storage.
+/// The special value EMPTY_CTXT is used to indicate an empty
+/// syntax context.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, RustcEncodable, RustcDecodable)]
pub struct SyntaxContext(pub u32);
}
(&TokenTree::Token(sp, token::DocComment(name)), _) => {
let stripped = strip_doc_comment_decoration(&name.as_str());
+
+ // Searches for the occurrences of `"#*` and returns the minimum number of `#`s
+ // required to wrap the text.
+ let num_of_hashes = stripped.chars().scan(0, |cnt, x| {
+ *cnt = if x == '"' {
+ 1
+ } else if *cnt != 0 && x == '#' {
+ *cnt + 1
+ } else {
+ 0
+ };
+ Some(*cnt)
+ }).max().unwrap_or(0);
+
TokenTree::Delimited(sp, Rc::new(Delimited {
delim: token::Bracket,
open_span: sp,
token::Plain)),
TokenTree::Token(sp, token::Eq),
TokenTree::Token(sp, token::Literal(
- token::StrRaw(token::intern(&stripped), 0), None))],
+ token::StrRaw(token::intern(&stripped), num_of_hashes), None))],
close_span: sp,
}))
}
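The scan above computes how many `#`s the raw-string delimiter needs so that a `"` followed by `#`s inside the doc-comment text cannot terminate the literal early. Extracted as a standalone sketch:

```rust
// Minimum number of '#'s so that a raw string literal delimited by
// r#..#"…"#..# is not closed early by a `"##…` run inside the text.
fn num_of_hashes(stripped: &str) -> usize {
    stripped.chars().scan(0usize, |cnt, x| {
        *cnt = if x == '"' {
            1
        } else if *cnt != 0 && x == '#' {
            *cnt + 1
        } else {
            0
        };
        Some(*cnt)
    }).max().unwrap_or(0)
}

fn main() {
    // No quote at all: a bare r"…" would do.
    assert_eq!(num_of_hashes("no quotes here"), 0);
    // A plain quote: one '#' suffices (r#"…"#).
    assert_eq!(num_of_hashes("a \"quoted\" word"), 1);
    // `"##` inside the text forces three '#'s (r###"…"###).
    assert_eq!(num_of_hashes("tricky \"## sequence"), 3);
}
```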
span
}
+ /// Return the source callee.
+ ///
+ /// Returns None if the supplied span has no expansion trace,
+ /// else returns the NameAndSpan for the macro definition
+ /// corresponding to the source callsite.
+ pub fn source_callee(&self, sp: Span) -> Option<NameAndSpan> {
+ let mut span = sp;
+ while let Some(callsite) = self.with_expn_info(span.expn_id,
+ |ei| ei.map(|ei| ei.call_site.clone())) {
+ if let Some(_) = self.with_expn_info(callsite.expn_id,
+ |ei| ei.map(|ei| ei.call_site.clone())) {
+ span = callsite;
+        } else {
+ return self.with_expn_info(span.expn_id,
+ |ei| ei.map(|ei| ei.callee.clone()));
+ }
+ }
+ None
+ }
+
pub fn span_to_filename(&self, sp: Span) -> FileName {
self.lookup_char_pos(sp.lo).file.name.to_string()
}
};
let lo = self.cm.lookup_char_pos(sp.lo);
let hi = self.cm.lookup_char_pos(sp.hi);
- let elide_sp = (lo.line - hi.line) > MAX_SP_LINES;
+ let elide_sp = (hi.line - lo.line) >= MAX_SP_LINES;
let line_num = line.line_index + 1;
if !(lo.line <= line_num && hi.line >= line_num) {
\x20 ^ ^\n";
let expect0_end = "dummy.txt: 5 ccccc\n\
- \x20 ...\n\
+ dummy.txt: 6 xxxxx\n\
dummy.txt: 7 yyyyy\n\
\x20 ^\n\
\x20 ...\n\
float.trunc() as usize,
format!(".{}", fstr.splitn(2, ".").last().unwrap())));
}
- err.emit();
- self.abort_if_errors();
+ return Err(err);
}
_ => {
or did you mean the comma-separated arguments \
'a, Type?");
err.span_note(mk_sp(span_lo, span_hi), &msg);
- err.emit();
-
- self.abort_if_errors()
+ return Err(err);
}
// First parse types.
of possibly redeclaring it",
paths.name));
}
- err.emit();
- self.abort_if_errors();
+ return Err(err);
}
match paths.result {
token::Paren => try!(self.popen()),
token::Bracket => try!(word(&mut self.s, "[")),
token::Brace => {
- // head-ibox, will be closed by bopen()
- try!(self.ibox(0));
- // Don't ask me why the regular bopen() does
- // more then just opening a brace...
- try!(self.bopen())
+ try!(self.head(""));
+ try!(self.bopen());
}
}
try!(self.print_tts(&m.node.tts));
fn num_cpus() -> usize {
let mut cpus: libc::c_uint = 0;
let mut cpus_size = std::mem::size_of_val(&cpus);
- let mut mib = [libc::CTL_HW, libc::HW_AVAILCPU, 0, 0];
unsafe {
- libc::sysctl(mib.as_mut_ptr(),
- 2,
- &mut cpus as *mut _ as *mut _,
- &mut cpus_size as *mut _ as *mut _,
- 0 as *mut _,
- 0);
+ cpus = libc::sysconf(libc::_SC_NPROCESSORS_ONLN) as libc::c_uint;
}
if cpus < 1 {
- mib[1] = libc::HW_NCPU;
+ let mut mib = [libc::CTL_HW, libc::HW_NCPU, 0, 0];
unsafe {
libc::sysctl(mib.as_mut_ptr(),
2,
pub fn ham() { }
}
-fn main() { ham(); eggs(); }
-//~^ ERROR unresolved name `eggs`
+fn main() {
+ ham();
+ // Expect `eggs` to resolve because the compiler inserts a fake
+ // name binding for it
+ eggs();
+}
use baz::zed::bar;
//~^ ERROR unresolved import `baz::zed::bar`. Could not find `zed` in `baz`
-
mod baz {}
mod zed {
pub fn bar() { println!("bar3"); }
}
-fn main() { bar(); }
-//~^ ERROR unresolved name `bar`
+fn main() {
+ bar();
+}
fn test1() {
use bar::gpriv;
//~^ ERROR unresolved import `bar::gpriv`. There is no `gpriv` in `bar`
+
+ // This should pass because the compiler will insert a fake name binding
+ // for `gpriv`
gpriv();
- //~^ ERROR unresolved name `gpriv`
}
#[start] fn main(_: isize, _: *const *const u8) -> isize { 3 }
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+//
+// Test pretty-printing of a macro with braces but without a terminating
+// semicolon; this used to panic before the fix.
+
+// pretty-compare-only
+// pp-exact
+
+fn main() { b!{ } c }
abort_on_err(driver::phase_3_run_analysis_passes(
&sess, &cstore, ast_map, &arenas, &id,
- MakeGlobMap::No, |tcx, mir_map, analysis| {
+ MakeGlobMap::No, |tcx, mir_map, analysis, _| {
- let trans = driver::phase_4_translate_to_llvm(tcx, mir_map, analysis);
+ let trans = driver::phase_4_translate_to_llvm(tcx, mir_map.unwrap(), analysis);
let crates = tcx.sess.cstore.used_crates(LinkagePreference::RequireDynamic);
// seemingly completely unrelated change.
// Unfortunately, LLVM has no "disable" option for this, so we have to set
// "enable" to 0 instead.
+
// compile-flags:-g -Cllvm-args=-enable-tail-merge=0
// ignore-pretty as this critically relies on line numbers
() => ((file!(), line!()))
}
-#[cfg(all(unix,
- not(target_os = "macos"),
- not(target_os = "ios"),
- not(target_os = "android"),
- not(all(target_os = "linux", target_arch = "arm"))))]
macro_rules! dump_and_die {
($($pos:expr),*) => ({
// FIXME(#18285): we cannot include the current position because
// the macro span takes over the last frame's file/line.
- dump_filelines(&[$($pos),*]);
- panic!();
+ if cfg!(target_os = "macos") ||
+ cfg!(target_os = "ios") ||
+ cfg!(target_os = "android") ||
+ cfg!(all(target_os = "linux", target_arch = "arm")) ||
+ cfg!(all(windows, target_env = "gnu")) {
+ // Skip these platforms: this support isn't implemented there yet.
+ } else {
+ dump_filelines(&[$($pos),*]);
+ panic!();
+ }
})
}
-// this does not work on Windows, Android, OSX or iOS
-#[cfg(not(all(unix,
- not(target_os = "macos"),
- not(target_os = "ios"),
- not(target_os = "android"),
- not(all(target_os = "linux", target_arch = "arm")))))]
-macro_rules! dump_and_die {
- ($($pos:expr),*) => ({ let _ = [$($pos),*]; })
-}
-
// we can't use a function as it will alter the backtrace
macro_rules! check {
($counter:expr; $($pos:expr),*) => ({
// no-pretty-expanded FIXME #15189
// ignore-android FIXME #17520
-// ignore-msvc FIXME #28133
+// compile-flags:-g
use std::env;
use std::process::{Command, Stdio};
use std::str;
-use std::ops::{Drop, FnMut, FnOnce};
#[inline(never)]
fn foo() {
let out = p.wait_with_output().unwrap();
assert!(!out.status.success());
let s = str::from_utf8(&out.stderr).unwrap();
- assert!(s.contains("stack backtrace") && s.contains("foo::h"),
+ assert!(s.contains("stack backtrace") && s.contains(" - foo"),
"bad output: {}", s);
// Make sure the stack trace is *not* printed
let out = p.wait_with_output().unwrap();
assert!(!out.status.success());
let s = str::from_utf8(&out.stderr).unwrap();
- assert!(!s.contains("stack backtrace") && !s.contains("foo::h"),
+ assert!(!s.contains("stack backtrace") && !s.contains(" - foo"),
"bad output2: {}", s);
// Make sure a stack trace is printed
let s = str::from_utf8(&out.stderr).unwrap();
// loosened the following from double::h to double:: due to
// spurious failures on mac, 32bit, optimized
- assert!(s.contains("stack backtrace") && s.contains("double::"),
+ assert!(s.contains("stack backtrace") && s.contains(" - double"),
"bad output3: {}", s);
// Make sure a stack trace isn't printed too many times
"bad output4: {}", s);
}
-#[cfg(not(all(windows, target_arch = "x86")))]
fn main() {
+ if cfg!(windows) && cfg!(target_arch = "x86") && cfg!(target_env = "gnu") {
+ return
+ }
+
let args: Vec<String> = env::args().collect();
if args.len() >= 2 && args[1] == "fail" {
foo();
runtest(&args[0]);
}
}
-
-// See issue 28218
-#[cfg(all(windows, target_arch = "x86"))]
-fn main() {}
#[cfg(target_arch = "powerpc64")]
pub fn main() { }
-
-#[cfg(target_arch = "powerpc64le")]
-pub fn main() { }
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// The number of `#`s used to wrap the documentation comment should vary with its content.
+//
+// Related issue: #27489
+
+macro_rules! homura {
+ ($x:expr, #[$y:meta]) => (assert_eq!($x, stringify!($y)))
+}
+
+fn main() {
+ homura! {
+ r#"doc = r" Madoka""#,
+ /// Madoka
+ };
+
+ homura! {
+ r##"doc = r#" One quote mark: ["]"#"##,
+ /// One quote mark: ["]
+ };
+
+ homura! {
+ r##"doc = r#" Two quote marks: [""]"#"##,
+ /// Two quote marks: [""]
+ };
+
+ homura! {
+ r#####"doc = r####" Raw string ending sequences: ["###]"####"#####,
+ /// Raw string ending sequences: ["###]
+ };
+}