$ pacman -Sy pacman-mirrors
```
-Download [MinGW from
-here](http://mingw-w64.org/doku.php/download/mingw-builds), and choose the
-`version=4.9.x,threads=win32,exceptions=dwarf/seh` flavor when installing. Also, make sure to install to a path without spaces in it. After installing,
-add its `bin` directory to your `PATH`. This is due to [#28260](https://github.com/rust-lang/rust/issues/28260), in the future,
-installing from pacman should be just fine.
+ Download [MinGW from
+ here](http://mingw-w64.org/doku.php/download/mingw-builds), and choose the
+ `version=4.9.x,threads=win32,exceptions=dwarf/seh` flavor when installing. Also, make sure to install to a path without spaces in it. After installing,
 + add its `bin` directory to your `PATH`. This is due to [#28260](https://github.com/rust-lang/rust/issues/28260); in the future,
+ installing from pacman should be just fine.
- ```
+ ```sh
# Make git available in MSYS2 (if not already available on path)
$ pacman -S git
3. Run `mingw32_shell.bat` or `mingw64_shell.bat` from wherever you installed
   MSYS2 (e.g. `C:\msys`), depending on whether you want 32-bit or 64-bit Rust.
 + (As of the latest version of MSYS2, you have to run `msys2_shell.cmd -mingw32`
 + or `msys2_shell.cmd -mingw64` from the command line instead.)
4. Navigate to Rust's source code, configure and build it:
opt compiler-docs 0 "build compiler documentation"
opt optimize-tests 1 "build tests with optimizations"
opt debuginfo-tests 0 "build tests with debugger metadata"
-opt libcpp 1 "build with llvm with libc++ instead of libstdc++ when using clang"
+opt libcpp 1 "build llvm with libc++ instead of libstdc++ when using clang"
opt llvm-assertions 0 "build LLVM with assertions"
opt debug-assertions 0 "build with debugging assertions"
opt fast-make 0 "use .gitmodules as timestamp for submodule deps"
check-stage$(1)-T-$(2)-H-$(3)-incremental-exec \
check-stage$(1)-T-$(2)-H-$(3)-ui-exec \
check-stage$(1)-T-$(2)-H-$(3)-doc-exec \
+ check-stage$(1)-T-$(2)-H-$(3)-doc-error-index-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec
ifndef CFG_DISABLE_CODEGEN_TESTS
fn search<P: AsRef<Path>>
(file_path: P, city: &str)
- -> Result<Vec<PopulationCount>, Box<Error+Send+Sync>> {
+ -> Result<Vec<PopulationCount>, Box<Error>> {
let mut found = vec![];
let file = try!(File::open(file_path));
let mut rdr = csv::Reader::from_reader(file);
`Result<T, E>`, the `try!` macro will return early from the function if an
error occurs.
-There is one big gotcha in this code: we used `Box<Error + Send + Sync>`
-instead of `Box<Error>`. We did this so we could convert a plain string to an
-error type. We need these extra bounds so that we can use the
-[corresponding `From`
-impls](../std/convert/trait.From.html):
+At the end of `search` we also convert a plain string to an error type
+by using the [corresponding `From` impls](../std/convert/trait.From.html):
```rust,ignore
// We are making use of this impl in the code above, since we call `From::from`
// on a `&'static str`.
-impl<'a, 'b> From<&'b str> for Box<Error + Send + Sync + 'a>
+impl<'a> From<&'a str> for Box<Error>
// But this is also useful when you need to allocate a new string for an
// error message, usually with `format!`.
-impl From<String> for Box<Error + Send + Sync>
+impl From<String> for Box<Error>
```
Since `search` now returns a `Result<T, E>`, `main` should use case analysis
fn search<P: AsRef<Path>>
(file_path: &Option<P>, city: &str)
- -> Result<Vec<PopulationCount>, Box<Error+Send+Sync>> {
+ -> Result<Vec<PopulationCount>, Box<Error>> {
let mut found = vec![];
let input: Box<io::Read> = match *file_path {
None => Box::new(io::stdin()),
`unwrap`. Be warned: if it winds up in someone else's hands, don't be
surprised if they are agitated by poor error messages!
* If you're writing a quick 'n' dirty program and feel ashamed about panicking
- anyway, then use either a `String` or a `Box<Error + Send + Sync>` for your
- error type (the `Box<Error + Send + Sync>` type is because of the
- [available `From` impls](../std/convert/trait.From.html)).
+ anyway, then use either a `String` or a `Box<Error>` for your
+ error type.
* Otherwise, in a program, define your own error types with appropriate
[`From`](../std/convert/trait.From.html)
and
We now loop forever with `loop` and use `break` to break out early. An explicit `return` statement will also terminate the loop early.
-`continue` is similar, but instead of ending the loop, goes to the next
+`continue` is similar, but instead of ending the loop, it goes to the next
iteration. This will only print the odd numbers:
```rust
> `#![no_std]`](using-rust-without-the-standard-library.html)
Obviously there's more to life than just libraries: one can use
-`#[no_std]` with an executable, controlling the entry point is
-possible in two ways: the `#[start]` attribute, or overriding the
-default shim for the C `main` function with your own.
+`#![no_std]` with an executable.
+
+### Using libc
+
+In order to build a `#![no_std]` executable, we will need libc as a dependency. We can
+specify this in our `Cargo.toml` file:
+
+```toml
+[dependencies]
+libc = { version = "0.2.11", default-features = false }
+```
+
+Note that the default features have been disabled. This is a critical step:
+**the default features of libc include the standard library and so must be
+disabled.**
+
+### Writing an executable without stdlib
+
+Controlling the entry point is possible in two ways: the `#[start]` attribute,
+or overriding the default shim for the C `main` function with your own.
The function marked `#[start]` is passed the command line parameters
in the same format as C:
# // fn main() {} tricked you, rustdoc!
```
-
The compiler currently makes a few assumptions about symbols which are available
in the executable to call. Normally these functions are provided by the standard
library, but without it you must define your own.
useful for allowing safe, efficient access to a portion of an array without
copying. For example, you might want to reference only one line of a file read
into memory. By nature, a slice is not created directly, but from an existing
-variable binding. Slices have a defined length, can be mutable or immutable.
+variable binding. Slices have a defined length, and can be mutable or immutable.
Internally, slices are represented as a pointer to the beginning of the data
and a length.
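A minimal sketch of this: a slice borrowed from an existing `Vec` binding, carrying only a pointer into the data and a length, with no copying:

```rust
fn main() {
    // Stand-in for lines of a file read into memory.
    let lines = vec!["first", "second", "third", "fourth"];

    // Borrow a sub-slice without copying; it points into `lines`
    // and records how many elements it covers.
    let middle: &[&str] = &lines[1..3];

    assert_eq!(middle.len(), 2);
    assert_eq!(middle[0], "second");
}
```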
fn sum_vec(v: &Vec<i32>) -> i32 {
return v.iter().fold(0, |a, &b| a + b);
}
- // Borrow two vectors and and sum them.
+ // Borrow two vectors and sum them.
    // This kind of borrowing does not allow mutation of the borrowed values.
fn foo(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 {
// do stuff with v1 and v2
functions defined in Rust. The Rust compiler automatically translates between
the Rust ABI and the foreign ABI.
-A number of [attributes](#attributes) control the behavior of external blocks.
+A number of [attributes](#ffi-attributes) control the behavior of external blocks.
By default external blocks assume that the library they are calling uses the
standard C "cdecl" ABI. Other ABIs may be specified using an `abi` string, as
jemalloc.parent().unwrap().display());
let stem = jemalloc.file_stem().unwrap().to_str().unwrap();
let name = jemalloc.file_name().unwrap().to_str().unwrap();
- let kind = if name.ends_with(".a") {"static"} else {"dylib"};
+ let kind = if name.ends_with(".a") {
+ "static"
+ } else {
+ "dylib"
+ };
println!("cargo:rustc-link-lib={}={}", kind, &stem[3..]);
- return
+ return;
}
let compiler = gcc::Config::new().get_compiler();
let ar = build_helper::cc2ar(compiler.path(), &target);
- let cflags = compiler.args().iter().map(|s| s.to_str().unwrap())
- .collect::<Vec<_>>().join(" ");
+ let cflags = compiler.args()
+ .iter()
+ .map(|s| s.to_str().unwrap())
+ .collect::<Vec<_>>()
+ .join(" ");
let mut stack = src_dir.join("../jemalloc")
- .read_dir().unwrap()
+ .read_dir()
+ .unwrap()
.map(|e| e.unwrap())
.collect::<Vec<_>>();
while let Some(entry) = stack.pop() {
}
let mut cmd = Command::new("sh");
- cmd.arg(src_dir.join("../jemalloc/configure").to_str().unwrap()
+ cmd.arg(src_dir.join("../jemalloc/configure")
+ .to_str()
+ .unwrap()
.replace("C:\\", "/c/")
.replace("\\", "/"))
.current_dir(&build_dir)
run(&mut cmd);
run(Command::new("make")
- .current_dir(&build_dir)
- .arg("build_lib_static")
- .arg("-j").arg(env::var("NUM_JOBS").unwrap()));
+ .current_dir(&build_dir)
+ .arg("build_lib_static")
+ .arg("-j")
+ .arg(env::var("NUM_JOBS").unwrap()));
if target.contains("windows") {
println!("cargo:rustc-link-lib=static=jemalloc");
not(target_env = "musl")),
link(name = "pthread"))]
#[cfg(not(cargobuild))]
-extern {}
+extern "C" {}
// Note that the symbols here are prefixed by default on OSX and Windows (we
// don't explicitly request it), and on Android and DragonFly we explicitly
// request it as unprefixing causes segfaults (mismatches in allocators).
-extern {
+extern "C" {
#[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
target_os = "dragonfly", target_os = "windows"),
link_name = "je_mallocx")]
// are available.
#[no_mangle]
#[cfg(target_os = "android")]
-pub extern fn pthread_atfork(_prefork: *mut u8,
- _postfork_parent: *mut u8,
- _postfork_child: *mut u8) -> i32 {
+pub extern "C" fn pthread_atfork(_prefork: *mut u8,
+ _postfork_parent: *mut u8,
+ _postfork_child: *mut u8)
+ -> i32 {
0
}
use core::{fmt, intrinsics, mem, ptr};
use borrow::Borrow;
-use Bound::{self, Included, Excluded, Unbounded};
+use Bound::{self, Excluded, Included, Unbounded};
-use super::node::{self, NodeRef, Handle, marker};
+use super::node::{self, Handle, NodeRef, marker};
use super::search;
use super::node::InsertResult::*;
/// // would be `BTreeMap<&str, &str>` in this example).
/// let mut movie_reviews = BTreeMap::new();
///
-/// // review some books.
+/// // review some movies.
/// movie_reviews.insert("Office Space", "Deals with real issues in the workplace.");
/// movie_reviews.insert("Pulp Fiction", "Masterpiece.");
/// movie_reviews.insert("The Godfather", "Very enjoyable.");
#[stable(feature = "rust1", since = "1.0.0")]
pub struct BTreeMap<K, V> {
root: node::Root<K, V>,
- length: usize
+ length: usize,
}
impl<K, V> Drop for BTreeMap<K, V> {
#[unsafe_destructor_blind_to_params]
fn drop(&mut self) {
unsafe {
- for _ in ptr::read(self).into_iter() { }
+ for _ in ptr::read(self).into_iter() {
+ }
}
}
}
impl<K: Clone, V: Clone> Clone for BTreeMap<K, V> {
fn clone(&self) -> BTreeMap<K, V> {
- fn clone_subtree<K: Clone, V: Clone>(
- node: node::NodeRef<marker::Immut, K, V, marker::LeafOrInternal>)
- -> BTreeMap<K, V> {
+ fn clone_subtree<K: Clone, V: Clone>(node: node::NodeRef<marker::Immut,
+ K,
+ V,
+ marker::LeafOrInternal>)
+ -> BTreeMap<K, V> {
match node.force() {
Leaf(leaf) => {
let mut out_tree = BTreeMap {
root: node::Root::new_leaf(),
- length: 0
+ length: 0,
};
{
let mut out_node = match out_tree.root.as_mut().force() {
Leaf(leaf) => leaf,
- Internal(_) => unreachable!()
+ Internal(_) => unreachable!(),
};
let mut in_edge = leaf.first_edge();
}
out_tree
- },
+ }
Internal(internal) => {
let mut out_tree = clone_subtree(internal.first_edge().descend());
fn get(&self, key: &Q) -> Option<&K> {
match search::search_tree(self.root.as_ref(), key) {
Found(handle) => Some(handle.into_kv().0),
- GoDown(_) => None
+ GoDown(_) => None,
}
}
match search::search_tree(self.root.as_mut(), key) {
Found(handle) => {
Some(OccupiedEntry {
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- }.remove_kv().0)
- },
- GoDown(_) => None
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ }
+ .remove_kv()
+ .0)
+ }
+ GoDown(_) => None,
}
}
handle: handle,
length: &mut self.length,
_marker: PhantomData,
- }.insert(());
+ }
+ .insert(());
None
}
}
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Iter<'a, K: 'a, V: 'a> {
range: Range<'a, K, V>,
- length: usize
+ length: usize,
}
/// A mutable iterator over a BTreeMap's entries.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IterMut<'a, K: 'a, V: 'a> {
range: RangeMut<'a, K, V>,
- length: usize
+ length: usize,
}
/// An owning iterator over a BTreeMap's entries.
pub struct IntoIter<K, V> {
front: Handle<NodeRef<marker::Owned, K, V, marker::Leaf>, marker::Edge>,
back: Handle<NodeRef<marker::Owned, K, V, marker::Leaf>, marker::Edge>,
- length: usize
+ length: usize,
}
/// An iterator over a BTreeMap's keys.
/// An iterator over a sub-range of BTreeMap's entries.
pub struct Range<'a, K: 'a, V: 'a> {
front: Handle<NodeRef<marker::Immut<'a>, K, V, marker::Leaf>, marker::Edge>,
- back: Handle<NodeRef<marker::Immut<'a>, K, V, marker::Leaf>, marker::Edge>
+ back: Handle<NodeRef<marker::Immut<'a>, K, V, marker::Leaf>, marker::Edge>,
}
/// A mutable iterator over a sub-range of BTreeMap's entries.
pub enum Entry<'a, K: 'a, V: 'a> {
/// A vacant Entry
#[stable(feature = "rust1", since = "1.0.0")]
- Vacant(
- #[stable(feature = "rust1", since = "1.0.0")] VacantEntry<'a, K, V>
- ),
+ Vacant(#[stable(feature = "rust1", since = "1.0.0")]
+ VacantEntry<'a, K, V>),
/// An occupied Entry
#[stable(feature = "rust1", since = "1.0.0")]
- Occupied(
- #[stable(feature = "rust1", since = "1.0.0")] OccupiedEntry<'a, K, V>
- ),
+ Occupied(#[stable(feature = "rust1", since = "1.0.0")]
+ OccupiedEntry<'a, K, V>),
}
/// A vacant Entry.
/// An occupied Entry.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct OccupiedEntry<'a, K: 'a, V: 'a> {
- handle: Handle<NodeRef<
- marker::Mut<'a>,
- K, V,
- marker::LeafOrInternal
- >, marker::KV>,
+ handle: Handle<NodeRef<marker::Mut<'a>, K, V, marker::LeafOrInternal>, marker::KV>,
length: &'a mut usize,
}
// An iterator for merging two sorted sequences into one
-struct MergeIter<K, V, I: Iterator<Item=(K, V)>> {
+struct MergeIter<K, V, I: Iterator<Item = (K, V)>> {
left: Peekable<I>,
right: Peekable<I>,
}
pub fn new() -> BTreeMap<K, V> {
BTreeMap {
root: node::Root::new_leaf(),
- length: 0
+ length: 0,
}
}
/// assert_eq!(map.get(&2), None);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn get<Q: ?Sized>(&self, key: &Q) -> Option<&V> where K: Borrow<Q>, Q: Ord {
+ pub fn get<Q: ?Sized>(&self, key: &Q) -> Option<&V>
+ where K: Borrow<Q>,
+ Q: Ord
+ {
match search::search_tree(self.root.as_ref(), key) {
Found(handle) => Some(handle.into_kv().1),
- GoDown(_) => None
+ GoDown(_) => None,
}
}
/// assert_eq!(map.contains_key(&2), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn contains_key<Q: ?Sized>(&self, key: &Q) -> bool where K: Borrow<Q>, Q: Ord {
+ pub fn contains_key<Q: ?Sized>(&self, key: &Q) -> bool
+ where K: Borrow<Q>,
+ Q: Ord
+ {
self.get(key).is_some()
}
/// ```
    // See `get` for implementation notes; this is basically a copy-paste with `mut`s added
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn get_mut<Q: ?Sized>(&mut self, key: &Q) -> Option<&mut V> where K: Borrow<Q>, Q: Ord {
+ pub fn get_mut<Q: ?Sized>(&mut self, key: &Q) -> Option<&mut V>
+ where K: Borrow<Q>,
+ Q: Ord
+ {
match search::search_tree(self.root.as_mut(), key) {
Found(handle) => Some(handle.into_kv_mut().1),
- GoDown(_) => None
+ GoDown(_) => None,
}
}
/// assert_eq!(map.remove(&1), None);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- pub fn remove<Q: ?Sized>(&mut self, key: &Q) -> Option<V> where K: Borrow<Q>, Q: Ord {
+ pub fn remove<Q: ?Sized>(&mut self, key: &Q) -> Option<V>
+ where K: Borrow<Q>,
+ Q: Ord
+ {
match search::search_tree(self.root.as_mut(), key) {
Found(handle) => {
Some(OccupiedEntry {
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- }.remove())
- },
- GoDown(_) => None
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ }
+ .remove())
+ }
+ GoDown(_) => None,
}
}
/// assert_eq!(a[&5], "f");
/// ```
#[unstable(feature = "btree_append", reason = "recently added as part of collections reform 2",
- issue = "19986")]
+ issue = "34152")]
pub fn append(&mut self, other: &mut Self) {
// Do we have to append anything at all?
if other.len() == 0 {
min: Bound<&Min>,
max: Bound<&Max>)
-> Range<K, V>
- where K: Borrow<Min> + Borrow<Max>,
+ where K: Borrow<Min> + Borrow<Max>
{
let front = match min {
- Included(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => first_leaf_edge(self.root.as_ref())
+ Included(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => first_leaf_edge(self.root.as_ref()),
};
let back = match max {
- Included(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(self.root.as_ref(), key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => last_leaf_edge(self.root.as_ref())
+ Included(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(self.root.as_ref(), key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => last_leaf_edge(self.root.as_ref()),
};
Range {
front: front,
- back: back
+ back: back,
}
}
min: Bound<&Min>,
max: Bound<&Max>)
-> RangeMut<K, V>
- where K: Borrow<Min> + Borrow<Max>,
+ where K: Borrow<Min> + Borrow<Max>
{
let root1 = self.root.as_mut();
let root2 = unsafe { ptr::read(&root1) };
let front = match min {
- Included(key) => match search::search_tree(root1, key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(root1, key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => first_leaf_edge(root1)
+ Included(key) => {
+ match search::search_tree(root1, key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(root1, key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => first_leaf_edge(root1),
};
let back = match max {
- Included(key) => match search::search_tree(root2, key) {
- Found(kv_handle) => match kv_handle.right_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => first_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Excluded(key) => match search::search_tree(root2, key) {
- Found(kv_handle) => match kv_handle.left_edge().force() {
- Leaf(bottom) => bottom,
- Internal(internal) => last_leaf_edge(internal.descend())
- },
- GoDown(bottom) => bottom
- },
- Unbounded => last_leaf_edge(root2)
+ Included(key) => {
+ match search::search_tree(root2, key) {
+ Found(kv_handle) => {
+ match kv_handle.right_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => first_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Excluded(key) => {
+ match search::search_tree(root2, key) {
+ Found(kv_handle) => {
+ match kv_handle.left_edge().force() {
+ Leaf(bottom) => bottom,
+ Internal(internal) => last_leaf_edge(internal.descend()),
+ }
+ }
+ GoDown(bottom) => bottom,
+ }
+ }
+ Unbounded => last_leaf_edge(root2),
};
RangeMut {
front: front,
back: back,
- _marker: PhantomData
+ _marker: PhantomData,
}
}
#[stable(feature = "rust1", since = "1.0.0")]
pub fn entry(&mut self, key: K) -> Entry<K, V> {
match search::search_tree(self.root.as_mut(), &key) {
- Found(handle) => Occupied(OccupiedEntry {
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- }),
- GoDown(handle) => Vacant(VacantEntry {
- key: key,
- handle: handle,
- length: &mut self.length,
- _marker: PhantomData,
- })
+ Found(handle) => {
+ Occupied(OccupiedEntry {
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ })
+ }
+ GoDown(handle) => {
+ Vacant(VacantEntry {
+ key: key,
+ handle: handle,
+ length: &mut self.length,
+ _marker: PhantomData,
+ })
+ }
}
}
- fn from_sorted_iter<I: Iterator<Item=(K, V)>>(&mut self, iter: I) {
+ fn from_sorted_iter<I: Iterator<Item = (K, V)>>(&mut self, iter: I) {
let mut cur_node = last_leaf_edge(self.root.as_mut()).into_node();
// Iterate through all key-value pairs, pushing them into nodes at the right level.
for (key, value) in iter {
// Go up again.
test_node = parent.forget_type();
}
- },
+ }
Err(node) => {
// We are at the top, create a new root node and push there.
open_node = node.into_root_mut().push_level();
break;
- },
+ }
}
}
/// ```
#[unstable(feature = "btree_split_off",
reason = "recently added as part of collections reform 2",
- issue = "19986")]
- pub fn split_off<Q: ?Sized + Ord>(&mut self, key: &Q) -> Self where K: Borrow<Q> {
+ issue = "34152")]
+ pub fn split_off<Q: ?Sized + Ord>(&mut self, key: &Q) -> Self
+ where K: Borrow<Q>
+ {
if self.is_empty() {
return Self::new();
}
let mut split_edge = match search::search_node(left_node, key) {
// key is going to the right tree
Found(handle) => handle.left_edge(),
- GoDown(handle) => handle
+ GoDown(handle) => handle,
};
split_edge.move_suffix(&mut right_node);
left_node = edge.descend();
right_node = node.first_edge().descend();
}
- (Leaf(_), Leaf(_)) => { break; },
- _ => { unreachable!(); }
+ (Leaf(_), Leaf(_)) => {
+ break;
+ }
+ _ => {
+ unreachable!();
+ }
}
}
}
loop {
res += dfs(edge.reborrow().descend());
match edge.right_kv() {
- Ok(right_kv) => { edge = right_kv.right_edge(); },
- Err(_) => { break; }
+ Ok(right_kv) => {
+ edge = right_kv.right_edge();
+ }
+ Err(_) => {
+ break;
+ }
}
}
}
}
impl<'a, K: 'a, V: 'a> ExactSizeIterator for Iter<'a, K, V> {
- fn len(&self) -> usize { self.length }
+ fn len(&self) -> usize {
+ self.length
+ }
}
impl<'a, K, V> Clone for Iter<'a, K, V> {
fn clone(&self) -> Iter<'a, K, V> {
Iter {
range: self.range.clone(),
- length: self.length
+ length: self.length,
}
}
}
}
impl<'a, K: 'a, V: 'a> ExactSizeIterator for IterMut<'a, K, V> {
- fn len(&self) -> usize { self.length }
+ fn len(&self) -> usize {
+ self.length
+ }
}
impl<K, V> IntoIterator for BTreeMap<K, V> {
IntoIter {
front: first_leaf_edge(root1),
back: last_leaf_edge(root2),
- length: len
+ length: len,
}
}
}
impl<K, V> Drop for IntoIter<K, V> {
fn drop(&mut self) {
- for _ in &mut *self { }
+ for _ in &mut *self {
+ }
unsafe {
let leaf_node = ptr::read(&self.front).into_node();
if let Some(first_parent) = leaf_node.deallocate_and_ascend() {
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.front = kv.right_edge();
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
unwrap_unchecked(last_edge.into_node().deallocate_and_ascend())
- }
+ },
};
loop {
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.front = first_leaf_edge(kv.right_edge().descend());
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
cur_handle = unwrap_unchecked(last_edge.into_node().deallocate_and_ascend());
- }
+ },
}
}
}
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.back = kv.left_edge();
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
unwrap_unchecked(last_edge.into_node().deallocate_and_ascend())
- }
+ },
};
loop {
let v = unsafe { ptr::read(kv.reborrow().into_kv().1) };
self.back = last_leaf_edge(kv.left_edge().descend());
return Some((k, v));
- },
+ }
Err(last_edge) => unsafe {
cur_handle = unwrap_unchecked(last_edge.into_node().deallocate_and_ascend());
- }
+ },
}
}
}
}
impl<K, V> ExactSizeIterator for IntoIter<K, V> {
- fn len(&self) -> usize { self.length }
+ fn len(&self) -> usize {
+ self.length
+ }
}
impl<'a, K, V> Iterator for Keys<'a, K, V> {
impl<'a, K, V> Clone for Keys<'a, K, V> {
fn clone(&self) -> Keys<'a, K, V> {
- Keys {
- inner: self.inner.clone()
- }
+ Keys { inner: self.inner.clone() }
}
}
impl<'a, K, V> Clone for Values<'a, K, V> {
fn clone(&self) -> Values<'a, K, V> {
- Values {
- inner: self.inner.clone()
- }
+ Values { inner: self.inner.clone() }
}
}
let ret = kv.into_kv();
self.front = kv.right_edge();
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let ret = kv.into_kv();
self.front = first_leaf_edge(kv.right_edge().descend());
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
let ret = kv.into_kv();
self.back = kv.left_edge();
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let ret = kv.into_kv();
self.back = last_leaf_edge(kv.left_edge().descend());
return ret;
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
fn clone(&self) -> Range<'a, K, V> {
Range {
front: self.front,
- back: self.back
+ back: self.back,
}
}
}
if self.front == self.back {
None
} else {
- unsafe { Some (self.next_unchecked()) }
+ unsafe { Some(self.next_unchecked()) }
}
}
}
let (k, v) = ptr::read(&kv).into_kv_mut();
self.front = kv.right_edge();
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let (k, v) = ptr::read(&kv).into_kv_mut();
self.front = first_leaf_edge(kv.right_edge().descend());
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
let (k, v) = ptr::read(&kv).into_kv_mut();
self.back = kv.left_edge();
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
unwrap_unchecked(next_level)
let (k, v) = ptr::read(&kv).into_kv_mut();
self.back = last_leaf_edge(kv.left_edge().descend());
return (k, v);
- },
+ }
Err(last_edge) => {
let next_level = last_edge.into_node().ascend().ok();
cur_handle = unwrap_unchecked(next_level);
}
impl<K: Ord, V> FromIterator<(K, V)> for BTreeMap<K, V> {
- fn from_iter<T: IntoIterator<Item=(K, V)>>(iter: T) -> BTreeMap<K, V> {
+ fn from_iter<T: IntoIterator<Item = (K, V)>>(iter: T) -> BTreeMap<K, V> {
let mut map = BTreeMap::new();
map.extend(iter);
map
impl<K: Ord, V> Extend<(K, V)> for BTreeMap<K, V> {
#[inline]
- fn extend<T: IntoIterator<Item=(K, V)>>(&mut self, iter: T) {
+ fn extend<T: IntoIterator<Item = (K, V)>>(&mut self, iter: T) {
for (k, v) in iter {
self.insert(k, v);
}
}
impl<'a, K: Ord + Copy, V: Copy> Extend<(&'a K, &'a V)> for BTreeMap<K, V> {
- fn extend<I: IntoIterator<Item=(&'a K, &'a V)>>(&mut self, iter: I) {
+ fn extend<I: IntoIterator<Item = (&'a K, &'a V)>>(&mut self, iter: I) {
self.extend(iter.into_iter().map(|(&key, &value)| (key, value)));
}
}
impl<K: PartialEq, V: PartialEq> PartialEq for BTreeMap<K, V> {
fn eq(&self, other: &BTreeMap<K, V>) -> bool {
- self.len() == other.len() &&
- self.iter().zip(other).all(|(a, b)| a == b)
+ self.len() == other.len() && self.iter().zip(other).all(|(a, b)| a == b)
}
}
}
impl<'a, K: Ord, Q: ?Sized, V> Index<&'a Q> for BTreeMap<K, V>
- where K: Borrow<Q>, Q: Ord
+ where K: Borrow<Q>,
+ Q: Ord
{
type Output = V;
}
}
-fn first_leaf_edge<BorrowType, K, V>(
- mut node: NodeRef<BorrowType,
- K, V,
- marker::LeafOrInternal>
- ) -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
+fn first_leaf_edge<BorrowType, K, V>
+ (mut node: NodeRef<BorrowType, K, V, marker::LeafOrInternal>)
+ -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
loop {
match node.force() {
Leaf(leaf) => return leaf.first_edge(),
}
}
-fn last_leaf_edge<BorrowType, K, V>(
- mut node: NodeRef<BorrowType,
- K, V,
- marker::LeafOrInternal>
- ) -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
+fn last_leaf_edge<BorrowType, K, V>
+ (mut node: NodeRef<BorrowType, K, V, marker::LeafOrInternal>)
+ -> Handle<NodeRef<BorrowType, K, V, marker::Leaf>, marker::Edge> {
loop {
match node.force() {
Leaf(leaf) => return leaf.last_edge(),
Iter {
range: Range {
front: first_leaf_edge(self.root.as_ref()),
- back: last_leaf_edge(self.root.as_ref())
+ back: last_leaf_edge(self.root.as_ref()),
},
- length: self.length
+ length: self.length,
}
}
back: last_leaf_edge(root2),
_marker: PhantomData,
},
- length: self.length
+ length: self.length,
}
}
&self.key
}
+ /// Take ownership of the key.
+ #[unstable(feature = "map_entry_recover_keys", issue = "34285")]
+ pub fn into_key(self) -> K {
+ self.key
+ }
+
/// Sets the value of the entry with the VacantEntry's key,
/// and returns a mutable reference to it.
#[stable(feature = "rust1", since = "1.0.0")]
loop {
match cur_parent {
- Ok(parent) => match parent.insert(ins_k, ins_v, ins_edge) {
- Fit(_) => return unsafe { &mut *out_ptr },
- Split(left, k, v, right) => {
- ins_k = k;
- ins_v = v;
- ins_edge = right;
- cur_parent = left.ascend().map_err(|n| n.into_root_mut());
+ Ok(parent) => {
+ match parent.insert(ins_k, ins_v, ins_edge) {
+ Fit(_) => return unsafe { &mut *out_ptr },
+ Split(left, k, v, right) => {
+ ins_k = k;
+ ins_v = v;
+ ins_edge = right;
+ cur_parent = left.ascend().map_err(|n| n.into_root_mut());
+ }
}
- },
+ }
Err(root) => {
root.push_level().push(ins_k, ins_v, ins_edge);
return unsafe { &mut *out_ptr };
self.handle.reborrow().into_kv().0
}
+ /// Take ownership of the key and value from the map.
+ #[unstable(feature = "map_entry_recover_keys", issue = "34285")]
+ pub fn remove_pair(self) -> (K, V) {
+ self.remove_kv()
+ }
+
/// Gets a reference to the value in the entry.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn get(&self) -> &V {
Leaf(leaf) => {
let (hole, old_key, old_val) = leaf.remove();
(hole.into_node(), old_key, old_val)
- },
+ }
Internal(mut internal) => {
let key_loc = internal.kv_mut().0 as *mut K;
let val_loc = internal.kv_mut().1 as *mut V;
let (hole, key, val) = to_remove.remove();
- let old_key = unsafe {
- mem::replace(&mut *key_loc, key)
- };
- let old_val = unsafe {
- mem::replace(&mut *val_loc, val)
- };
+ let old_key = unsafe { mem::replace(&mut *key_loc, key) };
+ let old_val = unsafe { mem::replace(&mut *val_loc, val) };
(hole.into_node(), old_key, old_val)
}
match handle_underfull_node(cur_node) {
AtRoot => break,
EmptyParent(_) => unreachable!(),
- Merged(parent) => if parent.len() == 0 {
- // We must be at the root
- parent.into_root_mut().pop_level();
- break;
- } else {
- cur_node = parent.forget_type();
- },
- Stole(_) => break
+ Merged(parent) => {
+ if parent.len() == 0 {
+ // We must be at the root
+ parent.into_root_mut().pop_level();
+ break;
+ } else {
+ cur_node = parent.forget_type();
+ }
+ }
+ Stole(_) => break,
}
}
AtRoot,
EmptyParent(NodeRef<marker::Mut<'a>, K, V, marker::Internal>),
Merged(NodeRef<marker::Mut<'a>, K, V, marker::Internal>),
- Stole(NodeRef<marker::Mut<'a>, K, V, marker::Internal>)
+ Stole(NodeRef<marker::Mut<'a>, K, V, marker::Internal>),
}
-fn handle_underfull_node<'a, K, V>(node: NodeRef<marker::Mut<'a>,
- K, V,
- marker::LeafOrInternal>)
- -> UnderflowResult<'a, K, V> {
+fn handle_underfull_node<'a, K, V>(node: NodeRef<marker::Mut<'a>, K, V, marker::LeafOrInternal>)
+ -> UnderflowResult<'a, K, V> {
let parent = if let Ok(parent) = node.ascend() {
parent
} else {
let (is_left, mut handle) = match parent.left_kv() {
Ok(left) => (true, left),
- Err(parent) => match parent.right_kv() {
- Ok(right) => (false, right),
- Err(parent) => {
- return EmptyParent(parent.into_node());
+ Err(parent) => {
+ match parent.right_kv() {
+ Ok(right) => (false, right),
+ Err(parent) => {
+ return EmptyParent(parent.into_node());
+ }
}
}
};
}
}
-impl<K: Ord, V, I: Iterator<Item=(K, V)>> Iterator for MergeIter<K, V, I> {
+impl<K: Ord, V, I: Iterator<Item = (K, V)>> Iterator for MergeIter<K, V, I> {
type Item = (K, V);
fn next(&mut self) -> Option<(K, V)> {
        // Check which element comes first and only advance the corresponding iterator.
// If two keys are equal, take the value from `right`.
match res {
- Ordering::Less => {
- self.left.next()
- },
- Ordering::Greater => {
- self.right.next()
- },
+ Ordering::Less => self.left.next(),
+ Ordering::Greater => self.right.next(),
Ordering::Equal => {
self.left.next();
self.right.next()
- },
+ }
}
}
}
/// assert!(a.contains(&5));
/// ```
#[unstable(feature = "btree_append", reason = "recently added as part of collections reform 2",
- issue = "19986")]
+ issue = "34152")]
pub fn append(&mut self, other: &mut Self) {
self.map.append(&mut other.map);
}
/// ```
#[unstable(feature = "btree_split_off",
reason = "recently added as part of collections reform 2",
- issue = "19986")]
+ issue = "34152")]
pub fn split_off<Q: ?Sized + Ord>(&mut self, key: &Q) -> Self where T: Borrow<Q> {
BTreeSet { map: self.map.split_off(key) }
}
fn test_concat_for_different_types() {
test_concat!("ab", vec![s("a"), s("b")]);
test_concat!("ab", vec!["a", "b"]);
- test_concat!("ab", vec!["a", "b"]);
- test_concat!("ab", vec![s("a"), s("b")]);
}
#[test]
#[test]
fn test_starts_with() {
- assert!(("".starts_with("")));
- assert!(("abc".starts_with("")));
- assert!(("abc".starts_with("a")));
- assert!((!"a".starts_with("abc")));
- assert!((!"".starts_with("abc")));
- assert!((!"ödd".starts_with("-")));
- assert!(("ödd".starts_with("öd")));
+ assert!("".starts_with(""));
+ assert!("abc".starts_with(""));
+ assert!("abc".starts_with("a"));
+ assert!(!"a".starts_with("abc"));
+ assert!(!"".starts_with("abc"));
+ assert!(!"ödd".starts_with("-"));
+ assert!("ödd".starts_with("öd"));
}
#[test]
fn test_ends_with() {
- assert!(("".ends_with("")));
- assert!(("abc".ends_with("")));
- assert!(("abc".ends_with("c")));
- assert!((!"a".ends_with("abc")));
- assert!((!"".ends_with("abc")));
- assert!((!"ddö".ends_with("-")));
- assert!(("ddö".ends_with("dö")));
+ assert!("".ends_with(""));
+ assert!("abc".ends_with(""));
+ assert!("abc".ends_with("c"));
+ assert!(!"a".ends_with("abc"));
+ assert!(!"".ends_with("abc"));
+ assert!(!"ddö".ends_with("-"));
+ assert!("ddö".ends_with("dö"));
}
#[test]
/// A type to emulate dynamic typing.
///
-/// Every type with no non-`'static` references implements `Any`.
+/// Most types implement `Any`. However, any type which contains a non-`'static` reference does not.
/// See the [module-level documentation][mod] for more details.
///
/// [mod]: index.html
/// assert_eq!(arr[1..3], [ 1,2 ]);
/// }
/// ```
-#[derive(Copy, Clone, PartialEq, Eq)]
+#[derive(Copy, Clone, PartialEq, Eq, Hash)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct RangeFull;
/// assert_eq!(arr[1..3], [ 1,2 ]); // Range
/// }
/// ```
-#[derive(Clone, PartialEq, Eq)]
+#[derive(Clone, PartialEq, Eq, Hash)] // not Copy -- see #27186
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Range<Idx> {
/// The lower bound of the range (inclusive).
/// assert_eq!(arr[1..3], [ 1,2 ]);
/// }
/// ```
-#[derive(Clone, PartialEq, Eq)]
+#[derive(Clone, PartialEq, Eq, Hash)] // not Copy -- see #27186
#[stable(feature = "rust1", since = "1.0.0")]
pub struct RangeFrom<Idx> {
/// The lower bound of the range (inclusive).
/// assert_eq!(arr[1..3], [ 1,2 ]);
/// }
/// ```
-#[derive(Copy, Clone, PartialEq, Eq)]
+#[derive(Copy, Clone, PartialEq, Eq, Hash)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct RangeTo<Idx> {
/// The upper bound of the range (exclusive).
/// assert_eq!(arr[1...2], [ 1,2 ]); // RangeInclusive
/// }
/// ```
-#[derive(Copy, Clone, PartialEq, Eq)]
+#[derive(Clone, PartialEq, Eq, Hash)] // not Copy -- see #27186
#[unstable(feature = "inclusive_range", reason = "recently added, follows RFC", issue = "28237")]
pub enum RangeInclusive<Idx> {
/// Empty range (iteration has finished)
/// assert_eq!(arr[1...2], [ 1,2 ]);
/// }
/// ```
-#[derive(Copy, Clone, PartialEq, Eq)]
+#[derive(Copy, Clone, PartialEq, Eq, Hash)]
#[unstable(feature = "inclusive_range", reason = "recently added, follows RFC", issue = "28237")]
pub struct RangeToInclusive<Idx> {
/// The upper bound of the range (inclusive)
/// Stores a value into the `bool` if the current value is the same as the `current` value.
///
/// The return value is a result indicating whether the new value was written and containing
- /// the previous value. On success this value is guaranteed to be equal to `new`.
+ /// the previous value. On success this value is guaranteed to be equal to `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory ordering of this
/// operation. The first describes the required ordering if the operation succeeds while the
/// Stores a value into the pointer if the current value is the same as the `current` value.
///
/// The return value is a result indicating whether the new value was written and containing
- /// the previous value. On success this value is guaranteed to be equal to `new`.
+ /// the previous value. On success this value is guaranteed to be equal to `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory ordering of this
/// operation. The first describes the required ordering if the operation succeeds while the
///
/// The return value is a result indicating whether the new value was written and
/// containing the previous value. On success this value is guaranteed to be equal to
- /// `new`.
+ /// `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory ordering of
/// this operation. The first describes the required ordering if the operation succeeds
extern crate libc;
-use libc::{c_void, size_t, c_int};
+use libc::{c_int, c_void, size_t};
use std::fmt;
use std::ops::Deref;
use std::ptr::Unique;
#[link(name = "miniz", kind = "static")]
#[cfg(not(cargobuild))]
-extern {}
+extern "C" {}
-extern {
+extern "C" {
/// Raw miniz compression function.
fn tdefl_compress_mem_to_heap(psrc_buf: *const c_void,
src_buf_len: size_t,
#[cfg(test)]
mod tests {
#![allow(deprecated)]
- use super::{inflate_bytes, deflate_bytes};
- use std::__rand::{thread_rng, Rng};
+ use super::{deflate_bytes, inflate_bytes};
+ use std::__rand::{Rng, thread_rng};
#[test]
fn test_flate_round_trip() {
```
Or in a generic context, an erroneous code example would look like:
+
```compile_fail
fn some_func<T>(foo: T) {
println!("{:?}", foo); // error: the trait `core::fmt::Debug` is not
still rejects the function: It must work with all possible input types. In
order to make this example compile, we need to restrict the generic type we're
accepting:
+
```
use std::fmt;
// struct WithoutDebug;
// some_func(WithoutDebug);
}
+```
Rust only looks at the signature of the called function, as such it must
already specify all requirements that will be used for every type parameter.
-```
-
"##,
E0281: r##"
struct Foo<T> {
foo: &'static T
}
+```
This will compile, because it has the constraint on the type parameter:
```
"##,
+E0453: r##"
+A lint check attribute was overruled by a `forbid` directive set as an
+attribute on an enclosing scope, or on the command line with the `-F` option.
+
+Example of erroneous code:
+
+```compile_fail
+#![forbid(non_snake_case)]
+
+#[allow(non_snake_case)]
+fn main() {
+ let MyNumber = 2; // error: allow(non_snake_case) overruled by outer
+ // forbid(non_snake_case)
+}
+```
+
+The `forbid` lint setting, like `deny`, turns the corresponding compiler
+warning into a hard error. Unlike `deny`, `forbid` prevents itself from being
+overridden by inner attributes.
+
+If you're sure you want to override the lint check, you can change `forbid` to
+`deny` (or use `-D` instead of `-F` if the `forbid` setting was given as a
+command-line option) to allow the inner lint check attribute:
+
+```
+#![deny(non_snake_case)]
+
+#[allow(non_snake_case)]
+fn main() {
+ let MyNumber = 2; // ok!
+}
+```
+
+Otherwise, edit the code to pass the lint check, and remove the overruled
+attribute:
+
+```
+#![forbid(non_snake_case)]
+
+fn main() {
+ let my_number = 2;
+}
+```
+"##,
+
E0496: r##"
A lifetime name is shadowing another lifetime name. Erroneous code example:
E0314, // closure outlives stack frame
E0315, // cannot invoke closure outside of its lifetime
E0316, // nested quantification of lifetimes
- E0453, // overruled by outer forbid
E0473, // dereference of reference outside its lifetime
E0474, // captured variable `..` does not outlive the enclosing closure
E0475, // index of slice outside its lifetime
use std::iter;
use syntax::ast::*;
use syntax::attr::{ThinAttributes, ThinAttributesExt};
-use syntax::ext::mtwt;
use syntax::ptr::P;
use syntax::codemap::{respan, Spanned, Span};
-use syntax::parse::token::{self, keywords};
+use syntax::parse::token;
use syntax::std_inject;
use syntax::visit::{self, Visitor};
result
}
- fn lower_ident(&mut self, ident: Ident) -> Name {
- if ident.name != keywords::Invalid.name() {
- mtwt::resolve(ident)
- } else {
- ident.name
- }
- }
-
fn lower_opt_sp_ident(&mut self, o_id: Option<Spanned<Ident>>) -> Option<Spanned<Name>> {
- o_id.map(|sp_ident| respan(sp_ident.span, self.lower_ident(sp_ident.node)))
+ o_id.map(|sp_ident| respan(sp_ident.span, sp_ident.node.name))
}
fn lower_attrs(&mut self, attrs: &Vec<Attribute>) -> hir::HirVec<Attribute> {
}
}
- fn lower_path_full(&mut self, p: &Path, rename: bool) -> hir::Path {
+ fn lower_path(&mut self, p: &Path) -> hir::Path {
hir::Path {
global: p.global,
segments: p.segments
.iter()
.map(|&PathSegment { identifier, ref parameters }| {
hir::PathSegment {
- name: if rename {
- self.lower_ident(identifier)
- } else {
- identifier.name
- },
+ name: identifier.name,
parameters: self.lower_path_parameters(parameters),
}
})
}
}
- fn lower_path(&mut self, p: &Path) -> hir::Path {
- self.lower_path_full(p, false)
- }
-
fn lower_path_parameters(&mut self, path_parameters: &PathParameters) -> hir::PathParameters {
match *path_parameters {
PathParameters::AngleBracketed(ref data) =>
// `None` can occur in body-less function signatures
None | Some(Def::Local(..)) => {
hir::PatKind::Binding(this.lower_binding_mode(binding_mode),
- respan(pth1.span,
- this.lower_ident(pth1.node)),
+ respan(pth1.span, pth1.node.name),
sub.as_ref().map(|x| this.lower_pat(x)))
}
_ => hir::PatKind::Path(hir::Path::from_name(pth1.span, pth1.node.name))
position: position,
}
});
- // Only local variables are renamed
- let rename = match self.resolver.get_resolution(e.id).map(|d| d.base_def) {
- Some(Def::Local(..)) | Some(Def::Upvar(..)) => true,
- _ => false,
- };
- hir::ExprPath(hir_qself, self.lower_path_full(path, rename))
+ hir::ExprPath(hir_qself, self.lower_path(path))
}
ExprKind::Break(opt_ident) => hir::ExprBreak(self.lower_opt_sp_ident(opt_ident)),
ExprKind::Again(opt_ident) => hir::ExprAgain(self.lower_opt_sp_ident(opt_ident)),
impl Arg {
pub fn to_self(&self) -> Option<ExplicitSelf> {
if let PatKind::Binding(BindByValue(mutbl), name, _) = self.pat.node {
- if name.node.unhygienize() == keywords::SelfValue.name() {
+ if name.node == keywords::SelfValue.name() {
return match self.ty.node {
TyInfer => Some(respan(self.pat.span, SelfKind::Value(mutbl))),
TyRptr(lt, MutTy{ref ty, mutbl}) if ty.node == TyInfer => {
pub fn is_self(&self) -> bool {
if let PatKind::Binding(_, name, _) = self.pat.node {
- name.node.unhygienize() == keywords::SelfValue.name()
+ name.node == keywords::SelfValue.name()
} else {
false
}
}
}
self.print_name(path1.node)?;
- match *sub {
- Some(ref p) => {
- word(&mut self.s, "@")?;
- self.print_pat(&p)?;
- }
- None => (),
+ if let Some(ref p) = *sub {
+ word(&mut self.s, "@")?;
+ self.print_pat(&p)?;
}
}
PatKind::TupleStruct(ref path, ref elts, ddpos) => {
Some(cm) => cm,
_ => return Ok(()),
};
- match self.next_comment() {
- Some(ref cmnt) => {
- if (*cmnt).style != comments::Trailing {
- return Ok(());
- }
- let span_line = cm.lookup_char_pos(span.hi);
- let comment_line = cm.lookup_char_pos((*cmnt).pos);
- let mut next = (*cmnt).pos + BytePos(1);
- match next_pos {
- None => (),
- Some(p) => next = p,
- }
- if span.hi < (*cmnt).pos && (*cmnt).pos < next &&
- span_line.line == comment_line.line {
- self.print_comment(cmnt)?;
- self.cur_cmnt_and_lit.cur_cmnt += 1;
- }
+ if let Some(ref cmnt) = self.next_comment() {
+ if (*cmnt).style != comments::Trailing {
+ return Ok(());
+ }
+ let span_line = cm.lookup_char_pos(span.hi);
+ let comment_line = cm.lookup_char_pos((*cmnt).pos);
+ let mut next = (*cmnt).pos + BytePos(1);
+ if let Some(p) = next_pos {
+ next = p;
+ }
+ if span.hi < (*cmnt).pos && (*cmnt).pos < next &&
+ span_line.line == comment_line.line {
+ self.print_comment(cmnt)?;
+ self.cur_cmnt_and_lit.cur_cmnt += 1;
}
- _ => (),
}
Ok(())
}
},
None => None
};
- if method_id_opt.is_some() {
- let method_id = method_id_opt.unwrap();
+ if let Some(method_id) = method_id_opt {
let parent = tcx.map.get_parent(method_id);
- match tcx.map.find(parent) {
- Some(node) => match node {
+ if let Some(node) = tcx.map.find(parent) {
+ match node {
ast_map::NodeItem(item) => match item.node {
hir::ItemImpl(_, _, ref gen, _, _, _) => {
taken.extend_from_slice(&gen.lifetimes);
_ => ()
},
_ => ()
- },
- None => ()
+ }
}
}
return taken;
span: codemap::DUMMY_SP,
name: name }
}
-
}
scanned.insert(id);
- match self.tcx.map.find(id) {
- Some(ref node) => {
- self.live_symbols.insert(id);
- self.visit_node(node);
- }
- None => (),
+ if let Some(ref node) = self.tcx.map.find(id) {
+ self.live_symbols.insert(id);
+ self.visit_node(node);
}
}
}
}
// Seed entry point
- match *tcx.sess.entry_fn.borrow() {
- Some((id, _)) => worklist.push(id),
- None => ()
+ if let Some((id, _)) = *tcx.sess.entry_fn.borrow() {
+ worklist.push(id);
}
// Seed implemented trait items
// method of a private type is used, but the type itself is never
// called directly.
let impl_items = self.tcx.impl_items.borrow();
- match self.tcx.inherent_impls.borrow().get(&self.tcx.map.local_def_id(id)) {
- None => (),
- Some(impl_list) => {
- for impl_did in impl_list.iter() {
- for item_did in impl_items.get(impl_did).unwrap().iter() {
- if let Some(item_node_id) =
- self.tcx.map.as_local_node_id(item_did.def_id()) {
- if self.live_symbols.contains(&item_node_id) {
- return true;
- }
+ if let Some(impl_list) =
+ self.tcx.inherent_impls.borrow().get(&self.tcx.map.local_def_id(id)) {
+ for impl_did in impl_list.iter() {
+ for item_did in impl_items.get(impl_did).unwrap().iter() {
+ if let Some(item_node_id) =
+ self.tcx.map.as_local_node_id(item_did.def_id()) {
+ if self.live_symbols.contains(&item_node_id) {
+ return true;
}
}
}
fn expression_label(ex: &hir::Expr) -> Option<(ast::Name, Span)> {
match ex.node {
hir::ExprWhile(_, _, Some(label)) |
- hir::ExprLoop(_, Some(label)) => Some((label.node.unhygienize(),
- label.span)),
+ hir::ExprLoop(_, Some(label)) => Some((label.node, label.span)),
_ => None,
}
}
#[cfg(test)]
#[allow(non_upper_case_globals)]
mod tests {
- use std::hash::{Hasher, Hash, SipHasher};
- use std::option::Option::{Some, None};
+ use std::hash::{Hash, Hasher, SipHasher};
+ use std::option::Option::{None, Some};
bitflags! {
#[doc = "> The first principle is that you must not fool yourself — and"]
let potentially_illegal_move =
check_and_get_illegal_move_origin(bccx, &move_info.cmt);
- match potentially_illegal_move {
- Some(illegal_move_origin) => {
- debug!("illegal_move_origin={:?}", illegal_move_origin);
- let error = MoveError::with_move_info(illegal_move_origin,
- move_info.span_path_opt);
- move_error_collector.add_error(error);
- return
- }
- None => ()
+ if let Some(illegal_move_origin) = potentially_illegal_move {
+ debug!("illegal_move_origin={:?}", illegal_move_origin);
+ let error = MoveError::with_move_info(illegal_move_origin,
+ move_info.span_path_opt);
+ move_error_collector.add_error(error);
+ return;
}
match opt_loan_path(&move_info.cmt) {
/// ELAB(drop location.2 [target=`c.unwind])
fn drop_ladder<'a>(&mut self,
c: &DropCtxt<'a, 'tcx>,
- fields: &[(Lvalue<'tcx>, Option<MovePathIndex>)])
+ fields: Vec<(Lvalue<'tcx>, Option<MovePathIndex>)>)
-> BasicBlock
{
debug!("drop_ladder({:?}, {:?})", c, fields);
+
+ let mut fields = fields;
+ fields.retain(|&(ref lvalue, _)| {
+ let ty = self.mir.lvalue_ty(self.tcx, lvalue).to_ty(self.tcx);
+ self.tcx.type_needs_drop_given_env(ty, self.param_env())
+ });
+
+ debug!("drop_ladder - fields needing drop: {:?}", fields);
+
let unwind_ladder = if c.is_cleanup {
None
} else {
Some(self.drop_halfladder(c, None, c.unwind.unwrap(), &fields, true))
};
- self.drop_halfladder(c, unwind_ladder, c.succ, fields, c.is_cleanup)
+ self.drop_halfladder(c, unwind_ladder, c.succ, &fields, c.is_cleanup)
.last().cloned().unwrap_or(c.succ)
}
{
debug!("open_drop_for_tuple({:?}, {:?})", c, tys);
- let fields: Vec<_> = tys.iter().enumerate().map(|(i, &ty)| {
+ let fields = tys.iter().enumerate().map(|(i, &ty)| {
(c.lvalue.clone().field(Field::new(i), ty),
super::move_path_children_matching(
&self.move_data().move_paths, c.path, |proj| match proj {
))
}).collect();
- self.drop_ladder(c, &fields)
+ self.drop_ladder(c, fields)
}
fn open_drop_for_box<'a>(&mut self, c: &DropCtxt<'a, 'tcx>, ty: Ty<'tcx>)
variant_path,
&adt.variants[variant_index],
substs);
- self.drop_ladder(c, &fields)
+ self.drop_ladder(c, fields)
} else {
// variant not found - drop the entire enum
if let None = *drop_block {
&adt.variants[0],
substs
);
- self.drop_ladder(c, &fields)
+ self.drop_ladder(c, fields)
}
_ => {
let variant_drops : Vec<BasicBlock> =
// as immutable
}
```
+
To fix this error, ensure that you don't have any other references to the
variable before trying to access it mutably:
+
```
fn bar(x: &mut i32) {}
fn foo(a: &mut i32) {
let ref y = a; // ok!
}
```
+
For more information on the Rust ownership system, take a look at
https://doc.rust-lang.org/stable/book/references-and-borrowing.html.
"##,
+E0503: r##"
+A value was used after it was mutably borrowed.
+
+Example of erroneous code:
+
+```compile_fail
+fn main() {
+ let mut value = 3;
+ // Create a mutable borrow of `value`. This borrow
+ // lives until the end of this function.
+ let _borrow = &mut value;
+ let _sum = value + 1; // error: cannot use `value` because
+ // it was mutably borrowed
+}
+```
+
+In this example, `value` is mutably borrowed by `_borrow` and cannot be
+used to calculate `_sum`, because doing so would violate Rust's borrowing
+rules.
+
+You can fix this error by limiting the scope of the borrow:
+
+```
+fn main() {
+ let mut value = 3;
+ // By creating a new block, you can limit the scope
+ // of the reference.
+ {
+ let _borrow = &mut value; // Use `_borrow` inside this block.
+ }
+ // The block has ended and with it the borrow.
+ // You can now use `value` again.
+ let _sum = value + 1;
+}
+```
+
+Or by cloning `value` before borrowing it:
+
+```
+fn main() {
+ let mut value = 3;
+ // We clone `value`, creating a copy.
+ let value_cloned = value.clone();
+ // The mutable borrow is a reference to `value` and
+ // not to `value_cloned`...
+ let _borrow = &mut value;
+ // ... which means we can still use `value_cloned`,
+ let _sum = value_cloned + 1;
+ // even though the borrow only ends here.
+}
+```
+
+You can find more information about borrowing in the Rust book:
+http://doc.rust-lang.org/stable/book/references-and-borrowing.html
+"##,
+
E0504: r##"
This error occurs when an attempt is made to move a borrowed variable into a
closure.
http://doc.rust-lang.org/stable/book/references-and-borrowing.html
"##,
+E0508: r##"
+A value was moved out of a non-copy fixed-size array.
+
+Example of erroneous code:
+
+```compile_fail
+struct NonCopy;
+
+fn main() {
+ let array = [NonCopy; 1];
+ let _value = array[0]; // error: cannot move out of type `[NonCopy; 1]`,
+ // a non-copy fixed-size array
+}
+```
+
+This code attempts to move the first element out of the array, which is not
+possible because `NonCopy` does not implement the `Copy` trait.
+
+Consider borrowing the element instead of moving it:
+
+```
+struct NonCopy;
+
+fn main() {
+ let array = [NonCopy; 1];
+ let _value = &array[0]; // Borrowing is allowed, unlike moving.
+}
+```
+
+Alternatively, if your type implements `Clone` and you need to own the value,
+consider borrowing and then cloning:
+
+```
+#[derive(Clone)]
+struct NonCopy;
+
+fn main() {
+ let array = [NonCopy; 1];
+ // Now you can clone the array element.
+ let _value = array[0].clone();
+}
+```
+"##,
+
E0509: r##"
This error occurs when an attempt is made to move out of a value whose type
implements the `Drop` trait.
register_diagnostics! {
E0385, // {} in an aliasable location
E0388, // {} in a static location
- E0503, // cannot use `..` because it was mutably borrowed
- E0508, // cannot move out of type `..`, a non-copy fixed-size array
E0524, // two closures require unique access to `..` at the same time
}
if let ty::TyEnum(edef, _) = pat_ty.sty {
if let Def::Local(..) = cx.tcx.expect_def(p.id) {
if edef.variants.iter().any(|variant|
- variant.name == name.node.unhygienize()
- && variant.kind() == VariantKind::Unit
+ variant.name == name.node && variant.kind() == VariantKind::Unit
) {
let ty_path = cx.tcx.item_path_str(edef.did);
let mut err = struct_span_warn!(cx.tcx.sess, p.span, E0170,
(&ty::TyInt(IntTy::I32), Infer(i)) => Ok(I32(i as i64 as i32)),
(&ty::TyInt(IntTy::I64), Infer(i)) => Ok(I64(i as i64)),
(&ty::TyInt(IntTy::Is), Infer(i)) => {
- match ConstIsize::new(i as i64, tcx.sess.target.int_type) {
- Ok(val) => Ok(Isize(val)),
- Err(_) => Ok(Isize(ConstIsize::Is32(i as i64 as i32))),
- }
+ Ok(Isize(ConstIsize::new_truncating(i as i64, tcx.sess.target.int_type)))
},
(&ty::TyInt(IntTy::I8), InferSigned(i)) => Ok(I8(i as i8)),
(&ty::TyInt(IntTy::I32), InferSigned(i)) => Ok(I32(i as i32)),
(&ty::TyInt(IntTy::I64), InferSigned(i)) => Ok(I64(i)),
(&ty::TyInt(IntTy::Is), InferSigned(i)) => {
- match ConstIsize::new(i, tcx.sess.target.int_type) {
- Ok(val) => Ok(Isize(val)),
- Err(_) => Ok(Isize(ConstIsize::Is32(i as i32))),
- }
+ Ok(Isize(ConstIsize::new_truncating(i, tcx.sess.target.int_type)))
},
(&ty::TyUint(UintTy::U8), Infer(i)) => Ok(U8(i as u8)),
(&ty::TyUint(UintTy::U32), Infer(i)) => Ok(U32(i as u32)),
(&ty::TyUint(UintTy::U64), Infer(i)) => Ok(U64(i)),
(&ty::TyUint(UintTy::Us), Infer(i)) => {
- match ConstUsize::new(i, tcx.sess.target.uint_type) {
- Ok(val) => Ok(Usize(val)),
- Err(_) => Ok(Usize(ConstUsize::Us32(i as u32))),
- }
+ Ok(Usize(ConstUsize::new_truncating(i, tcx.sess.target.uint_type)))
},
(&ty::TyUint(_), InferSigned(_)) => Err(IntermediateUnsignedNegative),
ty::TyInt(ast::IntTy::I32) => Ok(Integral(I32(v as i64 as i32))),
ty::TyInt(ast::IntTy::I64) => Ok(Integral(I64(v as i64))),
ty::TyInt(ast::IntTy::Is) => {
- match ConstIsize::new(v as i64, tcx.sess.target.int_type) {
- Ok(val) => Ok(Integral(Isize(val))),
- Err(_) => Ok(Integral(Isize(ConstIsize::Is32(v as i64 as i32)))),
- }
+ Ok(Integral(Isize(ConstIsize::new_truncating(v as i64, tcx.sess.target.int_type))))
},
ty::TyUint(ast::UintTy::U8) => Ok(Integral(U8(v as u8))),
ty::TyUint(ast::UintTy::U16) => Ok(Integral(U16(v as u16))),
ty::TyUint(ast::UintTy::U32) => Ok(Integral(U32(v as u32))),
ty::TyUint(ast::UintTy::U64) => Ok(Integral(U64(v))),
ty::TyUint(ast::UintTy::Us) => {
- match ConstUsize::new(v, tcx.sess.target.uint_type) {
- Ok(val) => Ok(Integral(Usize(val))),
- Err(_) => Ok(Integral(Usize(ConstUsize::Us32(v as u32)))),
- }
+ Ok(Integral(Usize(ConstUsize::new_truncating(v, tcx.sess.target.uint_type))))
},
ty::TyFloat(ast::FloatTy::F64) => match val.erase_type() {
Infer(u) => Ok(Float(F64(u as f64))),
(Is16(i), ast::IntTy::I16) => i as i64,
(Is32(i), ast::IntTy::I32) => i as i64,
(Is64(i), ast::IntTy::I64) => i,
- _ => panic!("got invalid isize size for target"),
+ _ => panic!("unable to convert self ({:?}) to target isize ({:?})",
+ self, target_int_ty),
}
}
pub fn new(i: i64, target_int_ty: ast::IntTy) -> Result<Self, ConstMathErr> {
_ => unreachable!(),
}
}
+ pub fn new_truncating(i: i64, target_int_ty: ast::IntTy) -> Self {
+ match target_int_ty {
+ ast::IntTy::I16 => Is16(i as i16),
+ ast::IntTy::I32 => Is32(i as i32),
+ ast::IntTy::I64 => Is64(i),
+ _ => unreachable!(),
+ }
+ }
}
(Us16(i), ast::UintTy::U16) => i as u64,
(Us32(i), ast::UintTy::U32) => i as u64,
(Us64(i), ast::UintTy::U64) => i,
- _ => panic!("got invalid usize size for target"),
+ _ => panic!("unable to convert self ({:?}) to target usize ({:?})",
+ self, target_uint_ty),
}
}
pub fn new(i: u64, target_uint_ty: ast::UintTy) -> Result<Self, ConstMathErr> {
_ => unreachable!(),
}
}
+ pub fn new_truncating(i: u64, target_uint_ty: ast::UintTy) -> Self {
+ match target_uint_ty {
+ ast::UintTy::U16 => Us16(i as u16),
+ ast::UintTy::U32 => Us32(i as u32),
+ ast::UintTy::U64 => Us64(i),
+ _ => unreachable!(),
+ }
+ }
}
[1, 10, 19, 62, 63, 64, 65, 66, 99]);
}
-#[test]
-fn bitvec_iter_works_2() {
- let mut bitvec = BitVector::new(300);
- bitvec.insert(1);
- bitvec.insert(10);
- bitvec.insert(19);
- bitvec.insert(62);
- bitvec.insert(66);
- bitvec.insert(99);
- bitvec.insert(299);
- assert_eq!(bitvec.iter().collect::<Vec<_>>(),
- [1, 10, 19, 62, 66, 99, 299]);
-
-}
#[test]
-fn bitvec_iter_works_3() {
+fn bitvec_iter_works_2() {
let mut bitvec = BitVector::new(319);
bitvec.insert(0);
bitvec.insert(127);
#[test]
fn test_sliced_tuples() {
- let t2 = (100i32, 101i32);
- assert_eq!(t2.as_slice(), &[100i32, 101i32]);
+ let t2 = (100, 101);
+ assert_eq!(t2.as_slice(), &[100, 101]);
- let t3 = (102i32, 103i32, 104i32);
- assert_eq!(t3.as_slice(), &[102i32, 103i32, 104i32]);
+ let t3 = (102, 103, 104);
+ assert_eq!(t3.as_slice(), &[102, 103, 104]);
- let t4 = (105i32, 106i32, 107i32, 108i32);
- assert_eq!(t4.as_slice(), &[105i32, 106i32, 107i32, 108i32]);
+ let t4 = (105, 106, 107, 108);
+ assert_eq!(t4.as_slice(), &[105, 106, 107, 108]);
+
+ let t5 = (109, 110, 111, 112, 113);
+ assert_eq!(t5.as_slice(), &[109, 110, 111, 112, 113]);
+
+ let t6 = (114, 115, 116, 117, 118, 119);
+ assert_eq!(t6.as_slice(), &[114, 115, 116, 117, 118, 119]);
+
+ let t7 = (120, 121, 122, 123, 124, 125, 126);
+ assert_eq!(t7.as_slice(), &[120, 121, 122, 123, 124, 125, 126]);
+
+ let t8 = (127, 128, 129, 130, 131, 132, 133, 134);
+ assert_eq!(t8.as_slice(), &[127, 128, 129, 130, 131, 132, 133, 134]);
- let t5 = (109i32, 110i32, 111i32, 112i32, 113i32);
- assert_eq!(t5.as_slice(), &[109i32, 110i32, 111i32, 112i32, 113i32]);
}
sess.track_errors(|| {
syntax::config::strip_unconfigured_items(sess.diagnostic(),
krate,
+ sess.opts.test,
&mut feature_gated_cfgs)
})
})?;
features: Some(&features),
recursion_limit: sess.recursion_limit.get(),
trace_mac: sess.opts.debugging_opts.trace_macros,
+ should_test: sess.opts.test,
};
let mut loader = macro_import::MacroLoader::new(sess, &cstore, crate_name);
let mut ecx = syntax::ext::base::ExtCtxt::new(&sess.parse_sess,
continue;
}
if let PatKind::Binding(_, ident, None) = fieldpat.node.pat.node {
- if ident.node.unhygienize() == fieldpat.node.name {
+ if ident.node == fieldpat.node.name {
cx.span_lint(NON_SHORTHAND_FIELD_PATTERNS, fieldpat.span,
&format!("the `{}:` in this pattern is redundant and can \
be removed", ident.node))
pub use self::Linkage::*;
pub use self::DLLStorageClassTypes::*;
+use std::str::FromStr;
use std::ffi::{CString, CStr};
use std::cell::RefCell;
use std::slice;
K_COFF,
}
+impl FromStr for ArchiveKind {
+ type Err = ();
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ match s {
+ "gnu" => Ok(ArchiveKind::K_GNU),
+ "mips64" => Ok(ArchiveKind::K_MIPS64),
+ "bsd" => Ok(ArchiveKind::K_BSD),
+ "coff" => Ok(ArchiveKind::K_COFF),
+ _ => Err(()),
+ }
+ }
+}
+
/// Represents the different LLVM passes Rust supports
#[derive(Copy, Clone, PartialEq, Debug)]
#[repr(C)]
return;
}
- match self.creader.extract_crate_info(i) {
- Some(info) => {
- let (cnum, _, _) = self.creader.resolve_crate(&None,
- &info.ident,
- &info.name,
- None,
- i.span,
- PathKind::Crate,
- true);
-
- let def_id = self.definitions.opt_local_def_id(i.id).unwrap();
- let len = self.definitions.def_path(def_id.index).data.len();
-
- self.creader.update_extern_crate(cnum,
- ExternCrate {
- def_id: def_id,
- span: i.span,
- direct: true,
- path_len: len,
- });
- self.cstore.add_extern_mod_stmt_cnum(info.id, cnum);
- }
- None => ()
+ if let Some(info) = self.creader.extract_crate_info(i) {
+ let (cnum, _, _) = self.creader.resolve_crate(&None,
+ &info.ident,
+ &info.name,
+ None,
+ i.span,
+ PathKind::Crate,
+ true);
+
+ let def_id = self.definitions.opt_local_def_id(i.id).unwrap();
+ let len = self.definitions.def_path(def_id.index).data.len();
+
+ self.creader.update_extern_crate(cnum,
+ ExternCrate {
+ def_id: def_id,
+ span: i.span,
+ direct: true,
+ path_len: len,
+ });
+ self.cstore.add_extern_mod_stmt_cnum(info.id, cnum);
}
}
ast::ItemKind::ForeignMod(ref fm) => self.process_foreign_mod(i, fm),
ast::IntTy::Is => {
let int_ty = self.hir.tcx().sess.target.int_type;
let min = match int_ty {
+ ast::IntTy::I16 => std::i16::MIN as i64,
ast::IntTy::I32 => std::i32::MIN as i64,
ast::IntTy::I64 => std::i64::MIN,
_ => unreachable!()
//! A helper class for dealing with static archives
-use std::env;
use std::ffi::{CString, CStr, OsString};
-use std::fs::{self, File};
-use std::io::prelude::*;
use std::io;
use std::mem;
use std::path::{Path, PathBuf};
-use std::process::{Command, Output, Stdio};
use std::ptr;
use std::str;
use llvm::archive_ro::{ArchiveRO, Child};
use llvm::{self, ArchiveKind};
use rustc::session::Session;
-use rustc_back::tempdir::TempDir;
pub struct ArchiveConfig<'a> {
pub sess: &'a Session,
#[must_use = "must call build() to finish building the archive"]
pub struct ArchiveBuilder<'a> {
config: ArchiveConfig<'a>,
- work_dir: TempDir,
removals: Vec<String>,
additions: Vec<Addition>,
should_update_symbols: bool,
},
Archive {
archive: ArchiveRO,
- archive_name: String,
skip: Box<FnMut(&str) -> bool>,
},
}
-enum Action<'a> {
- Remove(&'a [String]),
- AddObjects(&'a [&'a PathBuf], bool),
- UpdateSymbols,
-}
-
pub fn find_library(name: &str, search_paths: &[PathBuf], sess: &Session)
-> PathBuf {
// On Windows, static libraries sometimes show up as libfoo.a and other
pub fn new(config: ArchiveConfig<'a>) -> ArchiveBuilder<'a> {
ArchiveBuilder {
config: config,
- work_dir: TempDir::new("rsar").unwrap(),
removals: Vec::new(),
additions: Vec::new(),
should_update_symbols: false,
pub fn add_native_library(&mut self, name: &str) {
let location = find_library(name, &self.config.lib_search_paths,
self.config.sess);
- self.add_archive(&location, name, |_| false).unwrap_or_else(|e| {
+ self.add_archive(&location, |_| false).unwrap_or_else(|e| {
self.config.sess.fatal(&format!("failed to add native library {}: {}",
location.to_string_lossy(), e));
});
let metadata_filename =
self.config.sess.cstore.metadata_filename().to_owned();
- self.add_archive(rlib, &name[..], move |fname: &str| {
+ self.add_archive(rlib, move |fname: &str| {
let skip_obj = lto && fname.starts_with(&obj_start)
&& fname.ends_with(".o");
skip_obj || fname.ends_with(bc_ext) || fname == metadata_filename
})
}
- fn add_archive<F>(&mut self, archive: &Path, name: &str, skip: F)
+ fn add_archive<F>(&mut self, archive: &Path, skip: F)
-> io::Result<()>
where F: FnMut(&str) -> bool + 'static
{
};
self.additions.push(Addition::Archive {
archive: archive,
- archive_name: name.to_string(),
skip: Box::new(skip),
});
Ok(())
/// Combine the provided files, rlibs, and native libraries into a single
/// `Archive`.
pub fn build(&mut self) {
- let res = match self.llvm_archive_kind() {
- Some(kind) => self.build_with_llvm(kind),
- None => self.build_with_ar_cmd(),
- };
- if let Err(e) = res {
- self.config.sess.fatal(&format!("failed to build archive: {}", e));
- }
- }
-
- pub fn llvm_archive_kind(&self) -> Option<ArchiveKind> {
- if unsafe { llvm::LLVMVersionMinor() < 7 } {
- return None
- }
-
- // Currently LLVM only supports writing archives in the 'gnu' format.
- match &self.config.sess.target.target.options.archive_format[..] {
- "gnu" => Some(ArchiveKind::K_GNU),
- "mips64" => Some(ArchiveKind::K_MIPS64),
- "bsd" => Some(ArchiveKind::K_BSD),
- "coff" => Some(ArchiveKind::K_COFF),
- _ => None,
- }
- }
-
- pub fn using_llvm(&self) -> bool {
- self.llvm_archive_kind().is_some()
- }
-
- fn build_with_ar_cmd(&mut self) -> io::Result<()> {
- let removals = mem::replace(&mut self.removals, Vec::new());
- let additions = mem::replace(&mut self.additions, Vec::new());
- let should_update_symbols = mem::replace(&mut self.should_update_symbols,
- false);
-
- // Don't use fs::copy because libs may be installed as read-only and we
- // want to modify this archive, so we use `io::copy` to not preserve
- // permission bits.
- if let Some(ref s) = self.config.src {
- io::copy(&mut File::open(s)?,
- &mut File::create(&self.config.dst)?)?;
- }
-
- if removals.len() > 0 {
- self.run(None, Action::Remove(&removals));
- }
-
- let mut members = Vec::new();
- for addition in additions {
- match addition {
- Addition::File { path, name_in_archive } => {
- let dst = self.work_dir.path().join(&name_in_archive);
- fs::copy(&path, &dst)?;
- members.push(PathBuf::from(name_in_archive));
- }
- Addition::Archive { archive, archive_name, mut skip } => {
- self.add_archive_members(&mut members, archive,
- &archive_name, &mut *skip)?;
- }
- }
- }
-
- // Get an absolute path to the destination, so `ar` will work even
- // though we run it from `self.work_dir`.
- let mut objects = Vec::new();
- let mut total_len = self.config.dst.to_string_lossy().len();
-
- if members.is_empty() {
- if should_update_symbols {
- self.run(Some(self.work_dir.path()), Action::UpdateSymbols);
- }
- return Ok(())
- }
-
- // Don't allow the total size of `args` to grow beyond 32,000 bytes.
- // Windows will raise an error if the argument string is longer than
- // 32,768, and we leave a bit of extra space for the program name.
- const ARG_LENGTH_LIMIT: usize = 32_000;
-
- for member_name in &members {
- let len = member_name.to_string_lossy().len();
-
- // `len + 1` to account for the space that's inserted before each
- // argument. (Windows passes command-line arguments as a single
- // string, not an array of strings.)
- if total_len + len + 1 > ARG_LENGTH_LIMIT {
- // Add the archive members seen so far, without updating the
- // symbol table.
- self.run(Some(self.work_dir.path()),
- Action::AddObjects(&objects, false));
-
- objects.clear();
- total_len = self.config.dst.to_string_lossy().len();
- }
-
- objects.push(member_name);
- total_len += len + 1;
- }
-
- // Add the remaining archive members, and update the symbol table if
- // necessary.
- self.run(Some(self.work_dir.path()),
- Action::AddObjects(&objects, should_update_symbols));
- Ok(())
- }
-
- fn add_archive_members(&mut self, members: &mut Vec<PathBuf>,
- archive: ArchiveRO, name: &str,
- skip: &mut FnMut(&str) -> bool) -> io::Result<()> {
- // Next, we must rename all of the inputs to "guaranteed unique names".
- // We write each file into `self.work_dir` under its new unique name.
- // The reason for this renaming is that archives are keyed off the name
- // of the files, so if two files have the same name they will override
- // one another in the archive (bad).
- //
- // We skip any files explicitly desired for skipping, and we also skip
- // all SYMDEF files as these are just magical placeholders which get
- // re-created when we make a new archive anyway.
- for file in archive.iter() {
- let file = file.map_err(string_to_io_error)?;
- if !is_relevant_child(&file) {
- continue
- }
- let filename = file.name().unwrap();
- if skip(filename) {
- continue
+ let kind = match self.llvm_archive_kind() {
+ Ok(kind) => kind,
+ Err(kind) => {
+ self.config.sess.fatal(&format!("don't know how to build archive of type: {}",
+ kind));
}
- let filename = Path::new(filename).file_name().unwrap()
- .to_str().unwrap();
-
- // Archives on unix systems typically do not have slashes in
- // filenames as the `ar` utility generally only uses the last
- // component of a path for the filename list in the archive. On
- // Windows, however, archives assembled with `lib.exe` will preserve
- // the full path to the file that was placed in the archive,
- // including path separators.
- //
- // The code below is munging paths so it'll go wrong pretty quickly
- // if there's some unexpected slashes in the filename, so here we
- // just chop off everything but the filename component. Note that
- // this can cause duplicate filenames, but that's also handled below
- // as well.
- let filename = Path::new(filename).file_name().unwrap()
- .to_str().unwrap();
-
- // An archive can contain files of the same name multiple times, so
- // we need to be sure to not have them overwrite one another when we
- // extract them. Consequently we need to find a truly unique file
- // name for us!
- let mut new_filename = String::new();
- for n in 0.. {
- let n = if n == 0 {String::new()} else {format!("-{}", n)};
- new_filename = format!("r{}-{}-{}", n, name, filename);
-
- // LLDB (as mentioned in back::link) crashes on filenames of
- // exactly
- // 16 bytes in length. If we're including an object file with
- // exactly 16-bytes of characters, give it some prefix so
- // that it's not 16 bytes.
- new_filename = if new_filename.len() == 16 {
- format!("lldb-fix-{}", new_filename)
- } else {
- new_filename
- };
-
- let present = members.iter().filter_map(|p| {
- p.file_name().and_then(|f| f.to_str())
- }).any(|s| s == new_filename);
- if !present {
- break
- }
- }
- let dst = self.work_dir.path().join(&new_filename);
- File::create(&dst)?.write_all(file.data())?;
- members.push(PathBuf::from(new_filename));
- }
- Ok(())
- }
-
- fn run(&self, cwd: Option<&Path>, action: Action) -> Output {
- let abs_dst = env::current_dir().unwrap().join(&self.config.dst);
- let ar = &self.config.ar_prog;
- let mut cmd = Command::new(ar);
- cmd.env("PATH", &self.config.command_path);
- cmd.stdout(Stdio::piped()).stderr(Stdio::piped());
- self.prepare_ar_action(&mut cmd, &abs_dst, action);
- info!("{:?}", cmd);
+ };
- if let Some(p) = cwd {
- cmd.current_dir(p);
- info!("inside {:?}", p.display());
+ if let Err(e) = self.build_with_llvm(kind) {
+ self.config.sess.fatal(&format!("failed to build archive: {}", e));
}
- let sess = &self.config.sess;
- match cmd.spawn() {
- Ok(prog) => {
- let o = prog.wait_with_output().unwrap();
- if !o.status.success() {
- sess.struct_err(&format!("{:?} failed with: {}", cmd, o.status))
- .note(&format!("stdout ---\n{}",
- str::from_utf8(&o.stdout).unwrap()))
- .note(&format!("stderr ---\n{}",
- str::from_utf8(&o.stderr).unwrap()))
- .emit();
- sess.abort_if_errors();
- }
- o
- },
- Err(e) => {
- sess.fatal(&format!("could not exec `{}`: {}",
- self.config.ar_prog, e));
- }
- }
}
- fn prepare_ar_action(&self, cmd: &mut Command, dst: &Path, action: Action) {
- match action {
- Action::Remove(files) => {
- cmd.arg("d").arg(dst).args(files);
- }
- Action::AddObjects(objs, update_symbols) => {
- cmd.arg(if update_symbols {"crs"} else {"crS"})
- .arg(dst)
- .args(objs);
- }
- Action::UpdateSymbols => {
- cmd.arg("s").arg(dst);
- }
- }
+ fn llvm_archive_kind(&self) -> Result<ArchiveKind, &str> {
+ let kind = &self.config.sess.target.target.options.archive_format[..];
+ kind.parse().map_err(|_| kind)
}
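The rewritten `llvm_archive_kind` leans on `str::parse`, which dispatches through a `FromStr` impl, and uses `map_err(|_| kind)` to hand the unrecognized format name back to the caller for the fatal error. A minimal sketch of the same pattern, using a stand-in `ArchiveKind` (the real `FromStr` impl lives in the `llvm` bindings and covers more formats):

```rust
use std::str::FromStr;

// Stand-in for llvm::ArchiveKind; the real enum and its FromStr impl
// live in the LLVM bindings crate.
#[derive(Debug, PartialEq)]
enum ArchiveKind { Gnu, Bsd, Coff }

impl FromStr for ArchiveKind {
    type Err = ();
    fn from_str(s: &str) -> Result<ArchiveKind, ()> {
        match s {
            "gnu" => Ok(ArchiveKind::Gnu),
            "bsd" => Ok(ArchiveKind::Bsd),
            "coff" => Ok(ArchiveKind::Coff),
            _ => Err(()),
        }
    }
}

// Mirrors llvm_archive_kind: parse the configured format string and,
// on failure, return the unrecognized name so it can be reported.
fn archive_kind(kind: &str) -> Result<ArchiveKind, &str> {
    kind.parse().map_err(|_| kind)
}
```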
fn build_with_llvm(&mut self, kind: ArchiveKind) -> io::Result<()> {
strings.push(path);
strings.push(name);
}
- Addition::Archive { archive, archive_name: _, mut skip } => {
+ Addition::Archive { archive, mut skip } => {
for child in archive.iter() {
let child = child.map_err(string_to_io_error)?;
if !is_relevant_child(&child) {
// symbol table of the archive.
ab.update_symbols();
- // For OSX/iOS, we must be careful to update symbols only when adding
- // object files. We're about to start adding non-object files, so run
- // `ar` now to process the object files.
- if sess.target.target.options.is_like_osx && !ab.using_llvm() {
- ab.build();
- }
-
// Note that it is important that we add all of our non-object "magical
// files" *after* all of the object files in the archive. The reason for
// this is as follows:
// After adding all files to the archive, we need to update the
// symbol table of the archive. This currently dies on OSX (see
// #11162), and isn't necessary there anyway
- if !sess.target.target.options.is_like_osx || ab.using_llvm() {
+ if !sess.target.target.options.is_like_osx {
ab.update_symbols();
}
}
fn link_staticlib(sess: &Session, objects: &[PathBuf], out_filename: &Path,
tempdir: &Path) {
let mut ab = link_rlib(sess, None, objects, out_filename, tempdir);
- if sess.target.target.options.is_like_osx && !ab.using_llvm() {
- ab.build();
- }
if !sess.target.target.options.no_compiler_rt {
ab.add_native_library("compiler-rt");
}
}
fn set_global_section(ccx: &CrateContext, llval: ValueRef, i: &hir::Item) {
- match attr::first_attr_value_str_by_name(&i.attrs, "link_section") {
- Some(sect) => {
- if contains_null(&sect) {
- ccx.sess().fatal(&format!("Illegal null byte in link_section value: `{}`", &sect));
- }
- unsafe {
- let buf = CString::new(sect.as_bytes()).unwrap();
- llvm::LLVMSetSection(llval, buf.as_ptr());
- }
- },
- None => ()
+ if let Some(sect) = attr::first_attr_value_str_by_name(&i.attrs, "link_section") {
+ if contains_null(&sect) {
+ ccx.sess().fatal(&format!("Illegal null byte in link_section value: `{}`", &sect));
+ }
+ unsafe {
+ let buf = CString::new(sect.as_bytes()).unwrap();
+ llvm::LLVMSetSection(llval, buf.as_ptr());
+ }
}
}
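The `set_global_section` change above collapses a one-armed `match x { Some(v) => ..., None => () }` into `if let`, which expresses the same control flow without the empty arm. A contrived stand-in for the `link_section` lookup (the function name and signature here are illustrative, not from the compiler):

```rust
// One-armed match replaced by `if let`: the body runs only when the
// attribute is present, and the empty None arm disappears.
fn section_len(attr: Option<&str>) -> usize {
    if let Some(sect) = attr {
        return sect.len();
    }
    0
}
```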
ifn!("llvm.localrecover", fn(i8p, i8p, t_i32) -> i8p);
ifn!("llvm.x86.seh.recoverfp", fn(i8p, i8p) -> i8p);
- // Some intrinsics were introduced in later versions of LLVM, but they have
- // fallbacks in libc or libm and such.
- macro_rules! compatible_ifn {
- ($name:expr, noop($cname:ident ($($arg:expr),*) -> void), $llvm_version:expr) => (
- if unsafe { llvm::LLVMVersionMinor() >= $llvm_version } {
- // The `if key == $name` is already in ifn!
- ifn!($name, fn($($arg),*) -> void);
- } else if key == $name {
- let f = declare::declare_cfn(ccx, stringify!($cname),
- Type::func(&[$($arg),*], &void));
- llvm::SetLinkage(f, llvm::InternalLinkage);
-
- let bld = ccx.builder();
- let llbb = unsafe {
- llvm::LLVMAppendBasicBlockInContext(ccx.llcx(), f,
- "entry-block\0".as_ptr() as *const _)
- };
-
- bld.position_at_end(llbb);
- bld.ret_void();
-
- ccx.intrinsics().borrow_mut().insert($name, f.clone());
- return Some(f);
- }
- );
- ($name:expr, $cname:ident ($($arg:expr),*) -> $ret:expr, $llvm_version:expr) => (
- if unsafe { llvm::LLVMVersionMinor() >= $llvm_version } {
- // The `if key == $name` is already in ifn!
- ifn!($name, fn($($arg),*) -> $ret);
- } else if key == $name {
- let f = declare::declare_cfn(ccx, stringify!($cname),
- Type::func(&[$($arg),*], &$ret));
- ccx.intrinsics().borrow_mut().insert($name, f.clone());
- return Some(f);
- }
- )
- }
-
- compatible_ifn!("llvm.assume", noop(llvmcompat_assume(i1) -> void), 6);
+ ifn!("llvm.assume", fn(i1) -> void);
if ccx.sess().opts.debuginfo != NoDebugInfo {
ifn!("llvm.dbg.declare", fn(Type::metadata(ccx), Type::metadata(ccx)) -> void);
for arg in args {
pat_util::pat_bindings(&arg.pat, |_, node_id, _, path1| {
scope_stack.push(ScopeStackEntry { scope_metadata: fn_metadata,
- name: Some(path1.node.unhygienize()) });
+ name: Some(path1.node) });
scope_map.insert(node_id, fn_metadata);
})
}
// N.B.: this comparison must be UNhygienic... because
// gdb knows nothing about the context, so any two
// variables with the same name will cause the problem.
- let name = path1.node.unhygienize();
+ let name = path1.node;
let need_new_scope = scope_stack
.iter()
.any(|entry| entry.name == Some(name));
}
fn file_metadata_(cx: &CrateContext, key: &str, file_name: &str, work_dir: &str) -> DIFile {
- match debug_context(cx).created_files.borrow().get(key) {
- Some(file_metadata) => return *file_metadata,
- None => ()
+ if let Some(file_metadata) = debug_context(cx).created_files.borrow().get(key) {
+ return *file_metadata;
}
debug!("file_metadata: file_name: {}, work_dir: {}", file_name, work_dir);
let mono_ty = apply_param_substs(ccx.tcx(), psubsts, &item_ty);
debug!("mono_ty = {:?} (post-substitution)", mono_ty);
- match ccx.instances().borrow().get(&instance) {
- Some(&val) => {
- debug!("leaving monomorphic fn {:?}", instance);
- return (val, mono_ty);
- }
- None => ()
+ if let Some(&val) = ccx.instances().borrow().get(&instance) {
+ debug!("leaving monomorphic fn {:?}", instance);
+ return (val, mono_ty);
}
debug!("monomorphic_fn({:?})", instance);
let mut expected_arg_tys = expected_arg_tys;
let expected_arg_count = fn_inputs.len();
+
+ fn parameter_count_error<'tcx>(sess: &Session, sp: Span, fn_inputs: &[Ty<'tcx>],
+ expected_count: usize, arg_count: usize, error_code: &str,
+ variadic: bool) {
+ let mut err = sess.struct_span_err_with_code(sp,
+ &format!("this function takes {}{} parameter{} but {} parameter{} supplied",
+ if variadic {"at least "} else {""},
+ expected_count,
+ if expected_count == 1 {""} else {"s"},
+ arg_count,
+ if arg_count == 1 {" was"} else {"s were"}),
+ error_code);
+ let input_types = fn_inputs.iter().map(|i| format!("{:?}", i)).collect::<Vec<String>>();
+ if input_types.len() > 0 {
+ err.note(&format!("the following parameter type{} expected: {}",
+ if expected_count == 1 {" was"} else {"s were"},
+ input_types.join(", ")));
+ }
+ err.emit();
+ }
+
let formal_tys = if tuple_arguments == TupleArguments {
let tuple_type = self.structurally_resolved_type(sp, fn_inputs[0]);
match tuple_type.sty {
+ ty::TyTuple(arg_types) if arg_types.len() != args.len() => {
+ parameter_count_error(tcx.sess, sp, fn_inputs, arg_types.len(), args.len(),
+ "E0057", false);
+ expected_arg_tys = &[];
+ self.err_args(args.len())
+ }
ty::TyTuple(arg_types) => {
- if arg_types.len() != args.len() {
- span_err!(tcx.sess, sp, E0057,
- "this function takes {} parameter{} but {} parameter{} supplied",
- arg_types.len(),
- if arg_types.len() == 1 {""} else {"s"},
- args.len(),
- if args.len() == 1 {" was"} else {"s were"});
- expected_arg_tys = &[];
- self.err_args(args.len())
- } else {
- expected_arg_tys = match expected_arg_tys.get(0) {
- Some(&ty) => match ty.sty {
- ty::TyTuple(ref tys) => &tys,
- _ => &[]
- },
- None => &[]
- };
- arg_types.to_vec()
- }
+ expected_arg_tys = match expected_arg_tys.get(0) {
+ Some(&ty) => match ty.sty {
+ ty::TyTuple(ref tys) => &tys,
+ _ => &[]
+ },
+ None => &[]
+ };
+ arg_types.to_vec()
}
_ => {
span_err!(tcx.sess, sp, E0059,
if supplied_arg_count >= expected_arg_count {
fn_inputs.to_vec()
} else {
- span_err!(tcx.sess, sp, E0060,
- "this function takes at least {} parameter{} \
- but {} parameter{} supplied",
- expected_arg_count,
- if expected_arg_count == 1 {""} else {"s"},
- supplied_arg_count,
- if supplied_arg_count == 1 {" was"} else {"s were"});
+ parameter_count_error(tcx.sess, sp, fn_inputs, expected_arg_count,
+ supplied_arg_count, "E0060", true);
expected_arg_tys = &[];
self.err_args(supplied_arg_count)
}
} else {
- span_err!(tcx.sess, sp, E0061,
- "this function takes {} parameter{} but {} parameter{} supplied",
- expected_arg_count,
- if expected_arg_count == 1 {""} else {"s"},
- supplied_arg_count,
- if supplied_arg_count == 1 {" was"} else {"s were"});
+ parameter_count_error(tcx.sess, sp, fn_inputs, expected_arg_count, supplied_arg_count,
+ "E0061", false);
expected_arg_tys = &[];
self.err_args(supplied_arg_count)
};
```compile_fail
#[repr(i32)]
-enum NightWatch {} // error: unsupported representation for zero-variant enum
+enum NightsWatch {} // error: unsupported representation for zero-variant enum
```
It is impossible to define an integer type to be used to represent zero-variant
```
#[repr(i32)]
-enum NightWatch {
- JohnSnow,
+enum NightsWatch {
+ JonSnow,
Commander,
}
```
or you remove the integer representation of your enum:
```
-enum NightWatch {}
+enum NightsWatch {}
```
"##,
/// Basic usage:
///
/// ```
- /// assert_eq!('C'.to_lowercase().next(), Some('c'));
+ /// assert_eq!('C'.to_lowercase().collect::<String>(), "c");
+ ///
+ /// // Sometimes the result is more than one character:
+ /// assert_eq!('İ'.to_lowercase().collect::<String>(), "i\u{307}");
///
/// // Japanese scripts do not have case, and so:
- /// assert_eq!('山'.to_lowercase().next(), Some('山'));
+ /// assert_eq!('山'.to_lowercase().collect::<String>(), "山");
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
/// Basic usage:
///
/// ```
- /// assert_eq!('c'.to_uppercase().next(), Some('C'));
+ /// assert_eq!('c'.to_uppercase().collect::<String>(), "C");
+ ///
+ /// // Sometimes the result is more than one character:
+ /// assert_eq!('ß'.to_uppercase().collect::<String>(), "SS");
///
/// // Japanese does not have case, and so:
- /// assert_eq!('山'.to_uppercase().next(), Some('山'));
+ /// assert_eq!('山'.to_uppercase().collect::<String>(), "山");
/// ```
///
/// In Turkish, the equivalent of 'i' in Latin has five forms instead of two:
/// Note that the lowercase dotted 'i' is the same as the Latin. Therefore:
///
/// ```
- /// let upper_i = 'i'.to_uppercase().next();
+ /// let upper_i: String = 'i'.to_uppercase().collect();
/// ```
///
/// The value of `upper_i` here relies on the language of the text: if we're
- /// in `en-US`, it should be `Some('I')`, but if we're in `tr_TR`, it should
- /// be `Some('İ')`. `to_uppercase()` does not take this into account, and so:
+ /// in `en-US`, it should be `"I"`, but if we're in `tr_TR`, it should
+ /// be `"İ"`. `to_uppercase()` does not take this into account, and so:
///
/// ```
- /// let upper_i = 'i'.to_uppercase().next();
+ /// let upper_i: String = 'i'.to_uppercase().collect();
///
- /// assert_eq!(Some('I'), upper_i);
+ /// assert_eq!(upper_i, "I");
/// ```
///
/// holds across languages.
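The doc rewrite above exists because `to_lowercase` and `to_uppercase` return iterators: a single `char` can map to several characters, so calling `.next()` silently drops output. Collecting into a `String` is the pattern the new examples teach. A minimal sketch:

```rust
// Case mapping is one-to-many, so to_uppercase() yields an iterator of
// chars rather than a single char; collect() gathers all of them.
fn upper(c: char) -> String {
    c.to_uppercase().collect()
}
```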
// We consider inlining the documentation of `pub use` statements, but we
// forcefully don't inline if this is not public or if the
// #[doc(no_inline)] attribute is present.
+ // Don't inline doc(hidden) imports so they can be stripped at a later stage.
let denied = self.vis != hir::Public || self.attrs.iter().any(|a| {
&a.name()[..] == "doc" && match a.meta_item_list() {
- Some(l) => attr::contains_name(l, "no_inline"),
+ Some(l) => attr::contains_name(l, "no_inline") ||
+ attr::contains_name(l, "hidden"),
None => false,
}
});
info!("Recursing into {}", self.dst.display());
- mkdir(&self.dst).unwrap();
let ret = f(self);
info!("Recursed; leaving {}", self.dst.display());
fn item<F>(&mut self, item: clean::Item, mut f: F) -> Result<(), Error> where
F: FnMut(&mut Context, clean::Item),
{
- fn render(w: File, cx: &Context, it: &clean::Item,
+ fn render(writer: &mut io::Write, cx: &Context, it: &clean::Item,
pushname: bool) -> io::Result<()> {
// A little unfortunate that this is done like this, but it sure
// does make formatting *a lot* nicer.
reset_ids(true);
- // We have a huge number of calls to write, so try to alleviate some
- // of the pain by using a buffered writer instead of invoking the
- // write syscall all the time.
- let mut writer = BufWriter::new(w);
if !cx.render_redirect_pages {
- layout::render(&mut writer, &cx.shared.layout, &page,
+ layout::render(writer, &cx.shared.layout, &page,
&Sidebar{ cx: cx, item: it },
&Item{ cx: cx, item: it },
cx.shared.css_file_extension.is_some())?;
} else {
let mut url = repeat("../").take(cx.current.len())
.collect::<String>();
- if let Some(&(ref names, _)) = cache().paths.get(&it.def_id) {
+ if let Some(&(ref names, ty)) = cache().paths.get(&it.def_id) {
for name in &names[..names.len() - 1] {
url.push_str(name);
url.push_str("/");
}
- url.push_str(&item_path(it));
- layout::redirect(&mut writer, &url)?;
+ url.push_str(&item_path(ty, names.last().unwrap()));
+ layout::redirect(writer, &url)?;
}
}
- writer.flush()
+ Ok(())
}
// Stripped modules survive the rustdoc passes (i.e. `strip-private`)
let mut item = Some(item);
self.recurse(name, |this| {
let item = item.take().unwrap();
- let joint_dst = this.dst.join("index.html");
- let dst = try_err!(File::create(&joint_dst), &joint_dst);
- try_err!(render(dst, this, &item, false), &joint_dst);
+
+ let mut buf = Vec::new();
+ render(&mut buf, this, &item, false).unwrap();
+ // buf will be empty if the module is stripped and there is no redirect for it
+ if !buf.is_empty() {
+ let joint_dst = this.dst.join("index.html");
+ try_err!(fs::create_dir_all(&this.dst), &this.dst);
+ let mut dst = try_err!(File::create(&joint_dst), &joint_dst);
+ try_err!(dst.write_all(&buf), &joint_dst);
+ }
let m = match item.inner {
clean::StrippedItem(box clean::ModuleItem(m)) |
};
// render sidebar-items.js used throughout this module
- {
+ if !this.render_redirect_pages {
let items = this.build_sidebar_items(&m);
let js_dst = this.dst.join("sidebar-items.js");
let mut js_out = BufWriter::new(try_err!(File::create(&js_dst), &js_dst));
Ok(())
})
} else if item.name.is_some() {
- let joint_dst = self.dst.join(&item_path(&item));
-
- let dst = try_err!(File::create(&joint_dst), &joint_dst);
- try_err!(render(dst, self, &item, true), &joint_dst);
+ let mut buf = Vec::new();
+ render(&mut buf, self, &item, true).unwrap();
+ // buf will be empty if the item is stripped and there is no redirect for it
+ if !buf.is_empty() {
+ let joint_dst = self.dst.join(&item_path(shortty(&item),
+ item.name.as_ref().unwrap()));
+ try_err!(fs::create_dir_all(&self.dst), &self.dst);
+ let mut dst = try_err!(File::create(&joint_dst), &joint_dst);
+ try_err!(dst.write_all(&buf), &joint_dst);
+ }
Ok(())
} else {
Ok(())
Some(format!("{root}{path}/{file}?gotosrc={goto}",
root = root,
path = path[..path.len() - 1].join("/"),
- file = item_path(self.item),
+ file = item_path(shortty(self.item), self.item.name.as_ref().unwrap()),
goto = self.item.def_id.index.as_usize()))
}
}
}
}
-fn item_path(item: &clean::Item) -> String {
- if item.is_mod() {
- format!("{}/index.html", item.name.as_ref().unwrap())
- } else {
- format!("{}.{}.html",
- shortty(item).to_static_str(),
- *item.name.as_ref().unwrap())
+fn item_path(ty: ItemType, name: &str) -> String {
+ match ty {
+ ItemType::Module => format!("{}/index.html", name),
+ _ => format!("{}.{}.html", ty.to_static_str(), name),
}
}
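The new `item_path` takes an `(ItemType, name)` pair instead of a whole `clean::Item`, so callers that only have a path-table entry (like the redirect code above) can build URLs too. A self-contained sketch of the path scheme, with a stand-in `ItemType` (the real rustdoc enum has many more variants):

```rust
// Stand-in for rustdoc's ItemType; illustrative only.
enum ItemType { Module, Struct }

impl ItemType {
    fn to_static_str(&self) -> &'static str {
        match *self {
            ItemType::Module => "mod",
            ItemType::Struct => "struct",
        }
    }
}

// Mirrors the new item_path: modules get a directory index page,
// everything else gets a `<kind>.<name>.html` leaf file.
fn item_path(ty: ItemType, name: &str) -> String {
    match ty {
        ItemType::Module => format!("{}/index.html", name),
        _ => format!("{}.{}.html", ty.to_static_str(), name),
    }
}
```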
docs = shorter(Some(&Markdown(doc_value).to_string())),
class = shortty(myitem),
stab = myitem.stability_class(),
- href = item_path(myitem),
+ href = item_path(shortty(myitem), myitem.name.as_ref().unwrap()),
title = full_path(cx, myitem))?;
}
}
if fields.peek().is_some() {
write!(w, "<h2 class='fields'>Fields</h2>")?;
for (field, ty) in fields {
- write!(w, "<span id='{shortty}.{name}'><code>{name}: {ty}</code></span>
- <span class='stab {stab}'></span>",
+ write!(w, "<span id='{shortty}.{name}' class='{shortty}'><code>{name}: {ty}</code>
+ </span><span class='stab {stab}'></span>",
shortty = ItemType::StructField,
stab = field.stability_class(),
name = field.name.as_ref().unwrap(),
margin-bottom: 25px;
}
+.variant, .structfield {
+ display: block;
+}
+
:target > code {
background: #FDFFD3;
}
let def_did = def.def_id();
let use_attrs = tcx.map.attrs(id).clean(self.cx);
- let is_no_inline = use_attrs.list("doc").has_word("no_inline");
+ // Don't inline doc(hidden) imports so they can be stripped at a later stage.
+ let is_no_inline = use_attrs.list("doc").has_word("no_inline") ||
+ use_attrs.list("doc").has_word("hidden");
// For cross-crate impl inlining we need to know whether items are
// reachable in documentation - a previously nonreachable item can be
self.elem.read().0
}
+ /// Takes ownership of the key and value from the map.
+ #[unstable(feature = "map_entry_recover_keys", issue = "34285")]
+ pub fn remove_pair(self) -> (K, V) {
+ pop_internal(self.elem)
+ }
+
/// Gets a reference to the value in the entry.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn get(&self) -> &V {
pub fn remove(self) -> V {
pop_internal(self.elem).1
}
+
/// Returns a key that was used for search.
///
/// The key was retained for further use.
&self.key
}
+ /// Takes ownership of the key.
+ #[unstable(feature = "map_entry_recover_keys", issue = "34285")]
+ pub fn into_key(self) -> K {
+ self.key
+ }
+
/// Sets the value of the entry with the VacantEntry's key,
/// and returns a mutable reference to it
#[stable(feature = "rust1", since = "1.0.0")]
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use alloc::heap::{allocate, deallocate, EMPTY};
+use alloc::heap::{EMPTY, allocate, deallocate};
use cmp;
-use hash::{Hash, Hasher, BuildHasher};
+use hash::{BuildHasher, Hash, Hasher};
use intrinsics::needs_drop;
use marker;
use mem::{align_of, size_of};
#[unsafe_no_drop_flag]
pub struct RawTable<K, V> {
capacity: usize,
- size: usize,
- hashes: Unique<u64>,
+ size: usize,
+ hashes: Unique<u64>,
// Because K/V do not appear directly in any of the types in the struct,
// inform rustc that in fact instances of K and V are reachable from here.
- marker: marker::PhantomData<(K,V)>,
+ marker: marker::PhantomData<(K, V)>,
}
unsafe impl<K: Send, V: Send> Send for RawTable<K, V> {}
hash: *mut u64,
// We use *const to ensure covariance with respect to K and V
- key: *const K,
- val: *const V,
- _marker: marker::PhantomData<(K,V)>,
+ key: *const K,
+ val: *const V,
+ _marker: marker::PhantomData<(K, V)>,
}
-impl<K,V> Copy for RawBucket<K,V> {}
-impl<K,V> Clone for RawBucket<K,V> {
- fn clone(&self) -> RawBucket<K, V> { *self }
+impl<K, V> Copy for RawBucket<K, V> {}
+impl<K, V> Clone for RawBucket<K, V> {
+ fn clone(&self) -> RawBucket<K, V> {
+ *self
+ }
}
pub struct Bucket<K, V, M> {
- raw: RawBucket<K, V>,
- idx: usize,
- table: M
+ raw: RawBucket<K, V>,
+ idx: usize,
+ table: M,
}
-impl<K,V,M:Copy> Copy for Bucket<K,V,M> {}
-impl<K,V,M:Copy> Clone for Bucket<K,V,M> {
- fn clone(&self) -> Bucket<K,V,M> { *self }
+impl<K, V, M: Copy> Copy for Bucket<K, V, M> {}
+impl<K, V, M: Copy> Clone for Bucket<K, V, M> {
+ fn clone(&self) -> Bucket<K, V, M> {
+ *self
+ }
}
pub struct EmptyBucket<K, V, M> {
- raw: RawBucket<K, V>,
- idx: usize,
- table: M
+ raw: RawBucket<K, V>,
+ idx: usize,
+ table: M,
}
pub struct FullBucket<K, V, M> {
- raw: RawBucket<K, V>,
- idx: usize,
- table: M
+ raw: RawBucket<K, V>,
+ idx: usize,
+ table: M,
}
pub type EmptyBucketImm<'table, K, V> = EmptyBucket<K, V, &'table RawTable<K, V>>;
-pub type FullBucketImm<'table, K, V> = FullBucket<K, V, &'table RawTable<K, V>>;
+pub type FullBucketImm<'table, K, V> = FullBucket<K, V, &'table RawTable<K, V>>;
pub type EmptyBucketMut<'table, K, V> = EmptyBucket<K, V, &'table mut RawTable<K, V>>;
-pub type FullBucketMut<'table, K, V> = FullBucket<K, V, &'table mut RawTable<K, V>>;
+pub type FullBucketMut<'table, K, V> = FullBucket<K, V, &'table mut RawTable<K, V>>;
pub enum BucketState<K, V, M> {
Empty(EmptyBucket<K, V, M>),
impl SafeHash {
/// Peek at the hash value, which is guaranteed to be non-zero.
#[inline(always)]
- pub fn inspect(&self) -> u64 { self.hash }
+ pub fn inspect(&self) -> u64 {
+ self.hash
+ }
}
/// We need to remove hashes of 0. That's reserved for empty buckets.
/// This function wraps up `hash_keyed` to be the only way outside this
/// module to generate a SafeHash.
pub fn make_hash<T: ?Sized, S>(hash_state: &S, t: &T) -> SafeHash
- where T: Hash, S: BuildHasher
+ where T: Hash,
+ S: BuildHasher
{
let mut state = hash_state.build_hasher();
t.hash(&mut state);
unsafe fn offset(self, count: isize) -> RawBucket<K, V> {
RawBucket {
hash: self.hash.offset(count),
- key: self.key.offset(count),
- val: self.val.offset(count),
+ key: self.key.offset(count),
+ val: self.val.offset(count),
_marker: marker::PhantomData,
}
}
}
}
-impl<K, V, M> Deref for FullBucket<K, V, M> where M: Deref<Target=RawTable<K, V>> {
+impl<K, V, M> Deref for FullBucket<K, V, M>
+ where M: Deref<Target = RawTable<K, V>>
+{
type Target = RawTable<K, V>;
fn deref(&self) -> &RawTable<K, V> {
&self.table
}
}
-impl<K, V, M> Put<K, V> for Bucket<K, V, M> where M: Put<K, V> {
+impl<K, V, M> Put<K, V> for Bucket<K, V, M>
+ where M: Put<K, V>
+{
unsafe fn borrow_table_mut(&mut self) -> &mut RawTable<K, V> {
self.table.borrow_table_mut()
}
}
-impl<K, V, M> Put<K, V> for FullBucket<K, V, M> where M: Put<K, V> {
+impl<K, V, M> Put<K, V> for FullBucket<K, V, M>
+ where M: Put<K, V>
+{
unsafe fn borrow_table_mut(&mut self) -> &mut RawTable<K, V> {
self.table.borrow_table_mut()
}
}
-impl<K, V, M: Deref<Target=RawTable<K, V>>> Bucket<K, V, M> {
+impl<K, V, M: Deref<Target = RawTable<K, V>>> Bucket<K, V, M> {
pub fn new(table: M, hash: SafeHash) -> Bucket<K, V, M> {
Bucket::at_index(table, hash.inspect() as usize)
}
pub fn at_index(table: M, ib_index: usize) -> Bucket<K, V, M> {
// if capacity is 0, then the RawBucket will be populated with bogus pointers.
// This is an uncommon case though, so avoid it in release builds.
- debug_assert!(table.capacity() > 0, "Table should have capacity at this point");
+ debug_assert!(table.capacity() > 0,
+ "Table should have capacity at this point");
let ib_index = ib_index & (table.capacity() - 1);
Bucket {
- raw: unsafe {
- table.first_bucket_raw().offset(ib_index as isize)
- },
+ raw: unsafe { table.first_bucket_raw().offset(ib_index as isize) },
idx: ib_index,
- table: table
+ table: table,
}
}
Bucket {
raw: table.first_bucket_raw(),
idx: 0,
- table: table
+ table: table,
}
}
/// this module.
pub fn peek(self) -> BucketState<K, V, M> {
match unsafe { *self.raw.hash } {
- EMPTY_BUCKET =>
+ EMPTY_BUCKET => {
Empty(EmptyBucket {
raw: self.raw,
idx: self.idx,
- table: self.table
- }),
- _ =>
+ table: self.table,
+ })
+ }
+ _ => {
Full(FullBucket {
raw: self.raw,
idx: self.idx,
- table: self.table
+ table: self.table,
})
+ }
}
}
}
}
-impl<K, V, M: Deref<Target=RawTable<K, V>>> EmptyBucket<K, V, M> {
+impl<K, V, M: Deref<Target = RawTable<K, V>>> EmptyBucket<K, V, M> {
#[inline]
pub fn next(self) -> Bucket<K, V, M> {
let mut bucket = self.into_bucket();
Bucket {
raw: self.raw,
idx: self.idx,
- table: self.table
+ table: self.table,
}
}
let gap = EmptyBucket {
raw: self.raw,
idx: self.idx,
- table: ()
+ table: (),
};
match self.next().peek() {
Full(bucket) => {
Some(GapThenFull {
gap: gap,
- full: bucket
+ full: bucket,
})
}
- Empty(..) => None
+ Empty(..) => None,
}
}
}
-impl<K, V, M> EmptyBucket<K, V, M> where M: Put<K, V> {
+impl<K, V, M> EmptyBucket<K, V, M>
+ where M: Put<K, V>
+{
/// Puts given key and value pair, along with the key's hash,
/// into this bucket in the hashtable. Note how `self` is 'moved' into
/// this function, because this slot will no longer be empty when
/// the newly-filled slot in the hashtable.
///
/// Use `make_hash` to construct a `SafeHash` to pass to this function.
- pub fn put(mut self, hash: SafeHash, key: K, value: V)
- -> FullBucket<K, V, M> {
+ pub fn put(mut self, hash: SafeHash, key: K, value: V) -> FullBucket<K, V, M> {
unsafe {
*self.raw.hash = hash.inspect();
ptr::write(self.raw.key as *mut K, key);
self.table.borrow_table_mut().size += 1;
}
- FullBucket { raw: self.raw, idx: self.idx, table: self.table }
+ FullBucket {
+ raw: self.raw,
+ idx: self.idx,
+ table: self.table,
+ }
}
}
-impl<K, V, M: Deref<Target=RawTable<K, V>>> FullBucket<K, V, M> {
+impl<K, V, M: Deref<Target = RawTable<K, V>>> FullBucket<K, V, M> {
#[inline]
pub fn next(self) -> Bucket<K, V, M> {
let mut bucket = self.into_bucket();
Bucket {
raw: self.raw,
idx: self.idx,
- table: self.table
+ table: self.table,
}
}
#[inline]
pub fn hash(&self) -> SafeHash {
- unsafe {
- SafeHash {
- hash: *self.raw.hash
- }
- }
+ unsafe { SafeHash { hash: *self.raw.hash } }
}
/// Gets references to the key and value at a given index.
pub fn read(&self) -> (&K, &V) {
- unsafe {
- (&*self.raw.key,
- &*self.raw.val)
- }
+ unsafe { (&*self.raw.key, &*self.raw.val) }
}
}
unsafe {
*self.raw.hash = EMPTY_BUCKET;
- (
- EmptyBucket {
- raw: self.raw,
- idx: self.idx,
- table: self.table
- },
- ptr::read(self.raw.key),
- ptr::read(self.raw.val)
- )
+ (EmptyBucket {
+ raw: self.raw,
+ idx: self.idx,
+ table: self.table,
+ },
+ ptr::read(self.raw.key),
+ ptr::read(self.raw.val))
}
}
}
// This use of `Put` is misleading and restrictive, but safe and sufficient for our use cases
// where `M` is a full bucket or table reference type with mutable access to the table.
-impl<K, V, M> FullBucket<K, V, M> where M: Put<K, V> {
+impl<K, V, M> FullBucket<K, V, M>
+ where M: Put<K, V>
+{
pub fn replace(&mut self, h: SafeHash, k: K, v: V) -> (SafeHash, K, V) {
unsafe {
let old_hash = ptr::replace(self.raw.hash as *mut SafeHash, h);
- let old_key = ptr::replace(self.raw.key as *mut K, k);
- let old_val = ptr::replace(self.raw.val as *mut V, v);
+ let old_key = ptr::replace(self.raw.key as *mut K, k);
+ let old_val = ptr::replace(self.raw.val as *mut V, v);
(old_hash, old_key, old_val)
}
}
}
-impl<K, V, M> FullBucket<K, V, M> where M: Deref<Target=RawTable<K, V>> + DerefMut {
+impl<K, V, M> FullBucket<K, V, M>
+ where M: Deref<Target = RawTable<K, V>> + DerefMut
+{
/// Gets mutable references to the key and value at a given index.
pub fn read_mut(&mut self) -> (&mut K, &mut V) {
- unsafe {
- (&mut *(self.raw.key as *mut K),
- &mut *(self.raw.val as *mut V))
- }
+ unsafe { (&mut *(self.raw.key as *mut K), &mut *(self.raw.val as *mut V)) }
}
}
-impl<'t, K, V, M> FullBucket<K, V, M> where M: Deref<Target=RawTable<K, V>> + 't {
+impl<'t, K, V, M> FullBucket<K, V, M>
+ where M: Deref<Target = RawTable<K, V>> + 't
+{
/// Exchange a bucket state for immutable references into the table.
/// Because the underlying reference to the table is also consumed,
/// no further changes to the structure of the table are possible;
/// in exchange for this, the returned references have a longer lifetime
/// than the references returned by `read()`.
pub fn into_refs(self) -> (&'t K, &'t V) {
- unsafe {
- (&*self.raw.key,
- &*self.raw.val)
- }
+ unsafe { (&*self.raw.key, &*self.raw.val) }
}
}
-impl<'t, K, V, M> FullBucket<K, V, M> where M: Deref<Target=RawTable<K, V>> + DerefMut + 't {
+impl<'t, K, V, M> FullBucket<K, V, M>
+ where M: Deref<Target = RawTable<K, V>> + DerefMut + 't
+{
/// This works similarly to `into_refs`, exchanging a bucket state
/// for mutable references into the table.
pub fn into_mut_refs(self) -> (&'t mut K, &'t mut V) {
- unsafe {
- (&mut *(self.raw.key as *mut K),
- &mut *(self.raw.val as *mut V))
- }
+ unsafe { (&mut *(self.raw.key as *mut K), &mut *(self.raw.val as *mut V)) }
}
}
-impl<K, V, M> GapThenFull<K, V, M> where M: Deref<Target=RawTable<K, V>> {
+impl<K, V, M> GapThenFull<K, V, M>
+ where M: Deref<Target = RawTable<K, V>>
+{
#[inline]
pub fn full(&self) -> &FullBucket<K, V, M> {
&self.full
Some(self)
}
- Empty(..) => None
+ Empty(..) => None,
}
}
}
// from the start of a mallocated array.
#[inline]
fn calculate_offsets(hashes_size: usize,
- keys_size: usize, keys_align: usize,
+ keys_size: usize,
+ keys_align: usize,
vals_align: usize)
-> (usize, usize, bool) {
let keys_offset = round_up_to_next(hashes_size, keys_align);
// Returns a tuple of (minimum required malloc alignment, hash_offset,
// array_size), from the start of a mallocated array.
-fn calculate_allocation(hash_size: usize, hash_align: usize,
- keys_size: usize, keys_align: usize,
- vals_size: usize, vals_align: usize)
+fn calculate_allocation(hash_size: usize,
+ hash_align: usize,
+ keys_size: usize,
+ keys_align: usize,
+ vals_size: usize,
+ vals_align: usize)
-> (usize, usize, usize, bool) {
let hash_offset = 0;
- let (_, vals_offset, oflo) = calculate_offsets(hash_size,
- keys_size, keys_align,
- vals_align);
+ let (_, vals_offset, oflo) = calculate_offsets(hash_size, keys_size, keys_align, vals_align);
let (end_of_vals, oflo2) = vals_offset.overflowing_add(vals_size);
let align = cmp::max(hash_align, cmp::max(keys_align, vals_align));
#[test]
fn test_offset_calculation() {
- assert_eq!(calculate_allocation(128, 8, 15, 1, 4, 4), (8, 0, 148, false));
- assert_eq!(calculate_allocation(3, 1, 2, 1, 1, 1), (1, 0, 6, false));
- assert_eq!(calculate_allocation(6, 2, 12, 4, 24, 8), (8, 0, 48, false));
+ assert_eq!(calculate_allocation(128, 8, 15, 1, 4, 4),
+ (8, 0, 148, false));
+ assert_eq!(calculate_allocation(3, 1, 2, 1, 1, 1), (1, 0, 6, false));
+ assert_eq!(calculate_allocation(6, 2, 12, 4, 24, 8), (8, 0, 48, false));
assert_eq!(calculate_offsets(128, 15, 1, 4), (128, 144, false));
- assert_eq!(calculate_offsets(3, 2, 1, 1), (3, 5, false));
- assert_eq!(calculate_offsets(6, 12, 4, 8), (8, 24, false));
+ assert_eq!(calculate_offsets(3, 2, 1, 1), (3, 5, false));
+ assert_eq!(calculate_offsets(6, 12, 4, 8), (8, 24, false));
}
impl<K, V> RawTable<K, V> {
// No need for `checked_mul` before a more restrictive check performed
// later in this method.
let hashes_size = capacity * size_of::<u64>();
- let keys_size = capacity * size_of::< K >();
- let vals_size = capacity * size_of::< V >();
+ let keys_size = capacity * size_of::<K>();
+ let vals_size = capacity * size_of::<V>();
// Allocating hashmaps is a little tricky. We need to allocate three
// arrays, but since we know their sizes and alignments up front,
// This is great in theory, but in practice getting the alignment
// right is a little subtle. Therefore, calculating offsets has been
// factored out into a different function.
- let (malloc_alignment, hash_offset, size, oflo) =
- calculate_allocation(
- hashes_size, align_of::<u64>(),
- keys_size, align_of::< K >(),
- vals_size, align_of::< V >());
+ let (malloc_alignment, hash_offset, size, oflo) = calculate_allocation(hashes_size,
+ align_of::<u64>(),
+ keys_size,
+ align_of::<K>(),
+ vals_size,
+ align_of::<V>());
assert!(!oflo, "capacity overflow");
// One check for overflow that covers calculation and rounding of size.
- let size_of_bucket = size_of::<u64>().checked_add(size_of::<K>()).unwrap()
- .checked_add(size_of::<V>()).unwrap();
- assert!(size >= capacity.checked_mul(size_of_bucket)
- .expect("capacity overflow"),
+ let size_of_bucket = size_of::<u64>()
+ .checked_add(size_of::<K>())
+ .unwrap()
+ .checked_add(size_of::<V>())
+ .unwrap();
+ assert!(size >=
+ capacity.checked_mul(size_of_bucket)
+ .expect("capacity overflow"),
"capacity overflow");
let buffer = allocate(size, malloc_alignment);
- if buffer.is_null() { ::alloc::oom() }
+ if buffer.is_null() {
+ ::alloc::oom()
+ }
let hashes = buffer.offset(hash_offset as isize) as *mut u64;
RawTable {
capacity: capacity,
- size: 0,
- hashes: Unique::new(hashes),
- marker: marker::PhantomData,
+ size: 0,
+ hashes: Unique::new(hashes),
+ marker: marker::PhantomData,
}
}
let keys_size = self.capacity * size_of::<K>();
let buffer = *self.hashes as *const u8;
- let (keys_offset, vals_offset, oflo) =
- calculate_offsets(hashes_size,
- keys_size, align_of::<K>(),
- align_of::<V>());
+ let (keys_offset, vals_offset, oflo) = calculate_offsets(hashes_size,
+ keys_size,
+ align_of::<K>(),
+ align_of::<V>());
debug_assert!(!oflo, "capacity overflow");
unsafe {
RawBucket {
hash: *self.hashes,
- key: buffer.offset(keys_offset as isize) as *const K,
- val: buffer.offset(vals_offset as isize) as *const V,
+ key: buffer.offset(keys_offset as isize) as *const K,
+ val: buffer.offset(vals_offset as isize) as *const V,
_marker: marker::PhantomData,
}
}
fn raw_buckets(&self) -> RawBuckets<K, V> {
RawBuckets {
raw: self.first_bucket_raw(),
- hashes_end: unsafe {
- self.hashes.offset(self.capacity as isize)
- },
+ hashes_end: unsafe { self.hashes.offset(self.capacity as isize) },
marker: marker::PhantomData,
}
}
raw: raw_bucket.offset(self.capacity as isize),
hashes_end: raw_bucket.hash,
elems_left: self.size,
- marker: marker::PhantomData,
+ marker: marker::PhantomData,
}
}
}
if *self.raw.hash != EMPTY_BUCKET {
self.elems_left -= 1;
- return Some((
- ptr::read(self.raw.key),
- ptr::read(self.raw.val)
- ));
+ return Some((ptr::read(self.raw.key), ptr::read(self.raw.val)));
}
}
}
fn clone(&self) -> Iter<'a, K, V> {
Iter {
iter: self.iter.clone(),
- elems_left: self.elems_left
+ elems_left: self.elems_left,
}
}
}
/// Iterator over the entries in a table, consuming the table.
pub struct IntoIter<K, V> {
table: RawTable<K, V>,
- iter: RawBuckets<'static, K, V>
+ iter: RawBuckets<'static, K, V>,
}
unsafe impl<K: Sync, V: Sync> Sync for IntoIter<K, V> {}
fn next(&mut self) -> Option<(&'a K, &'a V)> {
self.iter.next().map(|bucket| {
self.elems_left -= 1;
- unsafe {
- (&*bucket.key,
- &*bucket.val)
- }
+ unsafe { (&*bucket.key, &*bucket.val) }
})
}
}
}
impl<'a, K, V> ExactSizeIterator for Iter<'a, K, V> {
- fn len(&self) -> usize { self.elems_left }
+ fn len(&self) -> usize {
+ self.elems_left
+ }
}
impl<'a, K, V> Iterator for IterMut<'a, K, V> {
fn next(&mut self) -> Option<(&'a K, &'a mut V)> {
self.iter.next().map(|bucket| {
self.elems_left -= 1;
- unsafe {
- (&*bucket.key,
- &mut *(bucket.val as *mut V))
- }
+ unsafe { (&*bucket.key, &mut *(bucket.val as *mut V)) }
})
}
}
}
impl<'a, K, V> ExactSizeIterator for IterMut<'a, K, V> {
- fn len(&self) -> usize { self.elems_left }
+ fn len(&self) -> usize {
+ self.elems_left
+ }
}
impl<K, V> Iterator for IntoIter<K, V> {
self.iter.next().map(|bucket| {
self.table.size -= 1;
unsafe {
- (
- SafeHash {
- hash: *bucket.hash,
- },
- ptr::read(bucket.key),
- ptr::read(bucket.val)
- )
+ (SafeHash { hash: *bucket.hash }, ptr::read(bucket.key), ptr::read(bucket.val))
}
})
}
}
}
impl<K, V> ExactSizeIterator for IntoIter<K, V> {
- fn len(&self) -> usize { self.table.size() }
+ fn len(&self) -> usize {
+ self.table.size()
+ }
}
impl<'a, K, V> Iterator for Drain<'a, K, V> {
self.iter.next().map(|bucket| {
self.table.size -= 1;
unsafe {
- (
- SafeHash {
- hash: ptr::replace(bucket.hash, EMPTY_BUCKET),
- },
- ptr::read(bucket.key),
- ptr::read(bucket.val)
- )
+ (SafeHash { hash: ptr::replace(bucket.hash, EMPTY_BUCKET) },
+ ptr::read(bucket.key),
+ ptr::read(bucket.val))
}
})
}
}
}
impl<'a, K, V> ExactSizeIterator for Drain<'a, K, V> {
- fn len(&self) -> usize { self.table.size() }
+ fn len(&self) -> usize {
+ self.table.size()
+ }
}
impl<'a, K: 'a, V: 'a> Drop for Drain<'a, K, V> {
// dropping empty tables such as on resize.
// Also avoid double drop of elements that have been already moved out.
unsafe {
- if needs_drop::<(K, V)>() { // avoid linear runtime for types that don't need drop
+ if needs_drop::<(K, V)>() {
+ // avoid linear runtime for types that don't need drop
for _ in self.rev_move_buckets() {}
}
}
let hashes_size = self.capacity * size_of::<u64>();
let keys_size = self.capacity * size_of::<K>();
let vals_size = self.capacity * size_of::<V>();
- let (align, _, size, oflo) =
- calculate_allocation(hashes_size, align_of::<u64>(),
- keys_size, align_of::<K>(),
- vals_size, align_of::<V>());
+ let (align, _, size, oflo) = calculate_allocation(hashes_size,
+ align_of::<u64>(),
+ keys_size,
+ align_of::<K>(),
+ vals_size,
+ align_of::<V>());
debug_assert!(!oflo, "should be impossible");
/// No file is allowed to exist at the target location, also no (dangling)
/// symlink.
///
- /// This option is useful because it as atomic. Otherwise between checking
+ /// This option is useful because it is atomic. Otherwise between checking
/// whether a file exists and creating a new one, the file may have been
/// created by another process (a TOCTOU race condition / attack).
///
check!(fs::remove_dir(dir));
}
+ #[test]
+ fn file_create_new_already_exists_error() {
+ let tmpdir = tmpdir();
+ let file = &tmpdir.join("file_create_new_error_exists");
+ check!(fs::File::create(file));
+ let e = fs::OpenOptions::new().write(true).create_new(true).open(file).unwrap_err();
+ assert_eq!(e.kind(), ErrorKind::AlreadyExists);
+ }
+
#[test]
fn mkdir_path_already_exists_error() {
let tmpdir = tmpdir();
[(bits >> 24) as u8, (bits >> 16) as u8, (bits >> 8) as u8, bits as u8]
}
- /// Returns true for the special 'unspecified' address 0.0.0.0.
+ /// Returns true for the special 'unspecified' address (0.0.0.0).
pub fn is_unspecified(&self) -> bool {
self.inner.s_addr == 0
}
/// Returns true if this is a loopback address (127.0.0.0/8).
///
- /// This property is defined by RFC 6890.
+    /// This property is defined by [RFC 1122].
+    ///
+    /// [RFC 1122]: https://tools.ietf.org/html/rfc1122
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_loopback(&self) -> bool {
self.octets()[0] == 127
/// Returns true if this is a private address.
///
- /// The private address ranges are defined in RFC 1918 and include:
+    /// The private address ranges are defined in [RFC 1918] and include:
+    ///
+    /// [RFC 1918]: https://tools.ietf.org/html/rfc1918
///
/// - 10.0.0.0/8
/// - 172.16.0.0/12
/// Returns true if the address is link-local (169.254.0.0/16).
///
- /// This property is defined by RFC 6890.
+    /// This property is defined by [RFC 3927].
+    ///
+    /// [RFC 3927]: https://tools.ietf.org/html/rfc3927
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_link_local(&self) -> bool {
self.octets()[0] == 169 && self.octets()[1] == 254
!self.is_broadcast() && !self.is_documentation() && !self.is_unspecified()
}
- /// Returns true if this is a multicast address.
+ /// Returns true if this is a multicast address (224.0.0.0/4).
///
/// Multicast addresses have a most significant octet between 224 and 239,
- /// and is defined by RFC 5771.
+    /// and are defined by [RFC 5771].
+    ///
+    /// [RFC 5771]: https://tools.ietf.org/html/rfc5771
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_multicast(&self) -> bool {
self.octets()[0] >= 224 && self.octets()[0] <= 239
}
- /// Returns true if this is a broadcast address.
+ /// Returns true if this is a broadcast address (255.255.255.255).
///
- /// A broadcast address has all octets set to 255 as defined in RFC 919.
+    /// A broadcast address has all octets set to 255 as defined in [RFC 919].
+    ///
+    /// [RFC 919]: https://tools.ietf.org/html/rfc919
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_broadcast(&self) -> bool {
self.octets()[0] == 255 && self.octets()[1] == 255 &&
/// Returns true if this address is in a range designated for documentation.
///
- /// This is defined in RFC 5737:
+    /// This is defined in [RFC 5737]:
+    ///
+    /// [RFC 5737]: https://tools.ietf.org/html/rfc5737
///
/// - 192.0.2.0/24 (TEST-NET-1)
/// - 198.51.100.0/24 (TEST-NET-2)
]
}
- /// Returns true for the special 'unspecified' address ::.
+ /// Returns true for the special 'unspecified' address (::).
///
- /// This property is defined in RFC 6890.
+    /// This property is defined in [RFC 4291].
+    ///
+    /// [RFC 4291]: https://tools.ietf.org/html/rfc4291
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_unspecified(&self) -> bool {
self.segments() == [0, 0, 0, 0, 0, 0, 0, 0]
/// Returns true if this is a loopback address (::1).
///
- /// This property is defined in RFC 6890.
+    /// This property is defined in [RFC 4291].
+    ///
+    /// [RFC 4291]: https://tools.ietf.org/html/rfc4291
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_loopback(&self) -> bool {
self.segments() == [0, 0, 0, 0, 0, 0, 0, 1]
}
}
- /// Returns true if this is a unique local address (IPv6).
+ /// Returns true if this is a unique local address (fc00::/7).
///
- /// Unique local addresses are defined in RFC 4193 and have the form fc00::/7.
+    /// This property is defined in [RFC 4193].
+    ///
+    /// [RFC 4193]: https://tools.ietf.org/html/rfc4193
pub fn is_unique_local(&self) -> bool {
(self.segments()[0] & 0xfe00) == 0xfc00
}
/// Returns true if the address is unicast and link-local (fe80::/10).
+ ///
+    /// This property is defined in [RFC 4291].
+    ///
+    /// [RFC 4291]: https://tools.ietf.org/html/rfc4291
pub fn is_unicast_link_local(&self) -> bool {
(self.segments()[0] & 0xffc0) == 0xfe80
}
- /// Returns true if this is a deprecated unicast site-local address (IPv6
- /// fec0::/10).
+ /// Returns true if this is a deprecated unicast site-local address
+ /// (fec0::/10).
pub fn is_unicast_site_local(&self) -> bool {
(self.segments()[0] & 0xffc0) == 0xfec0
}
/// Returns true if this is an address reserved for documentation
- /// This is defined to be 2001:db8::/32 in RFC 3849.
+ /// (2001:db8::/32).
+ ///
+    /// This property is defined in [RFC 3849].
+    ///
+    /// [RFC 3849]: https://tools.ietf.org/html/rfc3849
pub fn is_documentation(&self) -> bool {
(self.segments()[0] == 0x2001) && (self.segments()[1] == 0xdb8)
}
}
}
- /// Returns true if this is a multicast address.
+ /// Returns true if this is a multicast address (ff00::/8).
///
- /// Multicast addresses have the form ff00::/8, and this property is defined
- /// by RFC 3956.
+    /// This property is defined by [RFC 4291].
+    ///
+    /// [RFC 4291]: https://tools.ietf.org/html/rfc4291
#[stable(since = "1.7.0", feature = "ip_17")]
pub fn is_multicast(&self) -> bool {
(self.segments()[0] & 0xff00) == 0xff00
}
}
-/// The `Command` type acts as a process builder, providing fine-grained control
-/// over how a new process should be spawned. A default configuration can be
+/// A process builder, providing fine-grained control
+/// over how a new process should be spawned.
+///
+/// A default configuration can be
/// generated using `Command::new(program)`, where `program` gives a path to the
/// program to be executed. Additional builder methods allow the configuration
/// to be changed (for example, by adding arguments) prior to spawning:
pub const ERROR_INVALID_HANDLE: DWORD = 6;
pub const ERROR_NO_MORE_FILES: DWORD = 18;
pub const ERROR_HANDLE_EOF: DWORD = 38;
+pub const ERROR_FILE_EXISTS: DWORD = 80;
pub const ERROR_BROKEN_PIPE: DWORD = 109;
pub const ERROR_CALL_NOT_IMPLEMENTED: DWORD = 120;
pub const ERROR_INSUFFICIENT_BUFFER: DWORD = 122;
match errno as c::DWORD {
c::ERROR_ACCESS_DENIED => return ErrorKind::PermissionDenied,
c::ERROR_ALREADY_EXISTS => return ErrorKind::AlreadyExists,
+ c::ERROR_FILE_EXISTS => return ErrorKind::AlreadyExists,
c::ERROR_BROKEN_PIPE => return ErrorKind::BrokenPipe,
c::ERROR_FILE_NOT_FOUND => return ErrorKind::NotFound,
c::ERROR_PATH_NOT_FOUND => return ErrorKind::NotFound,
pub fn as_str(self) -> token::InternedString {
token::InternedString::new_from_name(self)
}
-
- pub fn unhygienize(self) -> Name {
- token::intern(&self.as_str())
- }
}
impl fmt::Debug for Name {
use fold::Folder;
use {ast, fold, attr};
use codemap::{Spanned, respan};
+use parse::token;
use ptr::P;
use util::small_vector::SmallVector;
-pub trait CfgFolder: fold::Folder {
- // Check if a node with the given attributes is in this configuration.
- fn in_cfg(&mut self, attrs: &[ast::Attribute]) -> bool;
-
- // Update a node before checking if it is in this configuration (used to implement `cfg_attr`).
- fn process_attrs<T: HasAttrs>(&mut self, node: T) -> T { node }
-
- // Visit attributes on expression and statements (but not attributes on items in blocks).
- fn visit_stmt_or_expr_attrs(&mut self, _attrs: &[ast::Attribute]) {}
-
- // Visit unremovable (non-optional) expressions -- c.f. `fold_expr` vs `fold_opt_expr`.
- fn visit_unremovable_expr(&mut self, _expr: &ast::Expr) {}
-
- fn configure<T: HasAttrs>(&mut self, node: T) -> Option<T> {
- let node = self.process_attrs(node);
- if self.in_cfg(node.attrs()) { Some(node) } else { None }
- }
-}
-
-/// A folder that strips out items that do not belong in the current
-/// configuration.
+/// A folder that strips out items that do not belong in the current configuration.
pub struct StripUnconfigured<'a> {
diag: CfgDiagReal<'a, 'a>,
+ should_test: bool,
config: &'a ast::CrateConfig,
}
impl<'a> StripUnconfigured<'a> {
pub fn new(config: &'a ast::CrateConfig,
+ should_test: bool,
diagnostic: &'a Handler,
feature_gated_cfgs: &'a mut Vec<GatedCfgAttr>)
-> Self {
StripUnconfigured {
config: config,
+ should_test: should_test,
diag: CfgDiagReal { diag: diagnostic, feature_gated_cfgs: feature_gated_cfgs },
}
}
+ fn configure<T: HasAttrs>(&mut self, node: T) -> Option<T> {
+ let node = self.process_cfg_attrs(node);
+ if self.in_cfg(node.attrs()) { Some(node) } else { None }
+ }
+
+ fn process_cfg_attrs<T: HasAttrs>(&mut self, node: T) -> T {
+ node.map_attrs(|attrs| {
+ attrs.into_iter().filter_map(|attr| self.process_cfg_attr(attr)).collect()
+ })
+ }
+
fn process_cfg_attr(&mut self, attr: ast::Attribute) -> Option<ast::Attribute> {
if !attr.check_name("cfg_attr") {
return Some(attr);
None
}
}
-}
-impl<'a> CfgFolder for StripUnconfigured<'a> {
- // Determine if an item should be translated in the current crate
- // configuration based on the item's attributes
+    // Determine if a node with the given attributes should be included in this configuration.
fn in_cfg(&mut self, attrs: &[ast::Attribute]) -> bool {
attrs.iter().all(|attr| {
+ // When not compiling with --test we should not compile the #[test] functions
+ if !self.should_test && is_test_or_bench(attr) {
+ return false;
+ }
+
let mis = match attr.node.value.node {
ast::MetaItemKind::List(_, ref mis) if is_cfg(&attr) => mis,
_ => return true
})
}
- fn process_attrs<T: HasAttrs>(&mut self, node: T) -> T {
- node.map_attrs(|attrs| {
- attrs.into_iter().filter_map(|attr| self.process_cfg_attr(attr)).collect()
- })
- }
-
+ // Visit attributes on expression and statements (but not attributes on items in blocks).
fn visit_stmt_or_expr_attrs(&mut self, attrs: &[ast::Attribute]) {
// flag the offending attributes
for attr in attrs.iter() {
}
}
+ // Visit unremovable (non-optional) expressions -- c.f. `fold_expr` vs `fold_opt_expr`.
fn visit_unremovable_expr(&mut self, expr: &ast::Expr) {
- if let Some(attr) = expr.attrs().iter().find(|a| is_cfg(a)) {
+ if let Some(attr) = expr.attrs().iter().find(|a| is_cfg(a) || is_test_or_bench(a)) {
let msg = "removing an expression is not supported in this position";
self.diag.diag.span_err(attr.span, msg);
}
// Support conditional compilation by transforming the AST, stripping out
// any items that do not belong in the current configuration
-pub fn strip_unconfigured_items(diagnostic: &Handler, krate: ast::Crate,
+pub fn strip_unconfigured_items(diagnostic: &Handler, krate: ast::Crate, should_test: bool,
feature_gated_cfgs: &mut Vec<GatedCfgAttr>)
-> ast::Crate
{
let config = &krate.config.clone();
- StripUnconfigured::new(config, diagnostic, feature_gated_cfgs).fold_crate(krate)
+ StripUnconfigured::new(config, should_test, diagnostic, feature_gated_cfgs).fold_crate(krate)
}
-impl<T: CfgFolder> fold::Folder for T {
+impl<'a> fold::Folder for StripUnconfigured<'a> {
fn fold_foreign_mod(&mut self, foreign_mod: ast::ForeignMod) -> ast::ForeignMod {
ast::ForeignMod {
abi: foreign_mod.abi,
// NB: This is intentionally not part of the fold_expr() function
// in order for fold_opt_expr() to be able to avoid this check
self.visit_unremovable_expr(&expr);
- let expr = self.process_attrs(expr);
+ let expr = self.process_cfg_attrs(expr);
fold_expr(self, expr)
}
self.configure(item).map(|item| fold::noop_fold_trait_item(item, self))
.unwrap_or(SmallVector::zero())
}
+
+ fn fold_interpolated(&mut self, nt: token::Nonterminal) -> token::Nonterminal {
+ // Don't configure interpolated AST (c.f. #34171).
+ // Interpolated AST will get configured once the surrounding tokens are parsed.
+ nt
+ }
}
-fn fold_expr<F: CfgFolder>(folder: &mut F, expr: P<ast::Expr>) -> P<ast::Expr> {
+fn fold_expr(folder: &mut StripUnconfigured, expr: P<ast::Expr>) -> P<ast::Expr> {
expr.map(|ast::Expr {id, span, node, attrs}| {
fold::noop_fold_expr(ast::Expr {
id: id,
attr.check_name("cfg")
}
+fn is_test_or_bench(attr: &ast::Attribute) -> bool {
+ attr.check_name("test") || attr.check_name("bench")
+}
+
pub trait CfgDiag {
fn emit_error<F>(&mut self, f: F) where F: FnMut(&Handler);
fn flag_gated<F>(&mut self, f: F) where F: FnMut(&mut Vec<GatedCfgAttr>);
}
pub fn backtrace(&self) -> ExpnId { self.backtrace }
- /// Original span that caused the current exapnsion to happen.
- pub fn original_span(&self) -> Span {
- let mut expn_id = self.backtrace;
- let mut call_site = None;
- loop {
- match self.codemap().with_expn_info(expn_id, |ei| ei.map(|ei| ei.call_site)) {
- None => break,
- Some(cs) => {
- call_site = Some(cs);
- expn_id = cs.expn_id;
- }
- }
- }
- call_site.expect("missing expansion backtrace")
- }
-
/// Returns span for the macro which originally caused the current expansion to happen.
///
/// Stops backtracing at include! boundary.
},
});
- // The span that we pass to the expanders we want to
- // be the root of the call stack. That's the most
- // relevant span and it's the actual invocation of
- // the macro.
- let mac_span = fld.cx.original_span();
-
let marked_tts = mark_tts(&tts[..], mark);
- Some(expandfun.expand(fld.cx, mac_span, &marked_tts))
+ Some(expandfun.expand(fld.cx, call_site, &marked_tts))
}
IdentTT(ref expander, tt_span, allow_internal_unstable) => {
fn strip_unconfigured(&mut self) -> StripUnconfigured {
StripUnconfigured::new(&self.cx.cfg,
+ self.cx.ecfg.should_test,
&self.cx.parse_sess.span_diagnostic,
self.cx.feature_gated_cfgs)
}
pub features: Option<&'feat Features>,
pub recursion_limit: usize,
pub trace_mac: bool,
+ pub should_test: bool, // If false, strip `#[test]` nodes
}
macro_rules! feature_tests {
features: None,
recursion_limit: 64,
trace_mac: false,
+ should_test: false,
}
}
name: String,
id_sp: Span) -> PResult<'a, (ast::ItemKind, Vec<ast::Attribute> )> {
let mut included_mod_stack = self.sess.included_mod_stack.borrow_mut();
- match included_mod_stack.iter().position(|p| *p == path) {
- Some(i) => {
- let mut err = String::from("circular modules: ");
- let len = included_mod_stack.len();
- for p in &included_mod_stack[i.. len] {
- err.push_str(&p.to_string_lossy());
- err.push_str(" -> ");
- }
- err.push_str(&path.to_string_lossy());
- return Err(self.span_fatal(id_sp, &err[..]));
- }
- None => ()
+ if let Some(i) = included_mod_stack.iter().position(|p| *p == path) {
+ let mut err = String::from("circular modules: ");
+ let len = included_mod_stack.len();
+            for p in &included_mod_stack[i..len] {
+ err.push_str(&p.to_string_lossy());
+ err.push_str(" -> ");
+ }
+ err.push_str(&path.to_string_lossy());
+ return Err(self.span_fatal(id_sp, &err[..]));
}
included_mod_stack.push(path.clone());
drop(included_mod_stack);
}
}
self.print_ident(path1.node)?;
- match *sub {
- Some(ref p) => {
- word(&mut self.s, "@")?;
- self.print_pat(&p)?;
- }
- None => ()
+ if let Some(ref p) = *sub {
+ word(&mut self.s, "@")?;
+ self.print_pat(&p)?;
}
}
PatKind::TupleStruct(ref path, ref elts, ddpos) => {
Some(cm) => cm,
_ => return Ok(())
};
- match self.next_comment() {
- Some(ref cmnt) => {
- if (*cmnt).style != comments::Trailing { return Ok(()) }
- let span_line = cm.lookup_char_pos(span.hi);
- let comment_line = cm.lookup_char_pos((*cmnt).pos);
- let mut next = (*cmnt).pos + BytePos(1);
- match next_pos { None => (), Some(p) => next = p }
- if span.hi < (*cmnt).pos && (*cmnt).pos < next &&
- span_line.line == comment_line.line {
- self.print_comment(cmnt)?;
- self.cur_cmnt_and_lit.cur_cmnt += 1;
- }
+ if let Some(ref cmnt) = self.next_comment() {
+ if (*cmnt).style != comments::Trailing { return Ok(()) }
+ let span_line = cm.lookup_char_pos(span.hi);
+ let comment_line = cm.lookup_char_pos((*cmnt).pos);
+ let mut next = (*cmnt).pos + BytePos(1);
+ if let Some(p) = next_pos {
+ next = p;
+ }
+ if span.hi < (*cmnt).pos && (*cmnt).pos < next &&
+ span_line.line == comment_line.line {
+ self.print_comment(cmnt)?;
+ self.cur_cmnt_and_lit.cur_cmnt += 1;
}
- _ => ()
}
Ok(())
}
if should_test {
generate_test_harness(sess, reexport_test_harness_main, krate, span_diagnostic)
} else {
- strip_test_functions(krate)
+ krate
}
}
return res;
}
-fn strip_test_functions(krate: ast::Crate) -> ast::Crate {
- // When not compiling with --test we should not compile the
- // #[test] functions
- struct StripTests;
- impl config::CfgFolder for StripTests {
- fn in_cfg(&mut self, attrs: &[ast::Attribute]) -> bool {
- !attr::contains_name(attrs, "test") && !attr::contains_name(attrs, "bench")
- }
- }
-
- StripTests.fold_crate(krate)
-}
-
/// Craft a span that will be ignored by the stability lint's
/// call to codemap's is_internal check.
/// The expanded code calls some unstable functions in the test crate.
pub fn intern(&self, val: T) -> Name {
let mut map = self.map.borrow_mut();
- match (*map).get(&val) {
- Some(&idx) => return idx,
- None => (),
+ if let Some(&idx) = (*map).get(&val) {
+ return idx;
}
let mut vect = self.vect.borrow_mut();
pub fn intern(&self, val: &str) -> Name {
let mut map = self.map.borrow_mut();
- match map.get(val) {
- Some(&idx) => return idx,
- None => (),
+ if let Some(&idx) = map.get(val) {
+ return idx;
}
let new_idx = Name(self.len() as u32);
}
pub fn walk_local<'v, V: Visitor<'v>>(visitor: &mut V, local: &'v Local) {
+ for attr in local.attrs.as_attr_slice() {
+ visitor.visit_attribute(attr);
+ }
visitor.visit_pat(&local.pat);
walk_list!(visitor, visit_ty, &local.ty);
walk_list!(visitor, visit_expr, &local.init);
}
pub fn walk_expr<'v, V: Visitor<'v>>(visitor: &mut V, expression: &'v Expr) {
+ for attr in expression.attrs.as_attr_slice() {
+ visitor.visit_attribute(attr);
+ }
match expression.node {
ExprKind::Box(ref subexpression) => {
visitor.visit_expr(subexpression)
#include "rustllvm.h"
#include "llvm/Object/Archive.h"
-
-#if LLVM_VERSION_MINOR >= 7
#include "llvm/Object/ArchiveWriter.h"
-#endif
using namespace llvm;
using namespace llvm::object;
~LLVMRustArchiveMember() {}
};
-#if LLVM_VERSION_MINOR >= 6
typedef OwningBinary<Archive> RustArchive;
-#define GET_ARCHIVE(a) ((a)->getBinary())
-#else
-typedef Archive RustArchive;
-#define GET_ARCHIVE(a) (a)
-#endif
extern "C" void*
LLVMRustOpenArchive(char *path) {
return nullptr;
}
-#if LLVM_VERSION_MINOR >= 6
ErrorOr<std::unique_ptr<Archive>> archive_or =
Archive::create(buf_or.get()->getMemBufferRef());
OwningBinary<Archive> *ret = new OwningBinary<Archive>(
std::move(archive_or.get()), std::move(buf_or.get()));
-#else
- std::error_code err;
- Archive *ret = new Archive(std::move(buf_or.get()), err);
- if (err) {
- LLVMRustSetLastError(err.message().c_str());
- return nullptr;
- }
-#endif
return ret;
}
extern "C" RustArchiveIterator*
LLVMRustArchiveIteratorNew(RustArchive *ra) {
- Archive *ar = GET_ARCHIVE(ra);
+ Archive *ar = ra->getBinary();
RustArchiveIterator *rai = new RustArchiveIterator();
rai->cur = ar->child_begin();
rai->end = ar->child_end();
extern "C" const char*
LLVMRustArchiveChildData(Archive::Child *child, size_t *size) {
StringRef buf;
-#if LLVM_VERSION_MINOR >= 7
ErrorOr<StringRef> buf_or_err = child->getBuffer();
if (buf_or_err.getError()) {
LLVMRustSetLastError(buf_or_err.getError().message().c_str());
return NULL;
}
buf = buf_or_err.get();
-#else
- buf = child->getBuffer();
-#endif
*size = buf.size();
return buf.data();
}
const LLVMRustArchiveMember **NewMembers,
bool WriteSymbtab,
Archive::Kind Kind) {
-#if LLVM_VERSION_MINOR >= 7
std::vector<NewArchiveIterator> Members;
for (size_t i = 0; i < NumMembers; i++) {
if (!pair.second)
return 0;
LLVMRustSetLastError(pair.second.message().c_str());
-#else
- LLVMRustSetLastError("writing archives not supported with this LLVM version");
-#endif
return -1;
}
RustJITMemoryManager *mm = new RustJITMemoryManager;
ExecutionEngine *ee =
- #if LLVM_VERSION_MINOR >= 6
EngineBuilder(std::unique_ptr<Module>(unwrap(mod)))
.setMCJITMemoryManager(std::unique_ptr<RustJITMemoryManager>(mm))
- #else
- EngineBuilder(unwrap(mod))
- .setMCJITMemoryManager(mm)
- #endif
.setEngineKind(EngineKind::JIT)
.setErrorStr(&error_str)
.setTargetOptions(options)
#include "llvm/Support/CBindingWrapping.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/Host.h"
-#if LLVM_VERSION_MINOR >= 7
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/TargetTransformInfo.h"
-#else
-#include "llvm/Target/TargetLibraryInfo.h"
-#endif
#include "llvm/Target/TargetMachine.h"
#include "llvm/Target/TargetSubtargetInfo.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
initializeVectorization(Registry);
initializeIPO(Registry);
initializeAnalysis(Registry);
-#if LLVM_VERSION_MINOR <= 7
+#if LLVM_VERSION_MINOR == 7
initializeIPA(Registry);
#endif
initializeTransformUtils(Registry);
LLVMPassManagerRef PMR,
LLVMModuleRef M) {
PassManagerBase *PM = unwrap(PMR);
-#if LLVM_VERSION_MINOR >= 7
PM->add(createTargetTransformInfoWrapperPass(
unwrap(TM)->getTargetIRAnalysis()));
-#else
-#if LLVM_VERSION_MINOR == 6
- PM->add(new DataLayoutPass());
-#else
- PM->add(new DataLayoutPass(unwrap(M)));
-#endif
- unwrap(TM)->addAnalysisPasses(*PM);
-#endif
}
extern "C" void
bool MergeFunctions,
bool SLPVectorize,
bool LoopVectorize) {
-#if LLVM_VERSION_MINOR >= 6
// Ignore mergefunc for now as enabling it causes crashes.
//unwrap(PMB)->MergeFunctions = MergeFunctions;
-#endif
unwrap(PMB)->SLPVectorize = SLPVectorize;
unwrap(PMB)->OptLevel = OptLevel;
unwrap(PMB)->LoopVectorize = LoopVectorize;
LLVMModuleRef M,
bool DisableSimplifyLibCalls) {
Triple TargetTriple(unwrap(M)->getTargetTriple());
-#if LLVM_VERSION_MINOR >= 7
TargetLibraryInfoImpl *TLI = new TargetLibraryInfoImpl(TargetTriple);
-#else
- TargetLibraryInfo *TLI = new TargetLibraryInfo(TargetTriple);
-#endif
if (DisableSimplifyLibCalls)
TLI->disableAllFunctions();
unwrap(PMB)->LibraryInfo = TLI;
LLVMModuleRef M,
bool DisableSimplifyLibCalls) {
Triple TargetTriple(unwrap(M)->getTargetTriple());
-#if LLVM_VERSION_MINOR >= 7
TargetLibraryInfoImpl TLII(TargetTriple);
if (DisableSimplifyLibCalls)
TLII.disableAllFunctions();
unwrap(PMB)->add(new TargetLibraryInfoWrapperPass(TLII));
-#else
- TargetLibraryInfo *TLI = new TargetLibraryInfo(TargetTriple);
- if (DisableSimplifyLibCalls)
- TLI->disableAllFunctions();
- unwrap(PMB)->add(TLI);
-#endif
}
// Unfortunately, the LLVM C API doesn't provide an easy way of iterating over
PassManager *PM = unwrap<PassManager>(PMR);
std::string ErrorInfo;
-#if LLVM_VERSION_MINOR >= 6
std::error_code EC;
raw_fd_ostream OS(path, EC, sys::fs::F_None);
if (EC)
ErrorInfo = EC.message();
-#else
- raw_fd_ostream OS(path, ErrorInfo, sys::fs::F_None);
-#endif
if (ErrorInfo != "") {
LLVMRustSetLastError(ErrorInfo.c_str());
return false;
}
-#if LLVM_VERSION_MINOR >= 7
unwrap(Target)->addPassesToEmitFile(*PM, OS, FileType, false);
-#else
- formatted_raw_ostream FOS(OS);
- unwrap(Target)->addPassesToEmitFile(*PM, FOS, FileType, false);
-#endif
PM->run(*unwrap(M));
// Apparently `addPassesToEmitFile` adds a pointer to our on-the-stack output
PassManager *PM = unwrap<PassManager>(PMR);
std::string ErrorInfo;
-#if LLVM_VERSION_MINOR >= 6
std::error_code EC;
raw_fd_ostream OS(path, EC, sys::fs::F_None);
if (EC)
ErrorInfo = EC.message();
-#else
- raw_fd_ostream OS(path, ErrorInfo, sys::fs::F_None);
-#endif
formatted_raw_ostream FOS(OS);
LLVMRustSetDataLayoutFromTargetMachine(LLVMModuleRef Module,
LLVMTargetMachineRef TMR) {
TargetMachine *Target = unwrap(TMR);
-#if LLVM_VERSION_MINOR >= 7
unwrap(Module)->setDataLayout(Target->createDataLayout());
-#elif LLVM_VERSION_MINOR >= 6
- if (const DataLayout *DL = Target->getSubtargetImpl()->getDataLayout())
- unwrap(Module)->setDataLayout(DL);
-#else
- if (const DataLayout *DL = Target->getDataLayout())
- unwrap(Module)->setDataLayout(DL);
-#endif
}
extern "C" LLVMTargetDataRef
LLVMRustGetModuleDataLayout(LLVMModuleRef M) {
-#if LLVM_VERSION_MINOR >= 7
return wrap(&unwrap(M)->getDataLayout());
-#else
- return wrap(unwrap(M)->getDataLayout());
-#endif
}
typedef DIBuilder* DIBuilderRef;
-#if LLVM_VERSION_MINOR >= 6
typedef struct LLVMOpaqueMetadata *LLVMMetadataRef;
namespace llvm {
return reinterpret_cast<Metadata**>(Vals);
}
}
-#else
-typedef LLVMValueRef LLVMMetadataRef;
-#endif
template<typename DIT>
DIT* unwrapDIptr(LLVMMetadataRef ref) {
return (DIT*) (ref ? unwrap<MDNode>(ref) : NULL);
}
-#if LLVM_VERSION_MINOR <= 6
-template<typename DIT>
-DIT unwrapDI(LLVMMetadataRef ref) {
- return DIT(ref ? unwrap<MDNode>(ref) : NULL);
-}
-#else
#define DIDescriptor DIScope
#define DIArray DINodeArray
#define unwrapDI unwrapDIptr
-#endif
-
-#if LLVM_VERSION_MINOR <= 5
-#define DISubroutineType DICompositeType
-#endif
extern "C" uint32_t LLVMRustDebugMetadataVersion() {
return DEBUG_METADATA_VERSION;
LLVMMetadataRef File,
LLVMMetadataRef ParameterTypes) {
return wrap(Builder->createSubroutineType(
-#if LLVM_VERSION_MINOR <= 7
+#if LLVM_VERSION_MINOR == 7
unwrapDI<DIFile>(File),
#endif
-#if LLVM_VERSION_MINOR >= 7
DITypeRefArray(unwrap<MDTuple>(ParameterTypes))));
-#elif LLVM_VERSION_MINOR >= 6
- unwrapDI<DITypeArray>(ParameterTypes)));
-#else
- unwrapDI<DIArray>(ParameterTypes)));
-#endif
}
extern "C" LLVMMetadataRef LLVMDIBuilderCreateFunction(
AlignInBits,
Flags,
unwrapDI<DIType>(DerivedFrom),
-#if LLVM_VERSION_MINOR >= 7
DINodeArray(unwrapDI<MDTuple>(Elements)),
-#else
- unwrapDI<DIArray>(Elements),
-#endif
RunTimeLang,
unwrapDI<DIType>(VTableHolder),
UniqueId
return wrap(Builder->createLexicalBlock(
unwrapDI<DIDescriptor>(Scope),
unwrapDI<DIFile>(File), Line, Col
-#if LLVM_VERSION_MINOR == 5
- , 0
-#endif
));
}
bool isLocalToUnit,
LLVMValueRef Val,
LLVMMetadataRef Decl = NULL) {
-#if LLVM_VERSION_MINOR >= 6
return wrap(Builder->createGlobalVariable(unwrapDI<DIDescriptor>(Context),
-#else
- return wrap(Builder->createStaticVariable(unwrapDI<DIDescriptor>(Context),
-#endif
Name,
LinkageName,
unwrapDI<DIFile>(File),
int64_t* AddrOps,
unsigned AddrOpsCount,
unsigned ArgNo) {
-#if LLVM_VERSION_MINOR == 5
- if (AddrOpsCount > 0) {
- SmallVector<llvm::Value *, 16> addr_ops;
- llvm::Type *Int64Ty = Type::getInt64Ty(unwrap<MDNode>(Scope)->getContext());
- for (unsigned i = 0; i < AddrOpsCount; ++i)
- addr_ops.push_back(ConstantInt::get(Int64Ty, AddrOps[i]));
-
- return wrap(Builder->createComplexVariable(
- Tag,
- unwrapDI<DIDescriptor>(Scope),
- Name,
- unwrapDI<DIFile>(File),
- LineNo,
- unwrapDI<DIType>(Ty),
- addr_ops,
- ArgNo
- ));
- }
-#endif
#if LLVM_VERSION_MINOR >= 8
if (Tag == 0x100) { // DW_TAG_auto_variable
return wrap(Builder->createAutoVariable(
LLVMMetadataRef Subscripts) {
return wrap(Builder->createArrayType(Size, AlignInBits,
unwrapDI<DIType>(Ty),
-#if LLVM_VERSION_MINOR >= 7
DINodeArray(unwrapDI<MDTuple>(Subscripts))
-#else
- unwrapDI<DIArray>(Subscripts)
-#endif
));
}
LLVMMetadataRef Subscripts) {
return wrap(Builder->createVectorType(Size, AlignInBits,
unwrapDI<DIType>(Ty),
-#if LLVM_VERSION_MINOR >= 7
DINodeArray(unwrapDI<MDTuple>(Subscripts))
-#else
- unwrapDI<DIArray>(Subscripts)
-#endif
));
}
DIBuilderRef Builder,
LLVMMetadataRef* Ptr,
unsigned Count) {
-#if LLVM_VERSION_MINOR >= 7
Metadata **DataValue = unwrap(Ptr);
return wrap(Builder->getOrCreateArray(
ArrayRef<Metadata*>(DataValue, Count)).get());
-#else
- return wrap(Builder->getOrCreateArray(
-#if LLVM_VERSION_MINOR >= 6
- ArrayRef<Metadata*>(unwrap(Ptr), Count)));
-#else
- ArrayRef<Value*>(reinterpret_cast<Value**>(Ptr), Count)));
-#endif
-#endif
}
extern "C" LLVMValueRef LLVMDIBuilderInsertDeclareAtEnd(
LLVMBasicBlockRef InsertAtEnd) {
return wrap(Builder->insertDeclare(
unwrap(Val),
-#if LLVM_VERSION_MINOR >= 7
unwrap<DILocalVariable>(VarInfo),
-#else
- unwrapDI<DIVariable>(VarInfo),
-#endif
-#if LLVM_VERSION_MINOR >= 6
Builder->createExpression(
llvm::ArrayRef<int64_t>(AddrOps, AddrOpsCount)),
-#endif
-#if LLVM_VERSION_MINOR >= 7
DebugLoc(cast<MDNode>(unwrap<MetadataAsValue>(DL)->getMetadata())),
-#endif
unwrap(InsertAtEnd)));
}
unsigned AddrOpsCount,
LLVMValueRef DL,
LLVMValueRef InsertBefore) {
-#if LLVM_VERSION_MINOR >= 6
-#endif
return wrap(Builder->insertDeclare(
unwrap(Val),
-#if LLVM_VERSION_MINOR >= 7
unwrap<DILocalVariable>(VarInfo),
-#else
- unwrapDI<DIVariable>(VarInfo),
-#endif
-#if LLVM_VERSION_MINOR >= 6
Builder->createExpression(
llvm::ArrayRef<int64_t>(AddrOps, AddrOpsCount)),
-#endif
-#if LLVM_VERSION_MINOR >= 7
DebugLoc(cast<MDNode>(unwrap<MetadataAsValue>(DL)->getMetadata())),
-#endif
unwrap<Instruction>(InsertBefore)));
}
LineNumber,
SizeInBits,
AlignInBits,
-#if LLVM_VERSION_MINOR >= 7
DINodeArray(unwrapDI<MDTuple>(Elements)),
-#else
- unwrapDI<DIArray>(Elements),
-#endif
unwrapDI<DIType>(ClassType)));
}
SizeInBits,
AlignInBits,
Flags,
-#if LLVM_VERSION_MINOR >= 7
DINodeArray(unwrapDI<MDTuple>(Elements)),
-#else
- unwrapDI<DIArray>(Elements),
-#endif
RunTimeLang,
UniqueId
));
unwrapDI<DIDescriptor>(Scope),
Name,
unwrapDI<DIType>(Ty)
-#if LLVM_VERSION_MINOR <= 6
- ,
- unwrapDI<MDNode*>(File),
- LineNo,
- ColumnNo
-#endif
));
}
LLVMMetadataRef CompositeType,
LLVMMetadataRef TypeArray)
{
-#if LLVM_VERSION_MINOR >= 7
DICompositeType *tmp = unwrapDI<DICompositeType>(CompositeType);
Builder->replaceArrays(tmp, DINodeArray(unwrap<MDTuple>(TypeArray)));
-#elif LLVM_VERSION_MINOR >= 6
- DICompositeType tmp = unwrapDI<DICompositeType>(CompositeType);
- Builder->replaceArrays(tmp, unwrapDI<DIArray>(TypeArray));
-#else
- unwrapDI<DICompositeType>(CompositeType).setTypeArray(unwrapDI<DIArray>(TypeArray));
-#endif
}
extern "C" LLVMValueRef LLVMDIBuilderCreateDebugLocation(
unwrapDIptr<MDNode>(Scope),
unwrapDIptr<MDNode>(InlinedAt));
-#if LLVM_VERSION_MINOR >= 6
- return wrap(MetadataAsValue::get(context, debug_loc.getAsMDNode(
-#if LLVM_VERSION_MINOR <= 6
- context
-#endif
- )));
-#else
- return wrap(debug_loc.getAsMDNode(context));
-#endif
+ return wrap(MetadataAsValue::get(context, debug_loc.getAsMDNode()));
}
extern "C" void LLVMWriteTypeToString(LLVMTypeRef Type, RustStringRef str) {
extern "C" bool
LLVMRustLinkInExternalBitcode(LLVMModuleRef dst, char *bc, size_t len) {
Module *Dst = unwrap(dst);
-#if LLVM_VERSION_MINOR >= 6
std::unique_ptr<MemoryBuffer> buf = MemoryBuffer::getMemBufferCopy(StringRef(bc, len));
-#if LLVM_VERSION_MINOR >= 7
ErrorOr<std::unique_ptr<Module>> Src =
llvm::getLazyBitcodeModule(std::move(buf), Dst->getContext());
-#else
- ErrorOr<Module *> Src = llvm::getLazyBitcodeModule(std::move(buf), Dst->getContext());
-#endif
-#else
- MemoryBuffer* buf = MemoryBuffer::getMemBufferCopy(StringRef(bc, len));
- ErrorOr<Module *> Src = llvm::getLazyBitcodeModule(buf, Dst->getContext());
-#endif
if (!Src) {
LLVMRustSetLastError(Src.getError().message().c_str());
-#if LLVM_VERSION_MINOR == 5
- delete buf;
-#endif
return false;
}
std::string Err;
-#if LLVM_VERSION_MINOR >= 6
raw_string_ostream Stream(Err);
DiagnosticPrinterRawOStream DP(Stream);
#if LLVM_VERSION_MINOR >= 8
if (Linker::linkModules(*Dst, std::move(Src.get()))) {
-#elif LLVM_VERSION_MINOR >= 7
- if (Linker::LinkModules(Dst, Src->get(), [&](const DiagnosticInfo &DI) { DI.print(DP); })) {
-#else
- if (Linker::LinkModules(Dst, *Src, [&](const DiagnosticInfo &DI) { DI.print(DP); })) {
-#endif
#else
- if (Linker::LinkModules(Dst, *Src, Linker::DestroySource, &Err)) {
+ if (Linker::LinkModules(Dst, Src->get(), [&](const DiagnosticInfo &DI) { DI.print(DP); })) {
#endif
LLVMRustSetLastError(Err.c_str());
return false;
RustStringRef str)
{
raw_rust_string_ostream os(str);
-#if LLVM_VERSION_MINOR >= 7
unwrap(dl)->print(os);
-#else
- unwrap(dl)->print(*unwrap(C), os);
-#endif
}
DEFINE_SIMPLE_CONVERSION_FUNCTIONS(SMDiagnostic, LLVMSMDiagnosticRef)
//~ TRANS_ITEM fn generic_impl::id[0]<generic_impl::Struct[0]<&str>>
let _ = (Struct::new(Struct::new("str")).f)(Struct::new("str"));
}
-
-//~ TRANS_ITEM drop-glue i8
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -C no-prepopulate-passes
+
+#![crate_type = "lib"]
+
+#[no_mangle]
+pub struct F32(f32);
+
+// CHECK: define float @add_newtype_f32(float, float)
+#[inline(never)]
+#[no_mangle]
+pub fn add_newtype_f32(a: F32, b: F32) -> F32 {
+ F32(a.0 + b.0)
+}
+
+#[no_mangle]
+pub struct F64(f64);
+
+// CHECK: define double @add_newtype_f64(double, double)
+#[inline(never)]
+#[no_mangle]
+pub fn add_newtype_f64(a: F64, b: F64) -> F64 {
+ F64(a.0 + b.0)
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -C no-prepopulate-passes
+
+struct Foo;
+
+impl Foo {
+// CHECK: define internal x86_stdcallcc void @{{.*}}foo{{.*}}()
+ #[inline(never)]
+ pub extern "stdcall" fn foo<T>() {
+ }
+}
+
+fn main() {
+ Foo::foo::<Foo>();
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// error-pattern: requires at least a format string argument
+// error-pattern: bad-format-args.rs:19:5: 19:15 note: in this expansion
+
+// error-pattern: expected token: `,`
+// error-pattern: bad-format-args.rs:20:5: 20:19 note: in this expansion
+// error-pattern: bad-format-args.rs:21:5: 21:22 note: in this expansion
+
+fn main() {
+ format!();
+ format!("" 1);
+ format!("", 1 1);
+}
//~^ ERROR removing an expression is not supported in this position
let _ = [1, 2, 3][#[cfg(unset)] 1];
//~^ ERROR removing an expression is not supported in this position
+ let _ = #[test] ();
+ //~^ ERROR removing an expression is not supported in this position
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+#![feature(stmt_expr_attributes)]
+
#[foo] //~ ERROR The attribute `foo`
fn main() {
-
+ #[foo] //~ ERROR The attribute `foo`
+ let x = ();
+ #[foo] //~ ERROR The attribute `foo`
+ x
}
format!("foo } bar"); //~ ERROR: unmatched `}` found
format!("foo }"); //~ ERROR: unmatched `}` found
-
- format!(); //~ ERROR: requires at least a format string argument
- format!("" 1); //~ ERROR: expected token: `,`
- format!("", 1 1); //~ ERROR: expected token: `,`
}
fn main() {
print_x(X); //~error this function takes 2 parameters but 1 parameter was supplied
+ //~^ NOTE the following parameter types were expected: &Foo<Item=bool>, &str
}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+enum Sexpression {
+ Num(()),
+ Cons(&'static mut Sexpression)
+}
+
+fn causes_ice(mut l: &mut Sexpression) {
+ loop { match l {
+ &mut Sexpression::Num(ref mut n) => {},
+ &mut Sexpression::Cons(ref mut expr) => { //~ ERROR cannot borrow `l.0`
+ //~| ERROR cannot borrow `l.0`
+ l = &mut **expr; //~ ERROR cannot assign to `l`
+ }
+ }}
+}
+
+fn main() {
+}
needlesArr.iter().fold(|x, y| {
});
//~^^ ERROR this function takes 2 parameters but 1 parameter was supplied
+ //~^^^ NOTE the following parameter types were expected
//
// the first error is, um, non-ideal.
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ match 0 {
+ aaa::bbb(_) => ()
+ //~^ ERROR failed to resolve. Use of undeclared type or module `aaa`
+ };
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(rustc_attrs)]
+
+macro_rules! null { ($i:tt) => {} }
+macro_rules! apply_null {
+ ($i:item) => { null! { $i } }
+}
+
+#[rustc_error]
+fn main() { //~ ERROR compilation successful
+ apply_null!(#[cfg(all())] fn f() {});
+}
fn foo(a: usize) {}
fn main() { foo(5, 6) } //~ ERROR this function takes 1 parameter but 2 parameters were supplied
+//~^ NOTE the following parameter type was expected
}
macro_rules! myprintln {
- ($fmt:expr) => (myprint!(concat!($fmt, "\n"))); //~ NOTE in this expansion of myprint!
- //~^ NOTE in this expansion of concat!
+ ($fmt:expr) => (myprint!(concat!($fmt, "\n"))); //~ ERROR invalid reference to argument `0`
+ //~| NOTE in this expansion of concat!
+ //~| NOTE in this expansion of myprint!
}
fn main() {
- myprintln!("{}"); //~ ERROR invalid reference to argument `0`
- //~^ NOTE in this expansion of
+ myprintln!("{}"); //~ NOTE in this expansion of
}
foo::blah!(); //~ ERROR
- format!(); //~ ERROR
format!(invalid); //~ ERROR
include!(invalid); //~ ERROR
let x = Foo;
x.zero(0) //~ ERROR this function takes 0 parameters but 1 parameter was supplied
.one() //~ ERROR this function takes 1 parameter but 0 parameters were supplied
+ //~^ NOTE the following parameter type was expected
.two(0); //~ ERROR this function takes 2 parameters but 1 parameter was supplied
+ //~^ NOTE the following parameter types were expected
let y = Foo;
y.zero()
.take() //~ ERROR no method named `take` found for type `Foo` in the current scope
+ //~^ NOTE the method `take` exists but the following trait bounds were not satisfied
.one(0);
}
fn main() {
foo(1, 2, 3);
//~^ ERROR this function takes 4 parameters but 3
+ //~^^ NOTE the following parameter types were expected
}
y: 3,
};
let ans = s("what"); //~ ERROR mismatched types
- let ans = s(); //~ ERROR this function takes 1 parameter but 0 parameters were supplied
+ //~^ NOTE expected isize, found &-ptr
+ //~| NOTE expected type
+ //~| NOTE found type
+ let ans = s();
+ //~^ ERROR this function takes 1 parameter but 0 parameters were supplied
+ //~| NOTE the following parameter type was expected
let ans = s("burma", "shave");
//~^ ERROR this function takes 1 parameter but 2 parameters were supplied
+ //~| NOTE the following parameter type was expected
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(inclusive_range)]
+
+use std::ops::*;
+
+// FIXME #34229 duplicated errors
+#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)]
+struct AllTheRanges {
+ a: Range<usize>,
+ //~^ ERROR PartialOrd
+ //~^^ ERROR PartialOrd
+ //~^^^ ERROR Ord
+ //~^^^^ ERROR binary operation
+ //~^^^^^ ERROR binary operation
+ //~^^^^^^ ERROR binary operation
+ //~^^^^^^^ ERROR binary operation
+ //~^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^^ ERROR binary operation
+ b: RangeTo<usize>,
+ //~^ ERROR PartialOrd
+ //~^^ ERROR PartialOrd
+ //~^^^ ERROR Ord
+ //~^^^^ ERROR binary operation
+ //~^^^^^ ERROR binary operation
+ //~^^^^^^ ERROR binary operation
+ //~^^^^^^^ ERROR binary operation
+ //~^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^^ ERROR binary operation
+ c: RangeFrom<usize>,
+ //~^ ERROR PartialOrd
+ //~^^ ERROR PartialOrd
+ //~^^^ ERROR Ord
+ //~^^^^ ERROR binary operation
+ //~^^^^^ ERROR binary operation
+ //~^^^^^^ ERROR binary operation
+ //~^^^^^^^ ERROR binary operation
+ //~^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^^ ERROR binary operation
+ d: RangeFull,
+ //~^ ERROR PartialOrd
+ //~^^ ERROR PartialOrd
+ //~^^^ ERROR Ord
+ //~^^^^ ERROR binary operation
+ //~^^^^^ ERROR binary operation
+ //~^^^^^^ ERROR binary operation
+ //~^^^^^^^ ERROR binary operation
+ //~^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^^ ERROR binary operation
+ e: RangeInclusive<usize>,
+ //~^ ERROR PartialOrd
+ //~^^ ERROR PartialOrd
+ //~^^^ ERROR Ord
+ //~^^^^ ERROR binary operation
+ //~^^^^^ ERROR binary operation
+ //~^^^^^^ ERROR binary operation
+ //~^^^^^^^ ERROR binary operation
+ //~^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^^ ERROR binary operation
+ f: RangeToInclusive<usize>,
+ //~^ ERROR PartialOrd
+ //~^^ ERROR PartialOrd
+ //~^^^ ERROR Ord
+ //~^^^^ ERROR binary operation
+ //~^^^^^ ERROR binary operation
+ //~^^^^^^ ERROR binary operation
+ //~^^^^^^^ ERROR binary operation
+ //~^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^ ERROR binary operation
+ //~^^^^^^^^^^^ ERROR binary operation
+}
+
+fn main() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use std::ops::*;
+
+#[derive(Copy, Clone)] //~ ERROR Copy
+struct R(Range<usize>);
+
+fn main() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use std::ops::*;
+
+#[derive(Copy, Clone)] //~ ERROR Copy
+struct R(RangeFrom<usize>);
+
+fn main() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(rustc_attrs)]
+
+use std::ops::*;
+
+#[derive(Copy, Clone)]
+struct R(RangeTo<usize>);
+
+#[rustc_error]
+fn main() {} //~ ERROR success
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(rustc_attrs)]
+
+use std::ops::*;
+
+#[derive(Copy, Clone)]
+struct R(RangeFull);
+
+#[rustc_error]
+fn main() {} //~ ERROR success
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(inclusive_range)]
+
+use std::ops::*;
+
+#[derive(Copy, Clone)] //~ ERROR Copy
+struct R(RangeInclusive<usize>);
+
+fn main() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(rustc_attrs, inclusive_range)]
+
+use std::ops::*;
+
+#[derive(Copy, Clone)]
+struct R(RangeToInclusive<usize>);
+
+#[rustc_error]
+fn main() {} //~ ERROR success
// of the below being caught.
macro_rules! expando {
- ($x: ident) => { trace_macros!($x) }
+ ($x: ident) => { trace_macros!($x) } //~ ERROR `trace_macros` is not stable
}
- expando!(true); //~ ERROR `trace_macros` is not stable
+ expando!(true); //~ NOTE in this expansion
+ //~^ NOTE in this expansion
}
fn main() {
unsafe {
foo(); //~ ERROR: this function takes at least 2 parameters but 0 parameters were supplied
+ //~^ NOTE the following parameter types were expected
foo(1); //~ ERROR: this function takes at least 2 parameters but 1 parameter was supplied
+ //~^ NOTE the following parameter types were expected
let x: unsafe extern "C" fn(f: isize, x: u8) = foo;
//~^ ERROR: mismatched types
#![omit_gdb_pretty_printer_section]
fn immediate_args(a: isize, b: bool, c: f64) {
- println!("") // #break
+ zzz(); // #break
}
struct BigStruct {
}
fn non_immediate_args(a: BigStruct, b: BigStruct) {
- println!("") // #break
+ zzz(); // #break
}
fn binding(a: i64, b: u64, c: f64) {
}
fn function_call(x: u64, y: u64, z: f64) {
- println!("Hi!") // #break
+ zzz(); // #break
}
fn identifier(x: u64, y: u64, z: f64) -> u64 {
while_expr(40, 41, 42);
loop_expr(43, 44, 45);
}
+
+fn zzz() { () }
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+// compile-flags: -g
+
+pub struct Dst {
+ pub a: (),
+ pub b: (),
+ pub data: [u8],
+}
+
+pub unsafe fn borrow(bytes: &[u8]) -> &Dst {
+ let dst: &Dst = std::mem::transmute((bytes.as_ptr(), bytes.len()));
+ dst
+}
+
+fn main() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// aux-build:rustdoc-hidden.rs
+// build-aux-docs
+// ignore-cross-compile
+
+extern crate rustdoc_hidden;
+
+// @has hidden_use/index.html
+// @!has - 'rustdoc_hidden'
+// @!has - 'Bar'
+// @!has hidden_use/struct.Bar.html
+#[doc(hidden)]
+pub use rustdoc_hidden::Bar;
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+mod private {
+ pub struct Foo {}
+}
+
+// @has hidden_use/index.html
+// @!has - 'private'
+// @!has - 'Foo'
+// @!has hidden_use/struct.Foo.html
+#[doc(hidden)]
+pub use private::Foo;
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_name = "foo"]
+
+// @!has 'foo/sys/index.html'
+// @!has 'foo/sys/sidebar-items.js'
+#[doc(hidden)]
+pub mod sys {
+ extern "C" {
+ // @!has 'foo/sys/fn.foo.html'
+ #[doc(hidden)]
+ pub fn foo();
+ }
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_name = "foo"]
+
+mod hidden {
+ // @has foo/hidden/struct.Foo.html
+ // @has - '//p/a' '../../foo/struct.FooBar.html'
+ pub struct Foo {}
+
+ // @has foo/hidden/bar/index.html
+ // @has - '//p/a' '../../foo/baz/index.html'
+ pub mod bar {
+ // @has foo/hidden/bar/struct.Thing.html
+ // @has - '//p/a' '../../foo/baz/struct.Thing.html'
+ pub struct Thing {}
+ }
+}
+
+// @has foo/struct.FooBar.html
+pub use hidden::Foo as FooBar;
+
+// @has foo/baz/index.html
+// @has foo/baz/struct.Thing.html
+pub use hidden::bar as baz;
lock: Option<&'static str>,
}
-const TEST_REPOS: &'static [Test] = &[
- Test {
- name: "cargo",
- repo: "https://github.com/rust-lang/cargo",
- sha: "7d79da08238e3d47e0bc4406155bdcc45ccb8c82",
- lock: None,
- },
- Test {
- name: "iron",
- repo: "https://github.com/iron/iron",
- sha: "16c858ec2901e2992fe5e529780f59fa8ed12903",
- lock: Some(include_str!("lockfiles/iron-Cargo.lock")),
- },
-];
+const TEST_REPOS: &'static [Test] = &[Test {
+ name: "cargo",
+ repo: "https://github.com/rust-lang/cargo",
+ sha: "7d79da08238e3d47e0bc4406155bdcc45ccb8c82",
+ lock: None,
+ },
+ Test {
+ name: "iron",
+ repo: "https://github.com/iron/iron",
+ sha: "16c858ec2901e2992fe5e529780f59fa8ed12903",
+ lock: Some(include_str!("lockfiles/iron-Cargo.lock")),
+ }];
fn main() {
println!("testing {}", test.repo);
let dir = clone_repo(test, out_dir);
if let Some(lockfile) = test.lock {
- File::create(&dir.join("Cargo.lock")).expect("")
- .write_all(lockfile.as_bytes()).expect("");
+ File::create(&dir.join("Cargo.lock"))
+ .expect("")
+ .write_all(lockfile.as_bytes())
+ .expect("");
}
if !run_cargo_test(cargo, &dir) {
panic!("tests failed for {}", test.repo);
if !out_dir.join(".git").is_dir() {
let status = Command::new("git")
- .arg("init")
- .arg(&out_dir)
- .status()
- .expect("");
+ .arg("init")
+ .arg(&out_dir)
+ .status()
+ .expect("");
assert!(status.success());
}
for depth in &[0, 1, 10, 100, 1000, 100000] {
if *depth > 0 {
let status = Command::new("git")
- .arg("fetch")
- .arg(test.repo)
- .arg("master")
- .arg(&format!("--depth={}", depth))
- .current_dir(&out_dir)
- .status()
- .expect("");
+ .arg("fetch")
+ .arg(test.repo)
+ .arg("master")
+ .arg(&format!("--depth={}", depth))
+ .current_dir(&out_dir)
+ .status()
+ .expect("");
assert!(status.success());
}
let status = Command::new("git")
- .arg("reset")
- .arg(test.sha)
- .arg("--hard")
- .current_dir(&out_dir)
- .status()
- .expect("");
+ .arg("reset")
+ .arg(test.sha)
+ .arg("--hard")
+ .current_dir(&out_dir)
+ .status()
+ .expect("");
if status.success() {
found = true;
break
}
}
if !found {
panic!("unable to find commit {}", test.sha)
}
let status = Command::new("git")
- .arg("clean")
- .arg("-fdx")
- .current_dir(&out_dir)
- .status()
- .unwrap();
+ .arg("clean")
+ .arg("-fdx")
+ .current_dir(&out_dir)
+ .status()
+ .unwrap();
assert!(status.success());
out_dir
type Cache = HashMap<PathBuf, FileEntry>;
impl FileEntry {
- fn parse_ids(&mut self,
- file: &Path,
- contents: &str,
- errors: &mut bool)
-{
+ fn parse_ids(&mut self, file: &Path, contents: &str, errors: &mut bool) {
if self.ids.is_empty() {
with_attrs_in_source(contents, " id", |fragment, i| {
let frag = fragment.trim_left_matches("#").to_owned();
if !self.ids.insert(frag) {
*errors = true;
- println!("{}:{}: id is not unique: `{}`",
- file.display(), i, fragment);
+ println!("{}:{}: id is not unique: `{}`", file.display(), i, fragment);
}
});
}
}
}
-fn walk(cache: &mut Cache,
- root: &Path,
- dir: &Path,
- url: &mut Url,
- errors: &mut bool)
-{
+fn walk(cache: &mut Cache, root: &Path, dir: &Path, url: &mut Url, errors: &mut bool) {
for entry in t!(dir.read_dir()).map(|e| t!(e)) {
let path = entry.path();
let kind = t!(entry.file_type());
root: &Path,
file: &Path,
base: &Url,
- errors: &mut bool) -> Option<PathBuf>
-{
+ errors: &mut bool)
+ -> Option<PathBuf> {
// ignore js files as they are not as prone to errors as the rest of the
// documentation and they otherwise bring up false positives.
if file.extension().and_then(|s| s.to_str()) == Some("js") {
Err(_) => return None,
};
{
- cache.get_mut(&pretty_file).unwrap()
- .parse_ids(&pretty_file, &contents, errors);
+ cache.get_mut(&pretty_file)
+ .unwrap()
+ .parse_ids(&pretty_file, &contents, errors);
}
// Search for anything that's the regex 'href[ ]*=[ ]*".*?"'
// the docs offline so it's best to avoid them.
*errors = true;
let pretty_path = path.strip_prefix(root).unwrap_or(&path);
- println!("{}:{}: directory link - {}", pretty_file.display(),
- i + 1, pretty_path.display());
+ println!("{}:{}: directory link - {}",
+ pretty_file.display(),
+ i + 1,
+ pretty_path.display());
return;
}
let res = load_file(cache, root, path.clone(), FromRedirect(false));
Err(LoadError::IOError(err)) => panic!(format!("{}", err)),
Err(LoadError::BrokenRedirect(target, _)) => {
print!("{}:{}: broken redirect to {}",
- pretty_file.display(), i + 1, target.display());
+ pretty_file.display(),
+ i + 1,
+ target.display());
return;
}
Err(LoadError::IsRedirect) => unreachable!(),
if !entry.ids.contains(fragment) {
*errors = true;
print!("{}:{}: broken link fragment ",
- pretty_file.display(), i + 1);
- println!("`#{}` pointing to `{}`",
- fragment, pretty_path.display());
+ pretty_file.display(),
+ i + 1);
+ println!("`#{}` pointing to `{}`", fragment, pretty_path.display());
};
}
} else {
fn load_file(cache: &mut Cache,
root: &Path,
file: PathBuf,
- redirect: Redirect) -> Result<(PathBuf, String), LoadError> {
+ redirect: Redirect)
+ -> Result<(PathBuf, String), LoadError> {
let mut contents = String::new();
let pretty_file = PathBuf::from(file.strip_prefix(root).unwrap_or(&file));
Entry::Occupied(entry) => {
contents = entry.get().source.clone();
None
- },
+ }
Entry::Vacant(entry) => {
let mut fp = try!(File::open(file.clone()).map_err(|err| {
if let FromRedirect(true) = redirect {
});
}
maybe
- },
+ }
};
let base = Url::from_file_path(&file).unwrap();
let mut parser = UrlParser::new();
let path = PathBuf::from(redirect_file);
load_file(cache, root, path, FromRedirect(true))
}
- None => Ok((pretty_file, contents))
+ None => Ok((pretty_file, contents)),
}
}
}
fn url_to_file_path(parser: &UrlParser, url: &str) -> Option<(Url, PathBuf)> {
- parser.parse(url).ok().and_then(|parsed_url| {
- parsed_url.to_file_path().ok().map(|f| (parsed_url, f))
- })
+ parser.parse(url)
+ .ok()
+ .and_then(|parsed_url| parsed_url.to_file_path().ok().map(|f| (parsed_url, f)))
}
-fn with_attrs_in_source<F: FnMut(&str, usize)>(contents: &str,
- attr: &str,
- mut f: F)
-{
+fn with_attrs_in_source<F: FnMut(&str, usize)>(contents: &str, attr: &str, mut f: F) {
for (i, mut line) in contents.lines().enumerate() {
while let Some(j) = line.find(attr) {
- let rest = &line[j + attr.len() ..];
+ let rest = &line[j + attr.len()..];
line = rest;
let pos_equals = match rest.find("=") {
Some(i) => i,
None => continue,
};
if rest[..pos_equals].trim_left_matches(" ") != "" {
- continue
+ continue;
}
let rest = &rest[pos_equals + 1..];
let quote_delim = rest.as_bytes()[pos_quote] as char;
if rest[..pos_quote].trim_left_matches(" ") != "" {
- continue
+ continue;
}
let rest = &rest[pos_quote + 1..];
let url = match rest.find(quote_delim) {